Staff Use of AI
Protecting Confidential Information
The single most important rule governing staff use of AI is the protection of confidential data.
Employees are prohibited from entering, uploading or otherwise sharing any student PII (personally identifiable information), employee PII or other confidential, proprietary or copyrighted district information into any public, non-approved generative AI tool. This applies to all forms of sensitive data, including but not limited to:
- Student and staff names, ID numbers, addresses, email addresses, medical information, demographic information, date of birth, financial information and login credentials.
- Academic records, grades and assessment results.
- Individualized Education Program (IEP) details, 504 plans or any information related to student disabilities or health conditions.
- Disciplinary records.
- Images, videos or voice recordings of students or staff without explicit consent for that purpose.
The rationale for this strict prohibition stems from the operational model of most publicly available AI tools. Many "free" platforms use the data entered into their systems to further train their models. This means that any information provided in a prompt is no longer private and may be incorporated into the AI's knowledge base, potentially becoming accessible to other users. If a teacher, for example, pastes a student's essay containing their name and specific feedback needs into a public AI tool for revision ideas, that action constitutes an unauthorized disclosure of protected student information — a potential violation of federal laws like FERPA. Therefore, the convenience of these tools does not outweigh the legal and ethical risks they pose to our students and staff.
Vetting and Approved Tools
To provide staff with safe and effective AI capabilities, the district is committed to a rigorous vetting process for all AI-powered educational technology. Staff are only permitted to use AI tools for professional tasks or with students if those tools have been formally reviewed and approved by the district's Technology and Curriculum departments. This vetting process ensures that any tool used in our district meets stringent standards for:
- Data Privacy and Security: The vendor must have a signed data privacy agreement with the district that complies with all relevant state and federal laws, including FERPA, COPPA and CIPA. The agreement must clarify how district data is stored, used and protected and must prevent its use for training public AI models.
- Pedagogical Soundness: The tool must align with the district's instructional goals and support, rather than undermine, student learning and critical thinking.
- Ethical Standards: The tool must be evaluated for potential bias and its impact on equity.
A list of all district-vetted and approved AI tools will be maintained and made accessible to all staff.
Google Gemini and Google NotebookLM include built-in data protections that provide the most secure environment for school-related AI use.
Professional Responsibility and Modeling
As leaders in the classroom and school community, staff are expected to model the highest standards of digital citizenship and ethical AI use. This includes:
- Critical Evaluation: Staff must critically evaluate all AI-generated content for accuracy, bias and appropriateness before using it in any professional capacity, such as in communications with parents or as instructional material.
- Transparency: When AI has been used to significantly assist in the creation of materials intended for public or parent communication, staff should disclose its use to maintain transparency and trust.
- Ethical Conduct: Staff must not use AI to create content that is misleading, harmful, discriminatory or that violates copyright or intellectual property laws.
Enhancing Professional Productivity
The district encourages staff to leverage approved AI tools to enhance their professional productivity and streamline administrative tasks. The primary goal of this practice is to reduce the time spent on routine work, thereby increasing the time available for high-impact activities such as direct student instruction, relationship building and personalized support. Approved AI tools can be effectively used for:
- Generating initial drafts of parent communications, newsletters or announcements.
- Brainstorming ideas for lesson plans, projects and learning activities.
- Creating differentiated instructional materials tailored to various student learning needs.
- Developing rubrics, practice questions and other assessment materials.
AI in Grading and Feedback
AI tools can be valuable assistants in the grading process, but they cannot replace the professional judgment of an educator.
- Appropriate Use: Teachers may use AI for grading tasks or assessments that are objective in nature, such as multiple-choice questions or fill-in-the-blank exercises, where accuracy is easily verifiable. Platforms such as Canvas and Formative include built-in AI features for this purpose. AI can also be used to provide initial feedback or check grammatical conventions as part of the learning process. Because of its data protections, Google Gemini should be the only AI tool used to provide feedback on student work.
- Inappropriate Use: For assignments that require nuanced understanding, subjective evaluation of an argument or assessment of creativity, AI should not be the sole grader. The final evaluation and grade must be determined by the teacher.
