Artificial intelligence is quickly becoming part of the modern classroom. New AI-powered tools can assist with lesson planning, personalize learning experiences, and help educators manage administrative tasks more efficiently. As these technologies become more accessible, schools are beginning to explore how AI can support both teaching and learning.
However, the rapid adoption of AI has outpaced the development of clear policies and guidelines in many districts. School leaders are now faced with the challenge of balancing innovation with responsibility—ensuring that new technologies enhance education while protecting student privacy, strengthening cybersecurity, and minimizing potential liability.
How AI Is Being Used in Schools
AI tools are already appearing in many aspects of daily school operations. Teachers may use AI to generate lesson ideas, create quizzes, draft communications, or assist with grading. Students may rely on AI tools to brainstorm ideas, review concepts, or receive tutoring support outside of the classroom. In some districts, administrators are also exploring AI-driven platforms that analyze student performance data or help streamline administrative workflows.
While these technologies can improve efficiency and provide valuable learning support, their use also raises important questions about data security, responsible use, and oversight. Understanding how AI is being used across the school environment is an important first step in developing policies that encourage innovation while managing potential risks.
Student Data Privacy Concerns
One of the most significant issues surrounding AI in education involves student data privacy.
Many AI platforms require users to input text, assignments, or other information that may contain personally identifiable information (PII) or education records. If these tools store, transmit, or analyze that information, it can create potential exposure under federal requirements such as the Family Educational Rights and Privacy Act (FERPA), as well as applicable state student privacy laws and contractual obligations districts have to families and other partners.
Schools should carefully review the data policies of any AI tool before allowing it to be used in the classroom. Key questions to consider include:
Does the platform store or retain student information, and for how long?
How is the data protected (encryption in transit and at rest, access controls, etc.)?
Can the vendor use the data to train its AI models, or share it with subcontractors?
Does the district have a signed data privacy agreement/data protection addendum with the provider?
Where is the data hosted, and what breach-notification timelines apply?
Without clear policies and vendor agreements, schools may unintentionally expose student information to third parties or create compliance and notification challenges if an incident occurs.
Academic Integrity and Misuse
AI also introduces challenges related to academic integrity. Students can use AI tools to generate essays, solve math problems, or complete assignments in ways that make it difficult for educators to determine whether the work reflects the student’s own understanding.
While AI can be a valuable learning aid, schools may need to update academic honesty policies to address AI-assisted and AI-generated content. Some districts are implementing guidelines such as:
Requiring students to disclose or cite AI assistance in assignments
Limiting AI use for certain types of assessments
Incorporating AI detection tools when appropriate
Educating students on ethical and responsible AI use
The goal is not necessarily to ban AI, but to teach students how to use it appropriately and transparently.
Liability and Insurance Considerations
As AI becomes more integrated into school operations, districts should also consider how it may affect their liability exposure and insurance coverage. Potential risk scenarios could include:
Data breaches involving AI platforms that store or process student or employee information
Cybersecurity vulnerabilities introduced through third-party tools, browser extensions, or integrations
Copyright/licensing and content ownership concerns, including the use of copyrighted materials in prompts, or uncertainty around rights to AI-generated outputs
AI-driven errors or inappropriate outputs that contribute to student harm, discrimination allegations, or other complaints (for example, a tool providing inaccurate guidance, biased recommendations, or unsuitable content)
From an insurance perspective, it’s important to understand that different coverages may respond to different types of claims:
Cyber liability coverage may help with costs related to privacy breaches, security incidents, and certain technology or media-related claims—but coverage varies significantly based on policy terms, definitions, and exclusions, and on how the event occurs (including whether the incident happens at a third-party vendor).
Claims involving alleged wrongful acts by the district or its leadership (such as failure to supervise, policy decisions, discrimination allegations, or other administrative errors) may implicate educators legal liability (ELL)/school leaders errors and omissions (E&O) coverage, where purchased.
Claims alleging bodily injury or property damage may implicate general liability coverage, though allegations tied to professional services, and the related policy exclusions, can affect how coverage applies.
Because AI-related incidents can involve multiple parties (districts, staff, students, and vendors), districts should work with their insurance and risk management partners to review potential gaps and confirm which policies may respond in different scenarios.
Just as importantly, districts should evaluate vendor contracts for appropriate risk transfer. Risk management teams may want to confirm that AI vendors:
Carry appropriate insurance (such as general liability, technology E&O, and cyber, depending on the service provided)
Agree to contractual indemnification where feasible
Provide clear security requirements and breach-notification obligations in the contract
Name the district as an additional insured where applicable (commonly on general liability, and sometimes on other coverages depending on the agreement)
Developing Responsible AI Policies
As AI tools continue to evolve, school districts should consider establishing clear internal guidelines for staff and students. Effective AI policies may include:
Approved AI platforms for classroom and administrative use
Data privacy and security requirements (including what information may or may not be entered into AI tools)
Staff training on responsible AI use and confidentiality expectations
Student guidelines for academic integrity and disclosure of AI assistance
Vendor review and approval processes, including contract and security reviews
Ongoing monitoring and periodic re-evaluation as tools and regulations change
By proactively addressing these issues, districts can embrace the benefits of AI while maintaining strong protections for students and staff.
Preparing for the Future
Artificial intelligence will likely play an increasing role in education in the years ahead. Rather than avoiding the technology entirely, many experts recommend that schools focus on responsible implementation and clear governance.
By understanding the legal, contractual, and insurance implications of AI in schools, administrators can help ensure that innovation does not come at the expense of privacy, security, or compliance. As with any emerging technology, thoughtful planning, strong vendor management, and risk awareness will be key to helping schools navigate this rapidly changing landscape.
For more information, please contact one of our Insurance and Risk Management Advisors today!