
Artificial intelligence (AI) is now part of students' daily lives, from homework and research assistance to interactive chat platforms and everyday advice. While many AI tools offer educational benefits, recent school incidents across the country have highlighted a serious concern. In one case, a student accessed an unmoderated AI chat room where the bot attempted to display violent material and encouraged inappropriate behavior. The situation was addressed quickly, but it revealed an important reality: not all AI platforms are built with adequate safeguards for children.
As AI technology develops rapidly, digital supervision and literacy must evolve alongside it.
Developmental research shows that adolescents are still building executive functioning skills, including impulse control, judgment, and risk assessment. [1] The prefrontal cortex, the region responsible for decision making and evaluating consequences, continues developing into early adulthood. [2]
Because of this ongoing development, children may struggle with impulse control, sound judgment, and accurate risk assessment, particularly in fast-moving digital environments.
AI chat systems are particularly complex because they simulate conversation. Research indicates that children often attribute human characteristics and authority to interactive technologies, which can increase trust in digital systems even when that trust is misplaced. [3] When an AI system communicates in a conversational tone, it may feel credible or friendly, lowering a child’s natural skepticism.
While educational AI platforms often include content filters, unregulated or anonymous AI chat environments may expose children to violent or otherwise inappropriate content, often without age verification or moderation.
Research from the American Psychological Association has shown that exposure to violent media can influence emotional responses and behavioral outcomes in youth, particularly when exposure is repeated or unmoderated. [4] Additionally, digital environments without age verification increase the likelihood of inappropriate exposure. [5]
Children do not process harmful content the same way adults do. Studies suggest that exposure to violent or inappropriate media can increase anxiety, desensitization, and confusion about acceptable behavior, especially in younger adolescents. [4]
Children are also inclined not to report uncomfortable experiences because they fear punishment or embarrassment. [5] Research on parent-child communication indicates that children are more likely to disclose online risks when they perceive their caregivers as supportive rather than punitive, making calm, open dialogue essential. [6]
Protecting students in a rapidly evolving digital landscape requires proactive systems, not reactive responses. Grounded in research on digital literacy and adolescent development, our school has implemented a structured, multi-layered approach to AI and online safety. [7]
All AI tools accessible on our campus network are reviewed for age-appropriate safeguards, content moderation systems, and data privacy compliance before approval. Unregulated or anonymous AI chat platforms are restricted on school devices and networks to reduce exposure to unsafe environments.
We utilize secure filtering systems that block harmful or inappropriate content in real time. These safeguards are continuously updated to reflect emerging technologies and digital risks.
Students are explicitly taught how artificial intelligence works, its limitations, and how to critically evaluate digital interactions. Research shows that structured digital literacy instruction significantly improves students’ ability to assess online credibility and risk. [7] Our approach moves beyond basic internet safety to include AI awareness and responsible usage.
Students are regularly reminded that they can report uncomfortable or concerning online experiences to trusted adults without fear of punishment. The goal of this policy is to make children more likely to disclose digital risk rather than hide it.
Our educators receive ongoing professional development focused on AI tools, digital trends, and online risk patterns. This ensures that staff members are informed, prepared, and equipped to intervene when concerns arise.
Digital safety extends beyond the classroom. We provide families with guidance, updates, and resources so that expectations remain consistent between school and home. When schools and parents collaborate, students experience stronger protection and clearer boundaries.
Our commitment is not simply to restrict technology but to teach students how to navigate it responsibly. By combining secure infrastructure, education, and open communication, we aim to create an environment where innovation supports learning without compromising safety.
Parental monitoring strategies are consistently associated with reduced online risk exposure. Effective approaches favor open, specific conversation over interrogation.
Instead of asking, “Did anything bad happen online?” parents might ask, “Have you come across anything online that made you uncomfortable or confused?”
Research shows that supportive monitoring, rather than overly restrictive control, leads to stronger trust and better long-term safety outcomes. [6]
The goal is not to create fear around technology. AI can support learning when used responsibly. The objective is to teach discernment: children should understand how AI works, where its limitations lie, and how to question what it tells them.
When students are equipped with knowledge and critical thinking skills, they become active participants in their own digital safety.
At Windermere Preparatory School, student well-being extends beyond the classroom walls. We are committed to implementing secure technology practices, educating students on responsible AI use, and partnering with families to ensure consistent expectations both at school and at home.
As technology continues to advance, our commitment to safeguarding students remains constant. By working together, we can provide children with the tools, awareness, and confidence they need to navigate the digital world safely and responsibly.
Interested in learning more about our school?
Get in touch with one of our Admissions Officers.
Sources:
1: Steinberg, L. (2008). A Social Neuroscience Perspective on Adolescent Risk Taking. Developmental Review.
2: National Institute of Mental Health. The Teen Brain: 7 Things to Know.
3: Reeves, B., and Nass, C. (1996). The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places.
4: American Psychological Association. (2015). Technical Report on Violent Media.
5: UNICEF. (2017). The State of the World's Children: Children in a Digital World.
6: Livingstone, S., and Helsper, E. (2008). Parental Mediation of Children's Internet Use. Journal of Broadcasting and Electronic Media.
7: OECD. (2021). 21st Century Readers: Developing Literacy Skills in a Digital World.