
The failure of an AI security system to distinguish a bag of chips from a gun has sparked a heated debate about the role of technology in schools.
Story Overview
- An AI system misidentified a Doritos bag as a weapon, leading to a police response.
- Student Taki Allen was handcuffed and searched by police.
- The incident has raised concerns about racial bias and AI accuracy.
- Baltimore County officials are reviewing the use of AI in schools.
AI Misstep Leads to Police Response
On October 20, 2025, an AI-powered gun detection system at Kenwood High School in Baltimore County, Maryland, mistakenly identified a student’s crumpled Doritos bag as a firearm. The incident occurred around 7:00 p.m. when Taki Allen, a student, was waiting outside the school after football practice. The AI system’s error led to a heavy police response, resulting in Allen being handcuffed and searched before authorities realized the object was merely a bag of chips.
Schools have rapidly deployed AI systems in hopes of preventing shootings, but incidents like this have put the technology under scrutiny. These systems, which typically rely on computer vision to flag possible weapons in camera feeds, must balance the detection of genuine threats against the risk of false positives, over-policing, and racial bias, particularly in minority communities.
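The trade-off described above can be sketched in a few lines of Python. Everything here, including the labels, confidence scores, and thresholds, is hypothetical and illustrative; it is not drawn from the actual system used at Kenwood High School:

```python
# Illustrative sketch of how a vision-based weapon detector might gate alerts.
# All labels, scores, and thresholds below are made up for demonstration.

def should_alert(label: str, confidence: float, threshold: float) -> bool:
    """Raise an alert only for weapon detections above a confidence threshold."""
    return label == "weapon" and confidence >= threshold

# Hypothetical detections from a single camera frame:
detections = [
    ("weapon", 0.62),    # false positive: e.g., a crumpled reflective chip bag
    ("weapon", 0.97),    # true positive: an actual firearm
    ("backpack", 0.88),  # unrelated object, never alerts
]

# A permissive threshold fires on the chip bag; a stricter one does not,
# but a stricter threshold also risks missing lower-confidence real threats.
alerts_low = [d for d in detections if should_alert(*d, threshold=0.5)]
alerts_high = [d for d in detections if should_alert(*d, threshold=0.9)]
```

The point of the sketch is that the alert threshold is a policy choice, not just an engineering detail: lowering it catches more real weapons but also sends police after more chip bags, which is exactly the tension critics of these deployments describe.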
Armed officers held a student at gunpoint after an AI gun detection system mistakenly flagged a Doritos bag as a firearm.
"They made me get on my knees, put my hands behind my back, and cuff me."
— Dexerto (@Dexerto) October 23, 2025
Community Response and Concerns
The incident has sparked public outcry, with Allen and his family describing the event as traumatic. Allen expressed his fear, stating, “The first thing I was wondering was, was I about to die? Because they had a gun pointed at me.” Baltimore County officials have announced a review of the AI gun detection system’s use at the school, as the community questions the system’s accuracy and the decision to rely on such technology.
Concerns have also been raised about potential racial bias in AI detection systems, which may disproportionately affect minority students. The broader Baltimore County school community is now debating the appropriateness of police presence and AI technology in educational settings, a debate that highlights the tension between ensuring safety and protecting civil rights.
Potential Impacts on Policy and Technology
The fallout from this incident could lead to significant policy changes regarding the use of AI technology in schools. There may be a reevaluation of police response protocols to avoid similar situations in the future. Additionally, the incident could spark legal or regulatory actions against the technology vendors responsible for these AI systems. The financial implications of reviewing and possibly replacing these systems are considerable, as communities demand transparency and accountability.
Industry experts and civil rights advocates call for more rigorous testing and oversight before deploying AI in schools, emphasizing the need for improved datasets to prevent bias. This event may deter other institutions from adopting similar technologies without substantial evidence of their reliability and fairness. The AI industry faces increasing pressure to address these concerns to maintain public trust and avoid a chilling effect on technology adoption.