How do AI security products being sold to schools really work?
Even without a deep understanding of technology, school administrators can ask some basic questions to evaluate how AI models make predictions.
Part 3 of an exclusive series for subscribers on the questions school officials should be asking about security products, policies, and procedures. If you missed them: Part 1: Questions about 'safe rooms' and Part 2: What's the 'Use of Force Continuum' for armed teachers?
AI and security are two of the hottest topics in education. Imagine if you asked a Secret Service agent or a Google executive to run a large school district. Without formal training and experience working in education, how could these security and tech experts know how to run a school system?
The inverse of this situation is happening at schools across the country. School administrators are being asked to be expert evaluators of AI, security, and AI+security.
School security has gone from a niche market to a multi-billion-dollar industry because administrators are under intense pressure to make campuses safer as school shootings have increased 10x in the last decade.
Without expert knowledge, school administrators often rely on the claims made by security tech vendors. For example, the Federal Trade Commission is currently investigating the marketing practices of a vendor that has multi-million dollar contracts with school systems.
Here are a few simple questions that I would ask a security vendor about their AI software if I were a school administrator:
How was the AI model trained?
What’s the probability threshold and error rate?
Can I try using it?
Can I change something during the demo?
Can I study a copy of the raw data from the demo?
Are the military and government using it?
An AI system that predicts sales might work well at 85% accuracy because little harm is done by sending a coupon to the 15% of customers who aren’t interested. What if software making life-safety decisions (e.g., identifying a weapon) is wrong 15% of the time?
Before the questions, here are some basics of AI
When a vendor is selling AI technology that powers a smart metal detector, classifies objects in images, identifies the sound of gunshots, or recognizes faces, it’s important to understand what the AI model is doing.
AI models are programs that are trained to detect specific patterns, and then generate an action or recommendation based on recognizing that pattern.
For an AI metal detector, the hardware that scans each student generates new data, which an algorithm compares against the patterns learned from training data in which weapons were present. The software then makes a series of ‘yes’ or ‘no’ decisions.
A simple example is predictive software that has two classification options: “Cat” or “Not Cat”. There are four possible outcomes:
The software predicts “cat” and the image really is a cat (correct)
The software predicts “not cat” and the image is not a cat (correct)
The software predicts “cat” but the image is not a cat (incorrect)
The software predicts “not cat” but the image really is a cat (incorrect)
In two of the four possible outcomes, the AI software correctly predicted “cat” or “not cat”. In the other two, the software was wrong.
When each image is analyzed, a predictive algorithm doesn’t know the right answer; its output is a probability, not a certainty. It’s up to the creator of an AI model to decide what the probability threshold will be, based on the model’s performance and error rate.
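For readers who want to see the mechanics, here is a minimal sketch in Python, using made-up probabilities and labels, of how a “Cat” / “Not Cat” model’s probability gets turned into a decision by a threshold and how the four possible outcomes are counted:

```python
# A minimal sketch of how a "Cat" / "Not Cat" classifier turns probabilities
# into decisions. The scores and labels below are invented for illustration.

THRESHOLD = 0.90  # the cutoff chosen by the model's creator

# (model's probability that the image is a cat, what the image actually is)
examples = [
    (0.97, "cat"),      # predicted "cat", really a cat      -> true positive
    (0.62, "cat"),      # predicted "not cat", really a cat  -> false negative
    (0.95, "not cat"),  # predicted "cat", not a cat         -> false positive
    (0.10, "not cat"),  # predicted "not cat", not a cat     -> true negative
]

counts = {"true positive": 0, "false positive": 0,
          "true negative": 0, "false negative": 0}

for probability, truth in examples:
    predicted = "cat" if probability >= THRESHOLD else "not cat"
    if predicted == "cat" and truth == "cat":
        counts["true positive"] += 1
    elif predicted == "cat" and truth == "not cat":
        counts["false positive"] += 1
    elif predicted == "not cat" and truth == "not cat":
        counts["true negative"] += 1
    else:
        counts["false negative"] += 1

errors = counts["false positive"] + counts["false negative"]
print(counts)
print(f"Error rate: {errors / len(examples):.0%}")
```

Changing THRESHOLD changes which predictions count as “cat”, and that is exactly the decision a vendor makes on a school’s behalf.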
This is not a hypothetical scenario. Dr. Fei-Fei Li, a professor of computer science at Stanford and co-director of its Human-Centered A.I. Institute, created a “computer vision” AI model to accurately identify images of cats. It took more than 1 billion images and a team of more than 10,000 people around the world working for three years to label the images.
The implication of this for campus security is that school administrators need to know the probability and accuracy of the software for all four outcomes. In this video posted online, the probability assigned by a predictive algorithm is shown as a percentage above the object being classified.
In this demo posted on LinkedIn (note: public video and I have no relationship with this company), when the weapon is held still against a contrasting background, the predictive algorithm assigns a 99.7% probability of classification as a weapon. When part of the weapon is obscured, the probability drops to 87%.
This is important for school officials because if a system’s threshold for sending emergency alerts is set at 90%, you need to understand what is being omitted. If the threshold for sending an alert were 99%, note how many detections in this demo would fall below it.
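To make the threshold question concrete, here is a small illustrative sketch. The 99.7% and 87% figures come from the demo described above; the other confidence scores are hypothetical.

```python
# Illustration only: how the alert threshold changes what a school is told.
# 0.997 and 0.87 are from the demo above; 0.93 and 0.91 are hypothetical.
detections = [0.997, 0.87, 0.93, 0.91]  # confidence that a weapon is present

for threshold in (0.90, 0.99):
    alerted = [score for score in detections if score >= threshold]
    suppressed = [score for score in detections if score < threshold]
    print(f"Threshold {threshold:.0%}: {len(alerted)} alert(s) sent, "
          f"{len(suppressed)} detection(s) never reach staff {suppressed}")
```

At a 90% threshold, the partially obscured weapon (87%) never generates an alert; at 99%, only the clearest detection does.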
These are high-stakes decisions for school officials because a $5 million/year contract for AI security software costs the same as hiring 50 social workers and teachers.
Question 1: How was the AI model trained?
Dr. Fei-Fei Li’s computer vision project at Stanford needed more than 1 billion images to accurately predict cats. It’s important for school officials to ask security tech vendors about their training data because weapons can appear in thousands of different variations. Questions to ask include:
What’s the training data and how closely does it match real-world requirements?
What’s the error rate on training data versus testing data? (more on error rates in the next section)
How well does the model perform on real data versus training data? (beware of overfitting, where a predictive AI model performs perfectly on training data but doesn’t work in the real world)
What’s the error rate for real world data? (cats labeled not cat and dogs labeled cat)
If a company does not want to disclose the error rate, this could be a red flag that the error rate is higher than customers would be comfortable with (IPVM reported earlier this month that a large AI security vendor’s real world error rate is ~20%).
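To make the training-versus-real-world comparison above concrete, here is a tiny hypothetical sketch; the 1% training figure is invented, and the 20% figure simply echoes the IPVM report mentioned.

```python
# Hypothetical numbers: compare the error rate a vendor quotes on its own
# training data with the error rate measured on new, real-world data.
training_error = 0.01    # 1% wrong on the images the model was trained on (invented)
real_world_error = 0.20  # ~20% wrong on real-world data (echoes the IPVM figure above)

print(f"Training error:   {training_error:.0%}")
print(f"Real-world error: {real_world_error:.0%}")

# A large gap is the classic sign of overfitting: the model has memorized
# its training data but does not generalize to a real school hallway.
if real_world_error > 3 * training_error:
    print("Red flag: ask the vendor why the gap is so large.")
```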
Question 2: What’s the probability threshold and error rate?
A probability threshold in AI software refers to a predefined cutoff value for accepting or rejecting a prediction of “Cat” or “Not Cat”. It acts as a decision boundary, allowing the software to filter out less reliable predictions and prioritize those with higher confidence scores.
Why is this content only available to premium members? It takes me a lot of time to update the school shooting database every day and write 2-3 articles each week. Each paid subscription helps support my work and I’m grateful for everyone who has upgraded.