The Surveillance Future
Connor Hsuan
Professor Horgan
HST 401
7 March 2026
I pledge my honor that I have abided by the Stevens Honor System.
The Surveillance Future
When it comes to my own AI usage, I often use large language models, such as ChatGPT or Claude, for general reference. Most recently, though, I have been researching and training object detection and facial recognition software for my senior design project. While my team encountered several issues along the way, we are making steady progress with our machine learning model. Through my research, however, I began learning how this technology is being used at larger scales, such as its integration with existing surveillance systems. While I understand some of its potential benefits, such as crime prevention or suspect identification, the technology also raises serious concerns. As the AI industry continues to grow, many people have raised alarms about privacy and government overreach, not to mention the current shortcomings of the technology itself.
With how often AI is used as a buzzword today, it is important to establish how the technology actually works so that it can be used effectively. Face detection begins with an algorithm scanning an image or video to isolate a human face, using techniques such as Histograms of Oriented Gradients (HOG) or Convolutional Neural Networks (CNNs). From there, various methods extract key facial features and convert them into a mathematical representation known as a feature vector (GeeksforGeeks). This alone raises a major privacy issue. While most companies use this kind of data for biometric identification, such as unlocking a phone with a face scan, the data becomes dangerous if it falls into the wrong hands, which is entirely possible given the current prevalence of data breaches. Going forward, proper ethical guidelines need to be put in place to regulate the deployment of this technology.
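The pipeline described above can be sketched in code. The following is a minimal toy illustration, not a real face detector: it computes a HOG-style orientation histogram for a small grayscale patch and compares two resulting feature vectors by Euclidean distance. A production system (such as the Dalal-Triggs HOG detector or a CNN embedding model) would tile the image into cells, normalize over blocks, and run a trained classifier; the function and variable names here are invented for illustration.

```python
import numpy as np

def hog_feature(patch, bins=9):
    """Toy Histogram of Oriented Gradients for one grayscale patch.

    Real detectors tile the image into cells and normalize over blocks;
    this sketch only shows the core idea:
    per-pixel gradients -> orientation histogram -> feature vector.
    """
    patch = patch.astype(float)
    gy, gx = np.gradient(patch)                 # gradients along rows, columns
    magnitude = np.hypot(gx, gy)                # gradient strength per pixel
    # Orientations folded into [0, 180) degrees, per HOG convention.
    angle = np.degrees(np.arctan2(gy, gx)) % 180.0
    hist, _ = np.histogram(angle, bins=bins, range=(0.0, 180.0),
                           weights=magnitude)  # votes weighted by strength
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist    # unit-length feature vector

def match_distance(vec_a, vec_b):
    """Euclidean distance between feature vectors; smaller = more similar."""
    return float(np.linalg.norm(vec_a - vec_b))

# A patch whose intensity ramps vertically has purely vertical gradients...
a = np.outer(np.arange(8), np.ones(8))
b = a.copy()
print(match_distance(hog_feature(a), hog_feature(b)))      # 0.0 (identical)
# ...while its transpose ramps horizontally, so the histogram differs.
c = a.T
print(match_distance(hog_feature(a), hog_feature(c)) > 0)  # True
```

The same compare-two-vectors step is what biometric matching boils down to, which is why a leaked feature vector is itself sensitive data.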
While large-scale surveillance infrastructure has existed for a while in countries such as China, AI has only made surveillance easier. According to CNN, the Chinese government has been using AI tools to “automate censorship, enhance surveillance and pre‑emptively suppress dissent” (Yeung). The Chinese government has invested billions of dollars into AI-related businesses, and the technology is now seeing use in its criminal justice system. China’s current network of cameras oversees its population of 1.4 billion, with roughly three cameras for every seven people. AI is also being used in courts, with a Shanghai AI system used to “recommend whether judges and prosecutors should arrest or grant suspended sentences to criminal suspects and defendants”. AI is even being implemented in prisons, with “smart prisons” able to track a person’s facial expressions to monitor their mood (Yeung). The use of AI in China shows a near-complete integration into the policing and monitoring of its people. With the government continuing to invest and Chinese AI companies growing even further, there is no sign this trend will slow.
While the United States is not nearly as far along as China in AI integration, that does not mean AI is not being used for surveillance and verification. Most recently, AI has been used to verify the ages of users of certain online services. According to CNBC, roughly half of all U.S. states have enacted, or are trying to enact, age verification requirements for gaming services and social media apps. Oftentimes, the implementation of age verification is handled by third-party verification vendors. One of these vendors, Socure, claims not to sell verification data and says it does not keep data used for age estimation. However, for scans that require an ID, certain adult verification data can be stored for up to three years (Booth). Many issues arise when companies keep this kind of information. If such a company were to suffer a data breach, a user’s biometric data could be exposed for anyone to use without their consent or knowledge.
The use of AI shows no sign of stopping anytime soon, especially in the security field. While this new technology offers real benefits, it also carries risks that need to be addressed before it is deployed at such wide scales. The future of security is likely to involve AI in some capacity, so it is important that we decide in the present what direction it will take.
Works Cited
Booth, Barbara. “Social Media Child Safety Laws Could Turn the Internet Into an AI Surveillance System.” CNBC, 8 Mar. 2026, https://www.cnbc.com/2026/03/08/social-media-child-safety-internet-ai-surveillance.html.
GeeksforGeeks. “How Do Facial Recognition Systems Work?” GeeksforGeeks, 23 July 2025, https://www.geeksforgeeks.org/how-do-facial-recognition-systems-work/.
Yeung, Jessie. “China’s Censorship and Surveillance Were Already Intense. AI Is Turbocharging Those Systems.” CNN, 4 Dec. 2025, https://www.cnn.com/2025/12/04/china/china-ai-censorship-surveillance-report-intl-hnk.