In an age where privacy and free expression are increasingly scrutinized, the U.S. government seems to be drawing ever closer to George Orwell’s vision of a society stifled by an omnipotent state. Recent reports reveal that Customs and Border Protection (CBP), an agency under the Department of Homeland Security (DHS), has spent millions on artificial intelligence (AI) software developed by the tech firm Fivecast. This AI-enabled tool purportedly scans and deciphers “sentiment and emotion” in social media posts and flags them for law enforcement scrutiny.
Fivecast’s mission statement suggests its technology is “used and trusted by leading defense, national security, law enforcement, corporate security, and financial intelligence organizations around the world.” The company’s software boasts impressive capabilities, including object recognition in images and videos and detection of “risk terms and phrases” across multiple languages. Furthermore, the AI tool can analyze social media activity over time, charting emotions such as “anger,” “disgust,” “fear” and “sadness.”
At first glance, praising such advancements as cutting-edge tools for keeping society safe might be tempting. CBP claims its technology is aimed at “analyzing open source information related to inbound and outbound travelers who the agency believes may threaten public safety, national security, or lawful trade and travel.” Yet, the devil is in the details.
While our first concern could be the potential inaccuracy of AI in assessing human emotions, a deeper worry is how this technology could infringe on our constitutional rights. Patrick Toomey, deputy director of the ACLU’s National Security Project, warns that CBP should not be “relying on junk science to scrutinize people’s social media posts” and identify purported “risks.”
The deployment of such AI-enabled tools paves the way for governmental censorship, not merely of what you say but how you say it. If certain emotions become red flags for scrutiny, freedom of thought becomes a relic of the past. These Orwellian technologies could also open the door to courtroom arguments where the alleged emotion behind a statement could be used to impugn an individual’s motives or intent, creating a dystopian precedent.
The risk of misuse isn’t theoretical. Yahoo News reported in 2021 that CBP’s Counter Network Division gathered data on journalists. A supervisor in that division admitted, “We are pushing the limits, and so there is no norm, there are no guidelines, we are the ones making the guidelines.”
Ironically, the agency says, “The Department of Homeland Security is committed to protecting individuals’ privacy, civil rights, and civil liberties.” But when the government itself interprets and flags individual emotions for possible risks, can it really claim to protect civil liberties and free expression?
It’s crucial to balance technological advancement with ethical considerations, particularly when liberties we hold dear are at stake. Overreliance on AI tools like the one from Fivecast could transform protective agencies into Big Brother incarnate, undermining the freedoms they claim to defend.