On Thursday, Commerce Secretary Gina Raimondo said she is “very worried” about artificial intelligence (AI) being used to disrupt the 2024 election. The Commerce Department is convening the federal government, private sector actors, and academics to develop a new regulatory framework governing AI’s influence on American elections.
However, beneath the surface of these efforts lies a complex debate. Raimondo’s alarm over AI’s capacity to mislead and manipulate is legitimate, given recent incidents such as an AI-generated robocall mimicking Joe Biden to spread misinformation. Yet the administration’s sudden zeal to address these issues raises questions about timing and intent, particularly in our highly polarized political climate.
The U.S. government is working "extensively" to keep artificial intelligence from disrupting the 2024 election, according to Commerce Secretary Gina Raimondo, who says she is "very worried" about the technology being used in a nefarious manner. https://t.co/ymGTgu1wbR
— NEWSMAX (@NEWSMAX) February 10, 2024
Critics argue that the Biden administration’s focus on AI’s potential dangers, while arguably necessary, also serves a dual purpose: positioning itself as a protector of democracy and, by extension, potentially leveraging that narrative to its advantage. As Raimondo suggests, the emphasis on collaboration with AI companies reflects a belief that the private sector will align with government efforts. Still, it skirts the broader question of regulatory efficacy and the balancing act between innovation and control.
The Federal Communications Commission’s (FCC) recent move to outlaw AI-generated robocalls further highlights the government’s aggressive regulatory stance. The ruling declares AI-generated voices in robocalls illegal, exposing entities that use them to penalties.
Globally, the challenge of AI in elections is not unique to the United States. Countries like Pakistan, Indonesia, and India are grappling with the implications of AI-generated content, from deepfake videos to AI-driven campaign strategies. This global perspective underscores the universal struggle to safeguard electoral integrity in the digital age, suggesting that the U.S. is part of a broader international conversation.
Yet, for all the genuine concern, there’s an undercurrent of skepticism about the administration’s motivations. The urgency to act against AI threats, while aligned with democratic principles, is also conveniently timed as the election cycle heats up. This alignment raises the question: Are these efforts purely in the interest of protecting the electoral process, or do they also serve to craft a narrative favorable to the current administration?