Donald Trump is preparing to assume the presidency again, and his agenda will include overseeing the development of artificial intelligence, a potentially transformative technology. The president-elect has pledged to cut excessive regulation and has enlisted tech billionaire Elon Musk, a fellow critic of government oversight, to help lead the effort. Specifically, the Republican Party's election platform pledged to repeal a sweeping executive order signed by President Joe Biden, which set out measures to mitigate AI's national security risks and curb discrimination by AI systems, among other objectives. The Republican document characterized the executive order as containing "radical leftist ideologies" that stifle progress.
Sandra Wachter, a professor of technology and regulation at the Oxford Internet Institute at the University of Oxford, is watching developments closely. AI poses substantial risks that demand immediate action in the form of robust regulation, she told CNN. Here are some of the dangers of unregulated AI.
For years, AI systems have shown a capacity to reproduce society's biases – around race and gender, for example – because they are trained on data that reflects past human behavior, much of it shaped by those same prejudices. When AI is used to decide who gets hired or approved for a mortgage, the outcome is frequently discriminatory. "Bias is inherent in these technologies because they analyze historical data to predict the future… they learn who has been hired previously, who has been incarcerated," Wachter explained. "Therefore, these decisions are very often, almost always, biased." Without sufficient safeguards, she added, "those problematic past decisions will be perpetuated."
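To make that mechanism concrete, here is a minimal sketch – not drawn from the article, using entirely synthetic data and hypothetical features – of how a model trained on historically biased hiring decisions reproduces that bias when scoring new candidates:

```python
# Minimal, illustrative sketch (not from the article): a toy model trained
# on historically biased hiring records reproduces that bias. All data here
# is synthetic and the features are hypothetical.
from sklearn.linear_model import LogisticRegression

# Past hiring decisions: [years_of_experience, demographic_group] -> hired?
# Group 1 candidates were rarely hired, even with more experience.
X = [[5, 0], [6, 0], [2, 0], [7, 0], [5, 1], [6, 1], [7, 1], [8, 1]]
y = [1, 1, 0, 1, 0, 0, 0, 1]

model = LogisticRegression().fit(X, y)

# Two new candidates with identical experience, differing only in group:
print(model.predict_proba([[6, 0]])[0][1])  # group 0: high predicted chance
print(model.predict_proba([[6, 1]])[0][1])  # group 1: noticeably lower
```

The model is never told to discriminate; it simply learns that group membership predicted hiring in the past, which is exactly the dynamic Wachter describes.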
The use of AI in predictive policing is one example, according to Andrew Strait, an associate director at the Ada Lovelace Institute, a London-based non-profit that researches AI safety and ethics. Some US police forces, he noted, use AI-powered software trained on historical crime data to predict where future crimes are likely to occur. Because that data often reflects the over-policing of particular communities, Strait said, the predictions lead police to concentrate on those same communities and report more crimes there, while other areas with potentially similar or higher crime levels are policed less.
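The feedback loop Strait describes can be illustrated with a toy simulation – again a sketch with made-up numbers, not anything from the article: two neighborhoods with identical true crime rates, one of which starts out over-policed.

```python
# Toy simulation of a predictive-policing feedback loop (illustrative only).
# Both neighborhoods have the same true crime rate; neighborhood A simply
# starts with more recorded crimes because it was patrolled more heavily.
recorded = {"A": 100, "B": 50}  # historical crime reports

for year in range(5):
    total = sum(recorded.values())
    # Patrols are allocated in proportion to past reports...
    patrol_share = {n: r / total for n, r in recorded.items()}
    # ...and recorded crime tracks patrol presence rather than true crime,
    # so each year's new reports mirror the patrol allocation.
    for n in recorded:
        recorded[n] += 100 * patrol_share[n]

print(recorded)  # A still shows twice B's crime: the gap never closes
```

Even though the underlying crime rates are equal, the recorded data keeps "confirming" the original disparity year after year.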
AI can generate deceptive images, audio and video that make it appear a person said or did something they never did. That, in turn, could be used to sway elections or to create fake pornographic material to harass people, among other potential abuses. AI-generated images circulated widely on social media ahead of the recent US presidential election, including fabricated images of Kamala Harris, some of them re-shared by Musk himself. In May, a Department of Homeland Security bulletin distributed to state and local officials, and obtained by CNN, warned that AI would likely give foreign actors and domestic extremists "enhanced opportunities for interference" during the election. And in January, more than 20,000 New Hampshire residents received a robocall – an automated phone message – that used AI to mimic Biden's voice and urge them to abstain from the presidential primary. Steve Kramer, who worked for Rep. Dean Phillips' long-shot Democratic primary campaign against Biden, admitted he was behind the call; Phillips' campaign denied any involvement. Over the past year, victims of AI-generated, non-consensual pornography have ranged from prominent figures like Taylor Swift and Rep. Alexandria Ocasio-Cortez to high school girls.
AI researchers and industry leaders have highlighted even greater risks. They range from ChatGPT providing easy access to comprehensive instructions on how to commit crimes, such as exporting weapons to sanctioned countries, to AI breaking free of human control. "AI can be used to create highly sophisticated cyberattacks, automate hacking, and even create autonomous weapon systems capable of global harm," Manoj Chaudhary, chief technology officer of Jitterbit, a US software company, told CNN. In March, a report commissioned by the US State Department warned of "catastrophic" national security risks posed by rapidly evolving AI and called for "emergency" regulatory measures alongside other actions. The most advanced AI systems could, in the worst-case scenario, "pose an extinction-level threat to humankind," the report said. A related document noted that AI systems could be used to execute "high-impact cyberattacks capable of crippling essential infrastructure," among a multitude of other risks.
In addition to Biden's executive order, his administration last year secured pledges from 15 leading technology companies to improve the safety of their AI systems, though all of the commitments are voluntary. And Democrat-led states such as Colorado and New York have passed their own AI laws. In New York, for example, any company using AI to help recruit workers must enlist an independent auditor to check that the system is free of bias. A "patchwork of (US AI regulation) is emerging, but it's very fragmented and insufficient," said Strait of the Ada Lovelace Institute. It's "too early to say" whether the incoming Trump administration will expand or roll back those rules, he noted. But he worries that a repeal of Biden's executive order would spell the end of the US government's AI Safety Institute. The order created that "incredibly crucial organization," Strait told CNN, tasking it with scrutinizing risks from advanced AI models before they are released to the public.
It's possible that Musk will push for tighter AI regulation, as he has in the past. He is set to play a prominent role in the next administration as co-lead of a new "Department of Government Efficiency," or DOGE. And he has repeatedly said he believes AI poses an existential threat to humanity, even as one of his own companies, xAI, develops a generative AI chatbot.

Musk was "a strong supporter," Strait noted, of a California bill that aimed to prevent some of AI's most catastrophic consequences, such as systems spiraling out of human control. California's Democratic Governor Gavin Newsom vetoed the bill in September, citing the threat it posed to innovation. Musk is "very worried about the catastrophic risk of AI. It's possible that this would be the subject of a future Trump executive order," Strait said.

But Musk is not the only voice in Trump's inner circle. JD Vance, the incoming vice president, said in July that he was concerned about "preemptive overregulation efforts" in AI, arguing they would "entrench existing tech giants and hinder new entrants from generating the innovation that will fuel the next generation of American economic growth." Musk's Tesla (TSLA) could be considered one of those established tech giants. Last year, Musk impressed investors with Tesla's investments in AI, and in its most recent earnings report the company said it remained focused on "making significant investments in AI projects," among other priorities.