Ahead of the U.S. election, some analysts worried that artificial intelligence might imperil election integrity. Though AI didn't end up disrupting the vote, experts aren't writing off the risks it poses to democracy.
"I think it would be foolhardy to say: 'Well, there's been no major catastrophe yet, so we're okay here,'" Gary Marcus, a scientist and AI expert, recently told FP's Rishi Iyengar. "That'd be like saying we made a bunch of steamships, so this one's invincible, and whoops, you hit an iceberg."
In this edition of Flash Points, FP contributors consider the ways AI could endanger democratic societies and how policymakers might face down those threats.
AI’s Alarming Trend Toward Illiberalism
Left ungoverned, the technology opens pathways to undermine democracy, Ami Fields-Meyer and Janet Haven write.
What AI Will Do to Elections
Depleted tech platforms, AI-enabled misinformation, and countries going to the polls, FP's Rishi Iyengar reports. What could go wrong?
The Science of AI Is Too Important to Be Left to the Scientists
Concerted international action will require political will, Hadrien Pouget writes.
How Africa’s War on Disinformation Can Save Democracies Everywhere
African leaders can't afford to wait for Big Tech. By taking action, the continent could spare future generations from the scourge of adversarial AI, Abdullahi Alim writes.
Red Teaming Isn’t Enough
Researchers need far more data to understand AI's true risks, Gabriel Nicholas writes.