AI is the new factor in US election misinformation, and there are still 101 days to go
There are 101 days until the US presidential election, and, as in previous cycles, misinformation is once again a problem. This year, though, AI chatbots are also entering the equation.
A recent example: After President Joe Biden dropped out of the race on Sunday, screenshots circulating on social media claimed that nine states, including Minnesota, had "locked and loaded" their ballots ahead of the Nov. 5 election.
That's not true, and Minnesota Secretary of State Steve Simon's office said the spread of the false claim can be traced back to Grok, according to a report by the Minneapolis Star Tribune. The generative AI chatbot was developed by xAI and is available only to X premium users; both companies are owned by Elon Musk.
While a contentious election year understandably means there is more focus on misinformation right now, these risks persist, Shannon Raj Singh, principal and founder of Athena Tech & Atrocities Advisory, tells Fast Company. Another concern is the cumulative effect of lower-level misinformation and disinformation (content that doesn't cross the threshold of violating various platforms' content policies) and the effect of people repeatedly encountering false or misleading narratives.
"Social media and other digital public spaces continue to be part of political debate and discourse," said Singh, who formerly served as a human rights adviser at Twitter. "They're important to our discourse, so monitoring those risks is important."
One reason for optimism, she says, is that because the US presidential election falls late in a year of elections around the world, election-integrity groups have had more time to learn from the misinformation shared on social media about those other contests. That said, the power of newer techniques, such as deepfake audio, is among the things that keep her up at night, she says, pointing to the example of a deepfake in Moldova targeting the country's pro-Western president, Maia Sandu.
Singh warns that companies must remain vigilant even after the election because of the risk of post-election violence. Moreover, not all political candidates or leaders face equal risks from AI-generated falsehoods, she says, adding that women leaders and people of color face greater risks.
Musk, for his part, has touted the power of the crowd, relying on a system called Community Notes that lets X users write fact-checking labels and vote on whether they are helpful. He has removed many content moderation rules since buying the social media platform in 2022.
Meanwhile, Meta has made changes that CEO Mark Zuckerberg hopes will mean Facebook and Instagram "play less of a role in this election than before," as he told Bloomberg in a recent interview.
Still, mass layoffs across the tech industry, including on teams once tasked with fighting disinformation, are a major concern this election cycle, Singh notes. But she says there is now more awareness of the problem. "Focusing on corporate responsibility and learning from the past will be very important."