This election in a small Wyoming city could give us a glimpse of the future of AI in politics
Welcome to AI Decoded, Fast Company's weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.
The Wyoming mayoral candidate who wants AI to run the city may be a pioneer, not a joke
In Cheyenne, Wyoming, a mayoral candidate named Victor Miller is basing his campaign on the idea of letting an AI bot run local government. He says the bot, called VIC (Virtual Integrated Citizen), can digest mountains of data, make unbiased decisions, and propose new solutions to previously intractable resource allocation problems. Miller says he will make sure the bot's actions are "conducted legally and realistically." Experts say Miller's AI-centered candidacy is the first of its kind in the US.
It sounds crazy, until you consider the context. According to a 2023 Gallup poll, Americans' trust in government is at an all-time low. Some of that erosion of trust is due to government neglect and administrative failure (see: the Veterans Administration's woes). Perhaps AI could streamline operations and restore some of that trust.
To be sure, AI is already helping the government. Washington Post AI columnist Josh Tyrangiel reported in May that when the government urgently needed to create and distribute a COVID vaccine, it turned to the AI platform Palantir to speed the process by organizing and analyzing data. The intelligence community also relies heavily on Palantir to make sense of vast amounts of intelligence data from around the world. And the US Agency for International Development (USAID) recently said it will use OpenAI's ChatGPT Enterprise to support aid workers and local partners. Tyrangiel says the government could also use AI bots to answer public questions about taxes, health care benefits, and student loans.
Much of the public's pessimism about government, according to Gallup's survey, stems from the belief that government priorities are dictated by big money and partisan interests. Nowhere is that cynicism more justified than in our congressional districts, which are routinely redrawn by state legislatures for partisan advantage. AI could be used to draw fairer district maps that more closely reflect a state's demographic realities, based on data from the census and other sources. Neutral, AI-generated maps could quickly create competitive districts, not just gimmes for whichever candidate the dominant party picks in the primary.
However, there are all sorts of potential pitfalls in involving AI in political decisions. For example, when states sought to eliminate partisan bias by handing redistricting to independent commissions, the fight simply shifted to the political leanings of the commission members and who appointed them. In the future, the dispute may shift to which AI model is used and what training data it is given. But Miller is unlikely to be the last local politician to talk about AI on the campaign trail. As AI tools improve, candidates at the state and federal levels could make AI a bigger part of the story they tell voters.
Authors claim Anthropic used pirated books to train Claude
San Francisco-based Anthropic, a close rival of OpenAI and Google, is being sued by three authors (Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson) who say the company trained its Claude AI models on their books, along with hundreds of thousands of others, without permission. The complaint, filed in federal district court in San Francisco, could become a class action if other writers who believe their work was used to train Claude sign on.
"It is no exaggeration to say that Anthropic's model seeks to profit from strip-mining the human expression and ingenuity behind each one of those works," the authors' lawyers wrote in the complaint. They point out that Anthropic presents itself as a company that benefits society, one that expects to bring in $850 million in revenue by the end of this year. "It is inconsistent with basic human values or the public interest to download hundreds of thousands of books from a known illegal source," the lawyers wrote.
"We are aware of the case and are reviewing the complaint," an Anthropic spokesperson said in an email to Fast Company. "We cannot comment on the proceedings."
The lawsuit, which follows similar suits against AI competitors such as OpenAI and Meta, says Anthropic admitted to using a third-party dataset called The Pile, a large set of open-source training data hosted and made available online by a nonprofit organization called EleutherAI. The Pile's creators say the dataset contains 196,640 books, the entirety of Bibliotik, one of several sites the complaint calls "shadow libraries" that host "huge collections of pirated books."
The AI disinformation landscape is evolving rapidly during elections
AI deepfakery is playing a role in the 2024 election, but not in the way many feared, at least not yet. Image and video AI isn't yet good enough to cross the uncanny valley and produce truly convincing facsimiles of legitimate content. Most current AI-generated images look glossy and cartoony, including a photo of Democratic nominee Kamala Harris speaking in front of a giant hammer-and-sickle banner and a botched photo of Taylor Swift dressed as Uncle Sam endorsing Republican nominee Donald Trump. (Trump himself reposted both of these images to his Truth Social account, along with a fake clip of him dancing next to billionaire Elon Musk.)
Perhaps the most disturbing thing we've seen so far is the deepfaked Kamala Harris campaign ad shared by Elon Musk, in which the real audio was replaced to make Harris appear to call herself "unfit." No sane person would believe that Harris would call herself unfit, but what if the fake were something more subtle, like Harris appearing to announce a sudden reversal of an important policy position, such as raising the corporate income tax rate? Something like that could have a real impact on the race, especially in the final weeks of the campaign.
More AI coverage from Fast Company:
- How AI tools are helping students—and their professors—in academic research
- Big Tech’s push against California’s AI safety bill turns to Gavin Newsom
- How deepfakes are changing the creator landscape
- This app completely rejects AI. Good!
Looking for exclusive reporting and trend analysis on technology, business innovation, the future of work, and design? Sign up for Fast Company Premium.