
How the new Trump administration will handle the emerging AI industry

Welcome to AI Decoded, Fast Company’s weekly newsletter covering the most important news in the world of AI. You can sign up to receive this newsletter here.

How the Trump administration will, or won’t, oversee AI

A second Trump administration will likely reduce the government’s role in overseeing the growing AI industry. Past statements and events offer clues as to what that oversight, or lack of it, might look like.

When he was last in the White House, Trump issued a few executive orders on AI. The first came in 2019 with the American AI Initiative, a vague but ambitious document that called on government agencies to study and invest in AI, create programs to retrain the nation’s workforce for AI-era jobs, establish safety standards, and collaborate with the international community on the development of safe AI. A second executive order, issued the following year, urged the government to build public trust in AI by taking a “principled” approach to its own use of the technology.

But all of that came before ChatGPT’s public debut, which happened on Joe Biden’s watch. The release of OpenAI’s marquee chatbot prompted Biden’s October 2023 executive order, which laid out a safety framework for AI developers and gave the National Institute of Standards and Technology a major role in setting risk-mitigation standards. Biden’s order also pushed AI companies to share the results of safety tests with the government, invoking the Defense Production Act.

Trump, for his part, called the order “dangerous” and vowed to reverse it on day one of his new term. Vice President-elect JD Vance, who worked as a VC before entering politics and may lead AI policy in the new Trump administration, has said that AI regulation would only cement the dominance of a few large players in the AI market, shutting out startups. (xAI owner Elon Musk, a prominent Trump backer and potential cabinet member, has made a similar argument.)

Although the Trump camp has denied any involvement with or endorsement of Project 2025, the controversial policy blueprint is worth mentioning here. The 900-page document argues for protecting the ability of US tech companies to develop AI that can compete with China’s, and advises barring US tech companies from providing technology that could advance China’s goals. Project 2025, however, does not address the dangers of AI, including copyright violations in training data, AI models’ misalignment with human values, job losses, misinformation (deepfakes), or the massive energy consumption of data centers.

None of this bodes well for the chances that Congress, which has so far failed to pass any binding regulation aimed at reducing the risks of large-scale AI models, will do so anytime soon. The Department of Justice and the Federal Trade Commission (FTC) will also be less likely to sue to stop monopolies in the AI space, or to take steps to protect the rights of publishers who have seen their content scraped to train AI models. It’s unclear whether the FTC’s ongoing inquiries into OpenAI’s training-data collection practices and its financial arrangement with Microsoft will continue into the Trump administration. But it’s almost certain that FTC chairwoman Lina Khan, who brought those actions, will be replaced.

Perhaps the scariest question about the new Trump administration is how it will handle Taiwan, which currently produces about 90% of the advanced chips used by the tech industry, including the GPUs that train and run AI models. Trump has said that Taiwan “stole” America’s semiconductor business, and that the island should pay the US more for its defense. The risk, of course, is that China moves to take the island, which could mean a $10 trillion hit to the global economy and the loss of US control over its AI future.

AI companies are getting into the defense business

Big AI labs are starting to claim a share of the nearly $1 trillion the US spends on defense each year.

The US Africa Command (AFRICOM) says it intends to purchase OpenAI services through Microsoft’s Azure cloud, which the command already uses. In a procurement document obtained last week by The Intercept’s Sam Biddle, AFRICOM says it wants to use OpenAI models for “search, natural language processing, and integrated data processing analytics.” Biddle points out that the Pentagon began using OpenAI services last year, but that the AFRICOM contract marks the first time the technology will be used in support of direct combat operations.

OpenAI’s usage policy once prohibited any military use of its technology, but the company softened that wording earlier this year to permit uses that are removed from direct killing or destruction.

OpenAI isn’t the only one getting in on the action. Meta said in a blog post this week that it will make its Llama models available for use by the US government, including defense and intelligence agencies. Meta will partner with a number of big-name integrators (Booz Allen, Accenture), cloud and IT providers (Microsoft, Oracle), data partners (Scale AI), and defense contractors (such as Anduril and Lockheed Martin) to deliver Llama models to government institutions.

Meta says that training-data company Scale AI is fine-tuning Llama models to support national security agencies in “planning operations and identifying adversary threats,” and adds that Lockheed Martin is using Llama models to “generate code, analyze data, and improve business processes.”

The Biden administration said in its AI executive order last year that all federal agencies should explore ways to use AI to improve their efficiency and effectiveness, including using AI “to protect the country and protect critical infrastructure.” The Pentagon and intelligence agencies have been using AI for years but, with the exception of the Army, have been slow to put new AI models into production.

Physical Intelligence raises $400 million to build general-purpose robot brains

Meta’s Yann LeCun has long said that, as smart as today’s generative AI models are, they cannot develop a complex understanding of the world the way children can. A child can learn to clear the table and put the dishes in the dishwasher, he notes, while a robot struggles with that task unless it has been specifically trained on it. Companies like Covariant, Figure, and Tesla are working on robots with more general skills.

A new entrant into that space emerged this week, when a startup called Physical Intelligence announced it had raised $400 million from high-profile VCs including Thrive Capital and Lux Capital, as well as from Jeff Bezos. The company has published a research paper explaining how its AI system, π0 (“pi-zero”), is trained to handle common tasks such as folding laundry, wiping down a table, and flattening boxes. Videos of the robots have received wide attention on X. Unlike Figure and Tesla, Physical Intelligence isn’t building a robot of its own: its AI system is designed to provide the brains for any robot.

And the company says it’s just getting started. It compares its current system to OpenAI’s GPT-1, which was promising but very limited, and which paved the way for far more capable systems.

More AI coverage from Fast Company: 

  • What Donald Trump’s victory means for Silicon Valley
  • I’m the CEO of an AI company, and this is the BS behind AI
  • The case for OpenAI being a B Corp
  • Ideogram is a great AI tool for making quick—and free—visualizations

Looking for exclusive reporting and trend analysis on technology, business innovation, the future of work, and design? Sign up for Fast Company Premium.



