How Anthropic’s new protocol could rapidly expand the reach of AI
Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.
Anthropic’s new MCP protocol quickly connects AI assistants with data and dev tools
We develop technical standards to simplify and standardize the common ways information moves around the internet. Email protocols (SMTP, POP3, and IMAP) allow different email servers and clients to communicate. The Bluesky protocol allows users to move their content and connections between social networks. As AI models become a central part of the information ecosystem, we’ll also need standardized ways of moving information to and from them.
On Monday, Anthropic gave the online world such a standard. The open-source Model Context Protocol (MCP) lets developers easily connect AI assistants (chatbots and agents) to data sources (such as knowledge bases or business intelligence tools) or tools (such as coding assistants and development environments). Currently, developers must build a custom connector for each new data source or tool.
“[E]ven the most sophisticated models are constrained by their isolation from data—trapped behind information silos and legacy systems,” Anthropic wrote in a blog post on Monday. MCP can be used to connect any type of AI application with any data store or tool, as long as both support the standard. During the preview period, developers can use MCP to connect an instance of Anthropic’s Claude chatbot running on their computer to files and data stored on the same machine. They can also connect the chatbot to services including Google Drive, Brave Search, and Slack via an API. The protocol will eventually allow developers to connect AI applications to remote servers that can serve an entire organization, Anthropic said.
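To get a feel for the protocol’s shape, here is a minimal sketch, written in plain Python rather than Anthropic’s official SDKs, of what an MCP-style server could look like: a small process that speaks JSON-RPC over stdin and stdout and exposes a folder of local documents to an AI client. The folder path, the line-delimited framing, and the simplified message shapes are assumptions made for illustration; the real specification defines richer capability negotiation and message handling.

```python
# Minimal sketch of an MCP-style server: a process that exposes local files
# to an AI client via JSON-RPC messages read from stdin and written to stdout.
# This is NOT Anthropic's official SDK; method names and message shapes are
# simplified for illustration.
import json
import sys
from pathlib import Path

DOCS_DIR = Path("./docs")  # hypothetical folder the assistant is allowed to read


def handle(request: dict) -> dict:
    """Route one JSON-RPC request to a handler and build the response."""
    method = request.get("method")
    if method == "resources/list":
        result = {
            "resources": [
                {"uri": f"file://{p.resolve()}", "name": p.name}
                for p in DOCS_DIR.glob("*.txt")
            ]
        }
    elif method == "resources/read":
        uri = request["params"]["uri"]
        path = Path(uri.removeprefix("file://"))
        result = {"contents": [{"uri": uri, "text": path.read_text()}]}
    else:
        return {
            "jsonrpc": "2.0",
            "id": request.get("id"),
            "error": {"code": -32601, "message": f"unknown method {method}"},
        }
    return {"jsonrpc": "2.0", "id": request.get("id"), "result": result}


if __name__ == "__main__":
    # One JSON-RPC request per line on stdin; one response per line on stdout.
    for line in sys.stdin:
        if line.strip():
            print(json.dumps(handle(json.loads(line))), flush=True)
```

Because the client side of the protocol is standardized too, any assistant that speaks MCP could talk to a server like this one without a bespoke connector.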
We are long past the days of chatbots spitting out text based solely on their training data (mainly content pulled from the public web); their performance and accuracy were limited. MCP makes it easy to equip AI applications with highly specific, reliable information. The protocol could also let developers build “agentic” AI applications—that is, apps that can move among various tools and data sources, working through the steps needed to produce a desired output.
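That agentic pattern, in which an assistant repeatedly picks a tool, calls it, and folds the result back into its next step, can be sketched roughly as follows. The `call_model` function and the `TOOLS` table are hypothetical stand-ins for illustration, not a real Anthropic or MCP API.

```python
# Rough sketch of an agentic loop over tools exposed to an assistant.
# `call_model` and `TOOLS` are hypothetical stand-ins, not a real API.
from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {
    "search_docs": lambda query: f"(results for '{query}')",      # stand-in tool
    "summarize": lambda text: f"(summary of {len(text)} chars)",  # stand-in tool
}


def call_model(history: list[str]) -> dict:
    """Stand-in for an LLM call that decides the next action."""
    # A real model would return either a tool call or a final answer.
    if len(history) < 3:
        return {"tool": "search_docs", "input": "Q3 revenue"}
    return {"final": "Draft report based on retrieved context."}


def run_agent(task: str) -> str:
    history = [task]
    while True:
        decision = call_model(history)
        if "final" in decision:          # model is done: return its answer
            return decision["final"]
        tool_output = TOOLS[decision["tool"]](decision["input"])
        history.append(tool_output)      # feed the tool result back in


print(run_agent("Summarize our Q3 revenue docs"))
```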
Microsoft researchers show that “supersizing” model training works in robot brains, too
The language magic of ChatGPT emerged when researchers dramatically supersized the large language model and its training data. But language models are not the only game in town. Some other types of AI can also grow more intelligent with increased training. Microsoft researchers recently showed that scaling can also yield smarter forms of embodied AI—that is, AI that physically interacts with the world, such as robots and self-driving cars.
One of the biggest challenges in training a robot arm, for example, is teaching it to predict the likely outcomes of its next move. One way to do that is “world modeling,” in which the robot’s AI brain analyzes images, audio, and video of actions in its environment to build an internal model of the physical dynamics of that space. Another method, called “behavioral cloning,” involves training the AI by having it watch humans perform certain tasks within the environment.
In their study, the Microsoft researchers focused on gameplay within a complex multiplayer video game called Bleeding Edge, where players strategize and execute precise control actions during combat. They found that the AI got better at modeling the world and cloning behavior as it was given more video data to learn from and more computing power to process it. The researchers noted that the rate of improvement from adding more data and compute closely tracked the progress seen in training large language models.
People who build and train the AI models that power robots and self-driving cars can take a lesson from LLM training when making decisions about model size and training resources. Research suggests that the transformer architecture invented at Google in 2017 has a unique ability to grow more intelligent with more pretraining, whether it is used for language generation, image generation, or other types of AI.
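For readers who want the quantitative intuition, LLM scaling-law studies (for example, Kaplan et al., 2020; this is the general literature, not the Microsoft paper itself) describe that improvement as a power law in which loss falls smoothly as training compute grows. The constant and exponent below are illustrative only.

```latex
% Power-law scaling of loss L with training compute C.
% Form follows the LLM scaling-law literature; C_0 and alpha are illustrative.
L(C) \approx \left(\frac{C_0}{C}\right)^{\alpha}, \qquad \alpha \approx 0.05
```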
Trump is reportedly considering an “AI czar” for the White House
The incoming Trump administration is seriously considering adding an AI czar to the White House staff, Axios reports. The czar would advise government agencies on their use of AI over the next four years and could shape government policy on AI in the private sector.
Elon Musk and Vivek Ramaswamy, who will lead Trump’s Department of Government Efficiency, will reportedly help select the czar. That’s partly because the incoming administration believes AI could be used to help detect and eliminate government waste and fraud, including entitlement fraud.
Bloomberg reported last week that Trump’s team is also looking to install a crypto czar in the White House, and that his transition team has been evaluating cryptocurrency executives for the role.
More AI coverage from Fast Company:
- How Scale became the leading AI training company
- 10 ways AI can make farmers’ jobs easier
- This TikTok user claims a popular YouTube music channel is actually AI
- 5 ways filmmakers are using AI to create new beauty
Looking for exclusive reporting and trend analysis on technology, business innovation, the future of work, and design? Sign up for Fast Company Premium.