
How To Steal An AI Model Without Hacking Anything

Artificial intelligence models can be stolen without hacking the device they run on, provided you can sniff out the model's electromagnetic signature. Although they repeatedly emphasize that they do not, in fact, want to help people attack neural networks, researchers from North Carolina State University describe such a technique in a new paper. All they needed was an electromagnetic probe, several pre-trained, open-source AI models, and a Google Edge Tensor Processing Unit (TPU). Their method involves analyzing the electromagnetic emissions given off while the TPU chip is actively running inference.

“It’s very expensive to build and train a neural network,” said the study’s lead author and NC State Ph.D. student Ashley Kurian in a call with Gizmodo. “It’s intellectual property owned by a company, and it takes a lot of time and computing resources. For example, ChatGPT is made of billions of parameters, which is kind of a secret. If someone steals it, ChatGPT is theirs. You know, they don’t have to pay for it, and they can also sell it.”

Theft is already a major concern in the AI world, though it usually runs in the other direction: AI developers train their models on copyrighted works without permission from their human creators. That pattern has sparked lawsuits and even tools designed to help artists fight back by “poisoning” art generators.

“The electromagnetic data from the sensor essentially gives us a ‘signature’ of the AI’s processing behavior,” Kurian explained in a statement, calling it “the easy part.” But to work out the model’s hyperparameters, its architecture and defining details, they had to compare the electromagnetic field data with data captured while other AI models ran on the same type of chip.

By doing so, they “were able to determine the architecture and specific characteristics, known as layer details, we would need to make a copy of the AI model,” explained Kurian, who added that they could do so “with 99.91% accuracy.” To pull this off, the researchers had physical access to the chip both for probing and for running the other models. They also worked directly with Google to help the company determine how vulnerable its chips are.
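To make that comparison step more concrete, here is a minimal, hypothetical sketch in Python of how a captured electromagnetic trace segment might be matched against reference traces recorded from known layer configurations on the same kind of chip. It illustrates the general idea described in the paper coverage above, not the researchers’ actual tooling, and every name in it is invented for the example.

```python
# Conceptual sketch only -- not the NC State researchers' pipeline.
# Idea: compare a captured electromagnetic (EM) trace segment against
# reference traces recorded while known layer types ran on the same chip,
# and pick the closest match. All names here are hypothetical.

import numpy as np


def normalized_correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity score between two equal-length EM trace segments."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.dot(a, b) / len(a))


def guess_layer(captured: np.ndarray,
                reference_traces: dict[str, np.ndarray]) -> str:
    """Return the label of the reference trace that best matches the capture."""
    return max(reference_traces,
               key=lambda label: normalized_correlation(
                   captured, reference_traces[label]))


if __name__ == "__main__":
    # Hypothetical reference library: traces captured while known layer
    # configurations executed on the target chip (random stand-ins here).
    rng = np.random.default_rng(0)
    refs = {
        "conv3x3_64ch": rng.normal(size=1000),
        "conv1x1_128ch": rng.normal(size=1000),
        "dense_512": rng.normal(size=1000),
    }
    # A noisy capture of an unknown layer; in this toy case it is dense_512.
    unknown = refs["dense_512"] + 0.1 * rng.normal(size=1000)
    print(guess_layer(unknown, refs))  # expected: "dense_512"
```

In practice such a comparison would need far more signal processing (alignment, filtering, segmentation per layer), but the core logic of matching observed emissions against a library of known signatures is what the quoted description suggests.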

Kurian speculated that capturing models running on smartphones, for example, should also be possible, though their compact design would make it harder to monitor the electromagnetic signals.

“Side-channel attacks on edge devices are nothing new,” Mehmet Sencan, an AI standards security researcher at the nonprofit Atlas Computing, told Gizmodo. But this particular technique for “extracting entire model architecture hyperparameters is significant.” Because AI hardware “performs inference in plaintext,” Sencan explained, “anyone deploying their models on edge devices or on any physically unsecured server would have to assume their architectures can be extracted through extensive probing.”

