Researchers Reveal AI Model Theft via Electromagnetic Side‑Channel
A team led by KAIST has demonstrated that artificial‑intelligence models can be reverse‑engineered by capturing the faint electromagnetic emissions GPUs give off during normal operation. Using a small antenna hidden in a bag, the researchers collected traces from as far as six meters away, even through walls, and reconstructed key architectural details of AI systems with high accuracy. The technique, called ModelSpy, highlights a new physical‑layer vulnerability that bypasses traditional software and network defenses, raising concerns for companies that treat AI model designs as core intellectual property.
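The report does not describe ModelSpy's internals, but the general pattern behind such profiling‑based side‑channel attacks can be shown in miniature. The Python sketch below is a hypothetical toy, not the researchers' method: it fingerprints synthetic "traces" by their frequency content, builds reference profiles for known layer types, and matches unknown segments against those profiles. The layer names, the synthetic_trace generator, and all numeric values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer types and dominant switching frequencies
# (cycles per sample) -- invented values for illustration only.
LAYER_FREQ = {"conv": 0.002, "dense": 0.006, "attention": 0.012}

def synthetic_trace(layer_type, n=2048):
    """Stand-in for a captured EM trace segment; a real attack would
    record these with a radio receiver, not generate them."""
    t = np.arange(n)
    signal = np.sin(2 * np.pi * LAYER_FREQ[layer_type] * t)
    return signal + 0.5 * rng.standard_normal(n)  # measurement noise

def spectral_fingerprint(trace, k=32):
    """Summarize a trace by its first k FFT magnitude bins, normalized."""
    spectrum = np.abs(np.fft.rfft(trace))[:k]
    return spectrum / (np.linalg.norm(spectrum) + 1e-12)

# Profiling phase: average fingerprints of traces from known layer types.
centroids = {
    lt: np.mean([spectral_fingerprint(synthetic_trace(lt)) for _ in range(50)], axis=0)
    for lt in LAYER_FREQ
}

def classify(trace):
    """Assign an unknown trace segment to the nearest known fingerprint."""
    f = spectral_fingerprint(trace)
    return min(centroids, key=lambda lt: np.linalg.norm(f - centroids[lt]))

# Extraction phase: recover a plausible layer sequence from captured segments.
captured = [synthetic_trace(lt) for lt in ["conv", "conv", "attention", "dense"]]
print([classify(seg) for seg in captured])  # -> ['conv', 'conv', 'attention', 'dense']
```

In this toy setup the "attacker" only needs passively captured traces plus a profiling set from comparable hardware, which is why physical‑layer leakage of this kind is hard to address with software or network defenses alone.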