Hackers flooding Gemini AI with prompts to copy model capabilities
- By Web Desk -
- Feb 13, 2026

Google disclosed in a new Threat Tracker report on Thursday that hackers are increasingly engaging in sophisticated “distillation attacks” on the Gemini AI model to steal its underlying technology.
The company revealed one specific instance where adversaries used over 100,000 AI-generated prompts to systematically probe the model, a technique known as “model extraction.” Google attributes this rise in AI-based espionage to adversaries based in countries such as China, Russia, and North Korea.
These distillation attacks involve using legitimate access to flood a mature machine learning model with queries, effectively extracting enough data to replicate its capabilities in new models, often in different languages.
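At a high level, the workflow described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration only: the teacher model here is a placeholder function standing in for a hosted LLM API, and the prompt set stands in for the roughly 100,000 generated prompts the report describes.

```python
# Illustrative sketch of a "distillation" workflow: an attacker with
# legitimate API access queries a teacher model at scale and records
# prompt/response pairs as training data for a copycat "student" model.

def teacher_model(prompt: str) -> str:
    # Placeholder for a hosted LLM endpoint (hypothetical); a real
    # attack would call the target model's API here.
    return f"answer({prompt})"

def harvest_pairs(prompts):
    """Flood the teacher with queries and record its outputs."""
    return [(p, teacher_model(p)) for p in prompts]

# Generated probe prompts stand in for the ~100,000 used in the attack.
probes = [f"probe question {i}" for i in range(5)]
dataset = harvest_pairs(probes)

# The resulting (prompt, response) dataset would then be used to
# fine-tune a smaller student model that mimics the teacher's behavior.
print(len(dataset), dataset[0])
```

Defenses against this pattern typically involve rate limiting, anomaly detection on query volume and diversity, and terms-of-service enforcement rather than changes to the model itself.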
The report clarified that while this activity does not pose a direct security threat to everyday users, it presents a significant danger to service providers and model developers by compromising their intellectual property.
John Hultquist, chief analyst for the Google Threat Intelligence Group, warned that this is just the beginning. “We’re going to be the canary in the coal mine for far more incidents,” Hultquist told NBC News, suggesting that while Google may be the first target, other major AI developers will inevitably face similar theft attempts.
This disclosure comes amid increasing global competition in artificial intelligence. Chinese companies like ByteDance are launching advanced video generation tools, challenging U.S. dominance.
Last year, Chinese firm DeepSeek disrupted the industry by introducing a model that rivaled top American technologies, leading OpenAI to accuse the company of using similar extraction methods to train its AI.
Google’s latest report confirms that such tactics are becoming standard weapons in the escalating race for AI dominance.