Hacker Seized Laptop Through AI App With Zero Clicks

A critical vulnerability was recently discovered in “Orchids,” a rapidly expanding “vibe-coding” tool that allows users to create applications just by typing instructions into a chatbot. This isn’t theoretical: a hacker managed to exploit the flaw to instantly seize control of a BBC journalist’s laptop.

The breach took only seconds, required no downloads or clicks, and gave no warning — demonstrating a serious security flaw in the popular AI coding platform.

Cybersecurity researcher Etizaz Mohsin demonstrated the vulnerability by targeting a test project on the reporter’s spare machine. He made a small modification to the AI-generated code, which the platform accepted and executed without question. Moments later, a file appeared on the desktop, and the wallpaper changed to a skull-and-robot design displaying the message: “You are hacked.”

A Zero-Click Threat

Mohsin used a “zero-click” exploit that operated entirely within the trusted AI project, bypassing traditional attack methods such as malicious links or file downloads, and gained remote access to the machine. Once inside, he could view files, install surveillance software, or potentially even activate the machine’s cameras and microphones.

“The whole proposition of having the AI handle things for you comes with big risks,” Mohsin said. He reported the issue weeks ago. Orchids, a startup founded in 2025 with approximately one million users, later acknowledged it may have missed earlier warnings because its small team was overwhelmed.

Experts warn that AI agents pose new security threats. Ulster University Professor Kevin Curran noted that AI-generated projects often lack rigorous testing, allowing hidden vulnerabilities to spread across numerous builds. And because “agentic AI” carries out complex commands on a user’s device with minimal oversight, a single defect can compromise the entire system.

Practical Advice

As AI coding tools proliferate, security controls must evolve to mitigate the unseen risks of effortless creation. Security experts advise users to take precautions: run experimental AI tools on isolated machines, use limited or disposable accounts, and carefully review all permissions before granting an AI full system access.