Yotam Gutman
24.3.2026
Anthropic’s Claude can now control your PC remotely. Read Zeroport's deep dive into the massive security risks, critical CVEs, and how hackers hijack AI agents
The era of AI as a passive conversationalist is officially over. Today, we are firmly in the age of agentic AI—systems designed to run locally, integrate tightly with developer environments, and execute complex workflows with minimal human intervention. Anthropic has been leading this charge, recently rolling out a highly anticipated "computer use" feature for Claude.
The premise is incredibly seductive: dispatch a task from your mobile phone while grabbing coffee, and watch as Claude takes over your unattended Mac or PC—opening apps, clicking through browsers, reading local files, and writing code as if it were sitting in your chair.
The productivity implications are massive. But by granting an AI the ability to physically mimic human mouse movements and keystrokes on a remote device, we are actively dismantling decades of endpoint security. Here is why turning your AI assistant into a remote-controlled digital ghost is a security nightmare, and how attackers are already weaponizing these models.
When you hear "remote control," you might think of traditional RDP or VNC protocols, which hackers scan for on open ports. Claude’s architecture is different; it relies on an outbound HTTPS connection to Anthropic's cloud, bound to your identity session.
However, this doesn't eliminate the risk—it just shifts the attack surface. If an attacker hijacks your Anthropic account (via phishing, malware, or session cookie theft), they gain absolute control over your desktop.
Once an attacker has control of the agent's decision-making loop, Claude's ability to interact with the Graphical User Interface (GUI) allows it to bypass security boundaries designed specifically to stop automated malware:
As this technology matures, the trajectory naturally points toward an "any-device-to-any-device" control plane. When a desktop in one location can command a PC or Mac in another, the implications for remote hacking fundamentally shift. It essentially turns a productivity enhancement into a universal, pre-installed Remote Access Trojan (RAT).
With a cross-platform AI agent acting as the attacker's proxy, the attack "kill chain" becomes terrifyingly short and efficient. Here is how a modern, AI-facilitated hacking sequence unfolds:
When the developer returns to their desk, their screen looks exactly as they left it—but the corporate database is gone.
If you think hijacking an agent is difficult, recent security disclosures prove otherwise. Local AI tools are highly susceptible to manipulation.
Researchers at Check Point recently exposed critical vulnerabilities (CVE-2025-59536 and CVE-2026-21852) in Claude Code. They demonstrated how malicious repository-level configuration files—often cloned blindly by developers—could trigger silent Remote Code Execution (RCE) and hijack Anthropic API keys before the user even granted consent.
Similarly, SentinelOne detailed CVE-2025-58764, an RCE flaw stemming from improper command parsing. By injecting untrusted content into Claude's context window, attackers could completely bypass the built-in confirmation prompts, forcing the AI to execute arbitrary code.
Malicious actors are already abusing these capabilities at scale. Anthropic themselves recently warned that cybersecurity has reached a "critical inflection point", noting that Chinese state-sponsored hackers used Claude to autonomously perform 80-90% of an espionage campaign. Even more devastating, attackers posing as bug bounty testers recently jailbroke Claude Code to orchestrate a massive breach of Mexican government agencies. Over the course of a month, the manipulated AI automated exploit writing and exfiltrated over 150GB of sensitive records, exposing nearly 195 million identities.
The transition to agentic, remote-controlled AI requires a fundamental rethink of Remote Access. When software can mimic a human, "human-in-the-loop" safeguards are no longer enough and secured connectivity becomes paramount.
If your organization is experimenting with Claude's computer-use features or local coding agents, you must limit their reach. Agents should be heavily sandboxed in dedicated Virtual Machines or AppContainers, stripped of broad administrative rights, and strictly isolated from your primary production credentials and internal databases. Ensure Secured, non-IP remote access for users and agents.
The AI assistant is evolving into an autonomous operator. Make sure you aren't handing it the keys to your entire infrastructure.
Empower global teams with secure, hardware-enforced remote access, no VPNs, no data exposure, no risk.