Anthropic on Monday launched the most ambitious consumer AI agent to date, giving its Claude chatbot the ability to directly control a user's Mac — clicking buttons, opening applications, typing into fields, and navigating software on the user's behalf while they step away from their desk.
The update, available immediately as a research preview for paying subscribers, transforms Claude from a conversational assistant into something closer to a remote digital operator. It arrives inside both Claude Cowork, the company's agentic productivity tool, and Claude Code, its developer-focused command-line agent. Anthropic is also extending Dispatch — a feature introduced last week that lets users assign Claude tasks from a mobile phone — into Claude Code for the first time, creating an end-to-end pipeline where a user can issue instructions from anywhere and return to a finished deliverable.
The move thrusts Anthropic into the center of the most heated competition in artificial intelligence: the scramble to build agents that can act, not just talk. OpenAI, Google, Nvidia, and a growing swarm of startups are all chasing the same prize — an AI that operates inside your existing tools rather than beside them. And the stakes are no longer theoretical. Reuters reported Sunday that OpenAI is actively courting private equity firms in what it described as an "enterprise turf war with Anthropic," a battle in which the ability to ship working agents is fast becoming the decisive weapon.
The new features are available to Claude Pro subscribers (starting at $17 per month) and Max subscribers ($100 or $200 per month), but only on macOS for now.
Inside Claude's computer use: How Anthropic's AI agent decides when to click, type, and navigate your Mac
The computer use feature works through a layered priority system that reveals how Anthropic is thinking about reliability versus reach.
When a user assigns Claude a task, it first checks whether a direct connector exists — integrations with services like Gmail, Google Drive, Slack, or Google Calendar. These connectors are the fastest and most reliable path to completing a task, according to Anthropic's documentation. If no connector is available, Claude falls back to navigating the Chrome browser via Anthropic's Claude for Chrome extension. Only as a last resort does Claude interact directly with the user's screen — clicking, typing, scrolling, and opening applications the way a human operator would.
This hierarchy matters. As Anthropic's help center documentation explains, "pulling messages through your Slack connection takes seconds, but navigating Slack through your screen takes much longer and is more error-prone." Screen-level interaction is the most flexible mode — it can theoretically work with any application — but it is also the slowest and most fragile.
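The hierarchy Anthropic describes can be sketched in pseudocode. This is a minimal illustration of the decision order only; the class and function names are hypothetical, not Anthropic's actual implementation.

```python
# Illustrative sketch of the connector -> browser -> screen priority order
# Anthropic describes. All names here are assumptions for illustration.

class Connector:
    """A direct integration such as Gmail, Slack, or Google Drive."""
    def __init__(self, name, services):
        self.name = name
        self.services = set(services)

    def can_handle(self, task):
        return task["service"] in self.services


def choose_execution_path(task, connectors, browser_available):
    """Pick the fastest available way to complete a task."""
    for connector in connectors:
        if connector.can_handle(task):
            return ("connector", connector.name)  # fastest, most reliable
    if browser_available:
        return ("browser", None)  # Claude for Chrome extension
    return ("screen", None)  # screenshots and clicks: the last resort
```

Under this model, a Slack task routes through the Slack connector in seconds, while a task with no matching connector or browser path falls all the way back to screen-level interaction.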
When Claude does interact with the screen, it takes screenshots of the user's desktop to understand what it's looking at and determine how to navigate. That means Claude can see anything visible on the screen, including personal data, sensitive documents, or private information. Anthropic trains Claude to avoid engaging in stock trading, inputting sensitive data, or gathering facial images, but the company is candid that "these guardrails are part of how Claude is trained and instructed, but they aren't absolute."
There is nothing to configure. No API keys, no terminal setup, no special permissions beyond what the user grants on a per-app basis. As Ryan Donegan, who handles communications for Anthropic, put it in a press briefing: "Download the app and it uses what's already on your machine."
Claude Dispatch turns your iPhone into a remote control for AI-powered desktop automation
The real strategic play may not be computer use itself but how Anthropic is pairing it with Dispatch.
Dispatch, which launched last week for Cowork and now extends to Claude Code, creates a persistent conversation between Claude on your phone and Claude on your desktop. A user pairs their mobile device with their Mac by scanning a QR code, and from that point forward, they can text Claude instructions from anywhere. Claude executes those instructions on the desktop — which must remain awake and running the Claude app — and sends back the results.
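Anthropic has not published how the QR pairing works under the hood, but the general pattern is familiar: the desktop encodes a one-time token in the QR code, the phone scans it, and both sides derive the same session identifier. The sketch below is purely a conceptual illustration of that pattern, not Anthropic's protocol.

```python
# Hypothetical QR-pairing sketch. The token format, derivation scheme, and
# function names are all assumptions; Anthropic has not documented its protocol.
import secrets
import hashlib


def desktop_generate_pairing_token():
    """One-time secret the desktop embeds in the QR code it displays."""
    return secrets.token_urlsafe(32)


def derive_session_id(token: str) -> str:
    """Both devices derive the same session id from the scanned token."""
    return hashlib.sha256(token.encode()).hexdigest()[:16]


# The phone scans the QR code, recovers the token, and derives the same id,
# linking the mobile conversation to the desktop session.
token = desktop_generate_pairing_token()
phone_session = derive_session_id(token)
desktop_session = derive_session_id(token)
```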
The use cases Anthropic envisions range from mundane to ambitious: having Claude check your email every morning, pull weekly metrics into a report template, organize a cluttered Downloads folder, or even compile a competitive analysis from local files and connected tools into a formatted document. Scheduled tasks allow users to set a cadence once — "every Friday," "every morning" — and let Claude handle the rest without further prompting.
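A cadence like "every Friday" or "every morning" ultimately has to resolve to a concrete next run time. The sketch below shows one way that resolution could work; the 8 a.m. default and the parsing logic are assumptions for illustration, not Anthropic's scheduler.

```python
# Illustrative cadence resolver. The "every ..." phrases come from Anthropic's
# examples; the 8:00 run time and parsing approach are assumptions.
from datetime import datetime, timedelta

WEEKDAYS = {"monday": 0, "tuesday": 1, "wednesday": 2, "thursday": 3,
            "friday": 4, "saturday": 5, "sunday": 6}


def next_run(cadence: str, now: datetime) -> datetime:
    """Map a cadence like 'every Friday' to the next scheduled run."""
    cadence = cadence.lower().removeprefix("every ").strip()
    if cadence == "morning":
        run = now.replace(hour=8, minute=0, second=0, microsecond=0)
        return run if run > now else run + timedelta(days=1)
    target = WEEKDAYS[cadence]
    days_ahead = (target - now.weekday()) % 7 or 7  # next occurrence, not today
    return (now + timedelta(days=days_ahead)).replace(
        hour=8, minute=0, second=0, microsecond=0)
```

Once the cadence is set, the scheduler only needs to fire at the computed time and hand the saved instruction back to the agent.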
Anthropic's blog post frames the combination of Dispatch and computer use as something of a paradigm shift. "Claude can use your computer on your behalf while you're away," the company wrote, offering examples like creating a morning briefing while a user commutes, making changes in an IDE, running tests, and submitting a pull request.
One early user on social media captured the broader ambition succinctly. Gagan Saluja, who describes himself as working with Claude and AWS, wrote: "combine this with /schedule that just dropped and you've basically got a background worker that can interact with any app on a cron job. that's not an AI assistant anymore, that's infrastructure."
First hands-on tests reveal Claude's computer use works about half the time — and that may be the point
Anthropic is calling this a research preview for a reason. Early hands-on testing suggests the feature works well for information retrieval and summarization but struggles with more complex, multi-step workflows — particularly those that require interacting with multiple applications.
John Voorhees of MacStories, the Apple-focused publication, published a detailed hands-on evaluation of Dispatch the same day as the announcement. His results were mixed. Claude successfully located a specific screenshot on his Mac, summarized the most recent note in his Notion database, listed notes saved that day, added a URL to Notion, summarized his most recently received email, and recalled a screenshot from earlier in the session. But it failed to open the Shortcuts app on his Mac, send a screenshot via iMessage, list unfinished Todoist tasks (due to an authorization error), list Terminal sessions, display a food order from an active Safari tab, or fetch a URL from Safari using AppleScript.
Voorhees' verdict was measured: Dispatch "can find information on your Mac and works with Connectors, but it's slow and about a 50/50 shot whether what you try will work." He added that it is "not good enough to rely on when you're away from your desk" but called it "a step in the right direction."
Meanwhile, on GitHub, users are already surfacing technical issues. One bug report filed against Claude Code describes a scenario where the Read tool attempts to process multiple large PDF files in a single turn without checking whether the combined payload exceeds the 20MB API limit, causing the request to fail outright. The issue, which has been tagged as a bug specific to macOS, highlights the kinds of rough edges that come with shipping an early preview of a complex agentic system.
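The fix the bug report implies is a pre-flight size check: sum the attachments before sending rather than letting the API reject the oversized request. The sketch below shows one way to batch files under the cited 20MB ceiling; the function shape is illustrative, not code from Claude Code.

```python
# Sketch of the pre-flight check the GitHub report suggests is missing.
# The 20MB figure comes from the bug report; everything else is illustrative.

MAX_PAYLOAD_BYTES = 20 * 1024 * 1024  # 20MB API limit cited in the report


def batch_files(file_sizes, limit=MAX_PAYLOAD_BYTES):
    """Split files into batches whose combined size stays under the limit."""
    batches, current, current_size = [], [], 0
    for size in file_sizes:
        if size > limit:
            raise ValueError(f"single file of {size} bytes exceeds the limit")
        if current_size + size > limit:
            batches.append(current)  # flush the batch before it overflows
            current, current_size = [], 0
        current.append(size)
        current_size += size
    if current:
        batches.append(current)
    return batches
```

With a check like this, three large PDFs would go out as two requests instead of one failed one.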
OpenClaw, NemoClaw, and the startup swarm: Why Anthropic is racing to ship AI computer use now
Anthropic's timing is not accidental. The company is shipping computer use capabilities into a market that has been rapidly reshaped by the viral rise of OpenClaw, the open-source framework that enables AI models to autonomously control computers and interact with tools.
OpenClaw exploded earlier this year and proved that users wanted AI agents capable of taking real actions on their computers — and that they were willing to tolerate rough edges to get them. The framework spawned an entire ecosystem of derivative tools — what the community calls "claws" — that turned autonomous computer control from a research curiosity into a product category almost overnight. Nvidia entered the fray last week with NemoClaw, its own framework designed to simplify the setup and deployment of OpenClaw with added security controls. Anthropic is now entering a market that the open-source community essentially created, betting that its advantages — tighter integration, a consumer-friendly interface, and an existing subscriber base — can compete with free.
Smaller startups are also pushing into the space. Coasty, which offers both a desktop app and a browser-based AI agent for Mac and Windows, markets itself as providing "full browser, desktop, and terminal automation with a native experience." One user on social media directly pitched Coasty in the replies to Anthropic's announcement, claiming it offers "much better user experience and more accurate" results — a sign of how crowded and competitive the computer-use agent space has become in a matter of months.
The competitive dynamics extend beyond just computer use. Reuters has reported that OpenAI is sweetening its pitch to private equity firms amid what the wire service described as an "enterprise turf war with Anthropic." The two companies are locked in an escalating battle for enterprise customers, and the ability to offer agents that can actually operate within a company's existing software stack — not just chat about it — is increasingly the differentiator.
Prompt injection, screenshot surveillance, and the unsolved security risks of letting AI control your desktop
If the competitive pressure explains why Anthropic shipped this feature now, the safety caveats explain why the company is hedging its bets.
Computer use runs outside the virtual machine that Cowork normally uses for file operations and commands. That means Claude is interacting with the user's actual desktop and applications — not an isolated sandbox. The implications are significant: a misclick, a misunderstood instruction, or a prompt injection attack could have real consequences on a user's live system.
Anthropic has built several layers of defense. Claude requests permission before accessing each application. Some sensitive apps — investment platforms, cryptocurrency tools — are blocked by default. Users can maintain a blocklist of applications Claude is never allowed to touch. The system scans for signs of prompt injection during computer use sessions. And users can stop Claude at any point.
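Taken together, those layers form a gatekeeping check that runs before Claude touches any application. The sketch below models that ordering — user blocklist, default-blocked categories, then per-app permission — as a single function. The category names and return shape are assumptions, not Anthropic's implementation.

```python
# Hedged sketch of the layered access checks Anthropic describes.
# Category names, ordering details, and the App type are illustrative.
from collections import namedtuple

App = namedtuple("App", ["name", "category"])

# Sensitive categories Anthropic says are blocked by default.
DEFAULT_BLOCKED = {"investment", "cryptocurrency"}


def may_access(app, user_blocklist, granted_permissions):
    """Return (allowed, reason) for a requested application."""
    if app.name in user_blocklist:
        return False, "on the user's blocklist"
    if app.category in DEFAULT_BLOCKED:
        return False, f"{app.category} apps are blocked by default"
    if app.name not in granted_permissions:
        return False, "awaiting per-app permission from the user"
    return True, "allowed"
```

The key design point is that denial is the default: an app reaches "allowed" only after clearing every layer, and the user's explicit blocklist outranks everything else.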
But the company is remarkably forthright about the limits of these protections. "Computer use is still early compared to Claude's ability to code or interact with text," Anthropic's blog post states. "Claude can make mistakes, and while we continue to improve our safeguards, threats are constantly evolving."
The help center documentation goes further, explicitly warning users not to use computer use to manage financial accounts, handle legal documents, process medical information, or interact with apps containing other people's personal information. Anthropic also advises against using Cowork for HIPAA, FedRAMP, or FSI-regulated workloads.
For enterprise and team customers, there is an additional wrinkle. Cowork conversation history is stored locally on the user's device, not on Anthropic's servers. But critically, enterprise features like audit logs, compliance APIs, and data exports do not currently capture Cowork activity. This means that organizations subject to regulatory oversight have no centralized record of what Claude did on a user's machine — a gap that could be a dealbreaker for compliance-sensitive industries.
One user flagged this concern on social media with particular precision. NomanInnov8 wrote: "when the agent IS the user (same mouse, keyboard, screen), traditional forensic markers won't distinguish human vs AI actions. How are we thinking about audit trails here?"
The question is not academic. As AI agents gain the ability to take real-world actions — sending emails, modifying files, interacting with financial systems — the ability to distinguish between human and machine actions becomes a foundational requirement for governance, liability, and compliance. Anthropic has not yet answered it.
From excitement to anxiety: How users are reacting to Claude's new power over their machines
The social media reaction to the announcement split roughly into three camps: those excited about the productivity implications, those concerned about the security risks, and those frustrated that they cannot yet use it.
The enthusiasm was genuine and widespread. "Legit just got the update and used it with dispatch — exactly the feature I wanted," wrote one X user. Mike Joseph called the speed of Anthropic's feature releases "fantastic." Another X user noted the significance for non-technical users: "Very exciting for non-tech folks who don't want or know how to set up OpenClaw."
But the security concerns were equally pointed. One user, posting as Profannyti, wrote: "Granting that kind of control over your personal device doesn't sit right. It's almost like letting someone you barely know take the wheel and trusting everything will be fine."
As Engadget reported, experts have warned that one major concern with agentic AI is that "it can take major, sometimes dramatic actions quickly and with little warning," and that such tools "can also be hijacked by malicious actors."
Several users flagged practical frustrations as well. Windows users — excluded from the macOS-only research preview — expressed predictable dismay. Others reported that the new features were consuming their usage quotas at alarming rates. One Max 20x subscriber paying $200 per month complained that Dispatch was "eating my quota like crazy," consuming 10% of their allowance in a single prompt. Another user linked to the GitHub bug report about the 20MB payload issue, calling the situation "quite urgent."
Anthropic's enterprise playbook: Plugins, pricing tiers, and the bet that AI agents can replace entire workflows
The pricing structure reveals where Anthropic sees the real market. While individual Pro users get access to Cowork, the company notes that agentic tasks "consume more capacity than regular chat" because "Claude coordinates multiple sub-agents and tool calls to complete complex work." Heavy users are nudged toward Max plans at $100 or $200 per month.
For teams, the pricing starts at $20 per seat per month for groups of five to 75 users. Enterprise pricing is custom and includes admin controls to toggle Cowork on or off for the organization.
The plugin architecture is where Anthropic's enterprise ambitions become clearest. Plugins bundle skills, connectors, and sub-agents into a single install that turns Claude into a domain specialist — for legal work, finance, brand voice management, or other functions. Anthropic already lists plugins for legal workflows (contract review, NDA triage), finance (journal entries, reconciliation, variance analysis), and brand voice (analyzing existing documents to enforce guidelines). The company is betting that the combination of computer use, Dispatch, scheduled tasks, and domain-specific plugins will create an agent capable enough to justify enterprise procurement.
The testimonials Anthropic has gathered suggest the pitch is landing with at least some organizations. Larisa Cavallaro, identified as an AI Automation Engineer, described connecting Cowork to her company's tech stack and asking it to identify engineering bottlenecks. Claude, she said, returned "an interactive dashboard, team-by-team efficiency analyses, and a prioritized roadmap." Joel Hron, a CTO, offered a more philosophical framing: "The human role becomes validation, refinement, and decision-making. Not repetitive rework."
The AI industry's defining tension: Shipping fast enough to win, slow enough to be safe
Anthropic is shipping these capabilities at a moment of extraordinary velocity in the AI industry — and extraordinary uncertainty about what that velocity means.
The company's own research quantifies the transformation underway. Its economic index, published in March 2026, tracks how AI is reshaping labor markets and productivity across sectors. The data suggests that AI adoption is accelerating unevenly, with knowledge workers in technology, finance, and professional services seeing the most dramatic shifts.
Anthropic is also navigating significant external pressures beyond the product arena. Recent reporting has highlighted scrutiny from Senator Elizabeth Warren regarding Anthropic's defense and supply chain relationships — a reminder that the company's ambitions to build powerful autonomous agents exist within an increasingly complex political and regulatory environment.
For now, the computer use feature remains early and imperfect. Complex tasks sometimes require a second attempt. Screen interaction is meaningfully slower than direct integrations. The audit trail gap for enterprise users is a genuine liability. And the fundamental tension between giving an AI agent enough access to be useful and limiting that access enough to be safe remains unresolved.
But Anthropic is not waiting for perfection. The company is building in public, shipping capabilities it openly describes as incomplete, and betting that users will tolerate a 50 percent success rate today in exchange for the promise of something transformative tomorrow. It is a calculation that only works if the failures remain minor — a missed click, a stalled task, an unread email. The moment a failure isn't minor, the calculus changes entirely.
The AI industry has spent the last three years proving that machines can think. Anthropic is now asking a harder question: whether humans are ready to let them act. The answer, for the moment, is a provisional yes — hedged with permissions dialogs, blocklists, and the quiet hope that nothing important gets deleted before the technology catches up to the ambition.
