Anthropic’s Claude Code gets ‘safer’ auto mode
The feature is a middle ground between cautious handholding and dangerous levels of autonomy.

Anthropic has launched an “auto mode” for Claude Code, a new feature that lets the AI make permissions-level decisions on users’ behalf. The company says it offers vibe coders a safer middle ground between constant handholding and giving the model dangerous levels of autonomy.


Claude Code is capable of acting independently on users’ behalf, a useful but risky capability, since it can also do things users don’t want, like deleting files, sending out sensitive data, or executing malicious code and hidden instructions. Auto mode is designed to prevent this, flagging and blocking potentially risky actions before they run and giving the agent a chance to try again or ask the user to intervene.
Right now, auto mode is only available as a research preview for Team plan users. Anthropic says access will expand to include Enterprise and API users in “the coming days.”
Anthropic warns the tool is experimental and “doesn’t eliminate” risk entirely, recommending developers use it in “isolated environments.”