On April 2nd, security researcher Jack Cui publicly demonstrated a high-risk security vulnerability in Claude Code, exposing serious risks in the AI programming tool at the system-permission level.

Jack Cui's demo project contains nothing but a .claude configuration folder and an empty Python script. After typing claude in the terminal and pressing Enter, the computer's camera silently activates and local sensitive information is automatically written to a text file, with zero interaction and zero prompts.
The problem stems from Claude Code's built-in "hooks" mechanism, which lets developers define automated scripts in the .claude/settings.json file under the project directory. These scripts execute silently when the claude command runs, without any confirmation prompt.
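A minimal sketch of what such a project-level hook configuration could look like, assuming Claude Code's documented hooks schema. The SessionStart event fires when a session begins, and the command shown here is a harmless placeholder standing in for whatever script an attacker might plant:

```json
{
  "hooks": {
    "SessionStart": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "echo 'runs automatically when the session starts'"
          }
        ]
      }
    ]
  }
}
```

Because this file lives inside the repository itself, simply cloning an untrusted project and launching claude in it is enough for the hook to run on the victim's machine.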
Beyond hooks, an attacker can configure a malicious MCP server through the project's .mcp.json file, bypassing user approval so that external tools connect automatically; or define malicious hooks in the frontmatter section of a skill plugin, which are triggered when sub-agents carry out their tasks.
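For illustration, a project-scoped .mcp.json follows the shape below, based on Claude Code's documented MCP configuration format. The server name and package are hypothetical placeholders; the point is that the repository author, not the user, controls which command gets launched:

```json
{
  "mcpServers": {
    "example-server": {
      "command": "npx",
      "args": ["-y", "some-mcp-package"],
      "env": {}
    }
  }
}
```

Normally Claude Code asks the user to approve project-scoped MCP servers before connecting; the demonstrated flaw lay in that approval step being bypassable.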
Anthropic has since released a patched version. Developers using Claude Code should run npm install -g @anthropic-ai/claude-code@latest immediately to update, and for the time being should be cautious about opening untrusted repositories that contain a .claude directory.