Anthropic unveils Mythos cybersecurity model weeks after Claude Code leak exposed security lapse
Crypto Briefing
4h ago
AI Focus
Anthropic launched Mythos and Project Glasswing days after a Claude Code leak exposed source files and caused a GitHub takedown mess.


Anthropic on Tuesday unveiled Claude Mythos Preview, a new frontier AI model built for cybersecurity work, and launched Project Glasswing, a partner program that gives a small group of major technology and infrastructure organizations early access to the system for defensive security tasks.

Anthropic said the unreleased model has already found thousands of high-severity vulnerabilities across major operating systems and web browsers, and described the effort as an urgent push to put advanced cyber capabilities in defenders’ hands before attackers gain similar tools.

Project Glasswing includes AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, Nvidia, and Palo Alto Networks, with more than 40 additional organizations also getting access. Anthropic said it is committing up to $100 million in usage credits and $4 million in donations to open source security groups as part of the program.

Anthropic said the model can identify vulnerabilities and develop related exploits with little or no human steering, and highlighted examples including a 27-year-old OpenBSD flaw, a 16-year-old FFmpeg bug, and chained Linux kernel vulnerabilities that could escalate ordinary user access into full machine control. Anthropic’s announcement says the relevant bugs have already been patched.

The Mythos launch lands barely a week after Anthropic created its own security mess by accidentally exposing Claude Code's source files through a packaging error in version 2.1.88 of its software. The mistake exposed nearly 2,000 files and more than 500,000 lines of code, then spiraled further when Anthropic's takedown effort accidentally hit around 8,100 GitHub repositories before the company reversed most of the notices.

Anthropic is now presenting Mythos as a model so capable and potentially dangerous that it will not be released broadly for now. In the Project Glasswing materials, the company says it does not plan to make Mythos Preview generally available and instead wants to develop safeguards first so Mythos class models can eventually be deployed more safely at scale.

A leaked internal document described Mythos as the company’s most capable model to date and as a meaningful step change in reasoning, coding, and cybersecurity. Anthropic has also been discussing the model’s offensive and defensive cyber implications with US government officials.

Disclosure: This article was edited by Estefano Gomez. For more information on how we create and review content, see our Editorial Policy.
Related
Anthropic has announced the creation of a legendary model: Claude Mythos, whose code and hacking skills far surpass those of Opus 4.6, but it will not be made available to the public!
Anthropic launched "Project Glasswing," uniting 12 organizations including Amazon, Google, and Microsoft to conduct cybersecurity defense operations around the new model Claude Mythos Preview. The model's code and reasoning capabilities surpass Claude Opus, and it has scanned mainstream systems and discovered thousands of zero-day vulnerabilities, including several high-risk vulnerabilities that had existed for over a decade, all of which have been patched.
Wall Street CN
·2026-04-08 05:55:44
Anthropic Spots 'Emotion Vectors' Inside Claude That Influence AI Behavior
Anthropic researchers say internal emotion-like signals shape how large language models make decisions.
Decrypt
·2026-04-04 21:23:17
OpenAI, Anthropic, Google Unite to Combat Model Copying in China
Rivals OpenAI, Anthropic PBC, and Alphabet Inc.’s Google have begun working together to try to clamp down on Chinese competitors extracting results from cutting-edge US artificial intelligence models to gain an edge in the global AI race.
Bloomberg
·2026-04-07 05:08:58
Claude Code's update backfired, with its depth of thought plummeting by 67%, making it "untrustworthy for handling complex engineering tasks"
Based on quantitative analysis of 6,852 session logs, AMD's AI Director, Stella Laurenzo, publicly accused Claude Code on GitHub of systematic degradation since February: a 67% drop in thinking depth, a 70% decrease in file read rate before code modifications, a surge of 173 instances of malicious behavior triggers, and a 122-fold increase in API costs. The official response attributed this to a lowered default thinking level, but users reported that the problem persisted even after manually raising it, leading to a serious crisis of trust and a significant loss of users.
Wall Street CN
·2026-04-07 16:08:19
Anthropic blocks "lobster": Starting at 3 PM on April 4th, Claude subscription account credits cannot be used for OpenClaw.
Anthropic stated that OpenClaw ("Lobster") remains available through additional usage plans (currently discounted) or a Claude API key. Affected users will receive compensatory credits and can apply for a refund. Analysts believe the adjustment signals a shift in Anthropic's business model toward pay-as-you-go pricing, and may also be a strategic move to steer users toward its own products.
Wall Street CN
·2026-04-04 11:54:35