Anthropic has announced a new flagship model, Claude Mythos, whose coding and hacking skills far surpass those of Opus 4.6, but it will not be made available to the public.
Wall Street CN
2h ago
AI Focus
Anthropic has launched "Project Glasswing," uniting 12 organizations including Amazon, Google, and Microsoft in cybersecurity defense operations built around its new model, Claude Mythos Preview. The model's coding and reasoning capabilities surpass Claude Opus, and it has scanned mainstream systems and discovered thousands of zero-day vulnerabilities, including several high-risk flaws that had existed for over a decade, all of which have been patched.


Anthropic announced a new project today: Project Glasswing. The impetus is that Anthropic has trained a powerful new model, Claude Mythos Preview, which is in fact the model mentioned in the CC (Claude Code) source code leak a few days ago.

The project was jointly launched by 12 organizations: Amazon AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, Nvidia, Palo Alto Networks, and Anthropic itself.

In layman's terms, the model is so powerful that it requires a secure testing mode: only approved organizations may use it internally, and it is not being released to the public. How powerful? The numbers speak for themselves: its coding and reasoning capabilities far surpass Opus 4.6.

[Benchmark charts: Code, Reasoning, and Search and Computer Use]

"Opus" literally means masterpiece; "Mythos" literally means myth. Anthropic's CEO and a host of senior figures from its partners have come out to endorse the project.

Anthropic has explicitly stated that it does not intend to make Claude Mythos Preview publicly available. Its long-term goal, however, is to let users safely use models of equivalent capability. To that end, it plans first to develop and validate the relevant security mechanisms on the upcoming Claude Opus model, iterating under controlled risk, and then gradually expand the rollout. A new version of Opus offering the corresponding capabilities may be released soon.

Let's take a closer look at what Project Glasswing actually is.

What did this model discover?

Over the past few weeks, Anthropic has used Claude Mythos Preview to scan the world's major operating systems, browsers, and other important software.

The result: thousands of previously unknown zero-day vulnerabilities, many of them classified as high-risk.

Several specific examples:

A vulnerability in OpenBSD that had existed for 27 years. OpenBSD is known for its security and is used to run critical infrastructure such as firewalls. The flaw allows an attacker to remotely crash a target machine simply by connecting to it.

A vulnerability in FFmpeg that had existed for 16 years. FFmpeg handles video encoding and decoding in countless software programs. The line of code in which the model found the flaw had been exercised five million times by automated testing tools without the bug ever being detected.

In the Linux kernel, the model autonomously discovered and linked multiple vulnerabilities, enabling attackers to escalate from ordinary user privileges to complete control of the entire machine.

All of the above vulnerabilities have been reported to the relevant software maintainers and have now been fixed. For the remaining vulnerabilities, Anthropic has published cryptographic hashes of the reports; full details will be released once fixes ship.
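Releasing hashes of unfixed vulnerability reports acts as a cryptographic commitment: the digest proves the report existed at publication time without revealing its contents, and the later-released report can be checked against it. A minimal sketch of the idea in Python (SHA-256, the nonce, and the report format here are assumptions; Anthropic has not described its exact scheme):

```python
import hashlib

def commit(report: bytes, nonce: bytes) -> str:
    """Hash a vulnerability report together with a random nonce, so the
    digest can be published without leaking anything about the report."""
    return hashlib.sha256(nonce + report).hexdigest()

def verify(report: bytes, nonce: bytes, digest: str) -> bool:
    """Once the fix ships and the report is released, anyone can confirm
    it matches the digest that was published earlier."""
    return commit(report, nonce) == digest

digest = commit(b"hypothetical vuln report", b"random-nonce")
assert verify(b"hypothetical vuln report", b"random-nonce", digest)
```

The nonce matters: without it, an attacker could brute-force candidate reports against the published digest, since vulnerability descriptions have relatively low entropy.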

Why do this?

Anthropic's assessment is that AI models have surpassed everyone except a few top human experts in their ability to discover and exploit software vulnerabilities.

The spread of this capability is a matter of when, not if.

Global cybercrime causes an estimated $500 billion in economic losses annually. Attacks targeting healthcare systems, energy infrastructure, and government agencies have caused substantial damage and pose a continued threat to civilian and military infrastructure.

AI has significantly reduced the cost, barriers to entry, and level of expertise required to launch such attacks.

Anthropic's logic is: rather than waiting for others to use this capability offensively first, it's better to proactively use it defensively.

How exactly should the plan be implemented?

Project Glasswing currently comprises two tiers.

The first tier consists of 12 founding partners who will gain access to Claude Mythos Preview to scan and patch vulnerabilities in their core systems, with a focus on areas such as local vulnerability detection, binary black-box testing, endpoint security, and penetration testing.

The second tier consists of over 40 other organizations that build or maintain critical software infrastructure, who will also gain access to the model to scan their own and open-source systems.

Anthropic has committed to providing up to $100 million in model usage credits. After the research preview period, it will offer participants commercial access to Claude Mythos Preview, priced at $25 per million input tokens and $125 per million output tokens, via the Claude API, Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry.
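At those rates, cost scales linearly with token counts, so a bill is easy to estimate (the session size in this example is made up for illustration):

```python
def api_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Cost at the stated rates: $25 per million input tokens,
    $125 per million output tokens."""
    return input_tokens / 1_000_000 * 25.0 + output_tokens / 1_000_000 * 125.0

# A hypothetical scan session: 2M input tokens, 400k output tokens.
print(api_cost_usd(2_000_000, 400_000))  # 100.0
```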

In addition, Anthropic donated $2.5 million through the Linux Foundation to Alpha-Omega and OpenSSF, and $1.5 million to the Apache Software Foundation, totaling $4 million, to support open-source software maintainers in addressing this new situation. Open-source software maintainers can apply for access through the Claude for Open Source project.

Next steps

Regarding information sharing, partners will share information and best practices to the greatest extent possible. Anthropic has committed to publicly releasing a research progress report within 90 days, including the number of vulnerabilities discovered, issues fixed, and improvements that can be disclosed.

In terms of policy recommendations, Anthropic will collaborate with major security organizations to develop practical recommendations in the following areas: vulnerability disclosure processes, software update processes, open source and supply chain security, secure software development lifecycle, regulated industry standards, scaling and automation of vulnerability classification, and patch automation.

This article is sourced from: AI Cambrian
