OpenAI, Anthropic, Google Unite to Combat Model Copying in China
Bloomberg

Rivals OpenAI, Anthropic PBC, and Alphabet Inc.’s Google have begun working together to try to clamp down on Chinese competitors extracting results from cutting-edge US artificial intelligence models to gain an edge in the global AI race.

The firms are sharing information through the Frontier Model Forum, an industry nonprofit that the three tech companies founded with Microsoft Corp. in 2023, to detect so-called adversarial distillation attempts that violate their terms of service, according to people familiar with the matter.

The rare collaboration underscores the severity of a concern raised by US AI companies that some users, especially in China, are creating imitation versions of their products that could undercut them on price and siphon away customers while posing a national security risk. US officials have estimated that unauthorized distillation costs Silicon Valley labs billions of dollars in annual profit, according to a person familiar with the findings who described them on condition of anonymity.

OpenAI confirmed it’s part of the information sharing effort on adversarial distillation through the Frontier Model Forum and pointed to a recent memo it sent to Congress on the practice, where it accused Chinese firm DeepSeek of trying to “free-ride on the capabilities developed by OpenAI and other US frontier labs.” Google, Anthropic, and the Frontier Model Forum declined to comment.

Distillation is a technique in which an older "teacher" AI model is used to train a newer "student" model that replicates the capabilities of the earlier system, often at a far lower cost than building an original model from scratch. Some forms of distillation are widely accepted and even encouraged by AI labs, such as when companies create smaller, more efficient versions of their own models, or allow outside developers to use distillation to build non-competitive technologies.
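
To make the teacher-student idea concrete, here is a minimal illustrative sketch of the standard distillation objective, in which a student is trained to match the teacher's temperature-softened output distribution. This is a generic textbook formulation, not the method of any company named in this article; the function names are invented for the example.

```python
import math

def softmax(logits, temperature=1.0):
    # Convert raw model scores into a probability distribution.
    # A higher temperature "softens" the distribution, exposing more of
    # the teacher's relative preferences between classes.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the softened teacher distribution (the "soft
    # targets") and the student's distribution: the quantity a student
    # model minimizes in order to mimic the teacher's behavior.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * (math.log(pi) - math.log(qi)) for pi, qi in zip(p, q))
```

The loss is zero when the student reproduces the teacher's distribution exactly and grows as the two diverge, which is why querying a proprietary model at scale can yield enough soft targets to train a cheap imitation.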

Read More: OpenAI Claims DeepSeek Distilled US Models to Gain an Edge

Yet distillation has been controversial when used by third parties — particularly in adversary nations like China or Russia — to replicate proprietary work without authorization. Leading US AI labs have warned that foreign adversaries could use the technique to develop AI models stripped of safety guardrails, such as limits that would prevent users from creating a deadly pathogen.

Most models made by Chinese labs are open weight, meaning that parts of the underlying AI system are publicly available for users to freely download and run on their own platforms, and therefore cheaper to use. That poses an economic challenge for US AI companies that have kept their models proprietary, betting that customers will pay for access to their products and help offset the hundreds of billions of dollars they’ve spent on data centers and other infrastructure.

Distillation first drew significant scrutiny in January 2025 in the weeks after DeepSeek’s surprise release of the R1 reasoning model that took the AI world by storm. Soon after, Microsoft and OpenAI investigated whether the Chinese startup had improperly exfiltrated large amounts of data from the US firm’s models to create R1, Bloomberg previously reported.

In February, OpenAI warned US lawmakers that DeepSeek had continued to use increasingly sophisticated tactics to extract results from US models, despite heightened efforts to prevent misuse of its products. OpenAI claimed in its memo to the House Select Committee on China that DeepSeek was relying on distillation to develop a new version of its breakthrough chatbot.

Information-sharing by US AI companies about adversarial distillation echoes a standard practice in the cybersecurity industry, where firms regularly swap data on attacks and adversaries' tactics to strengthen network defenses. By working together, the AI firms are similarly seeking to detect the practice more effectively, identify who's responsible, and prevent unauthorized users from succeeding.

Read More: Anthropic Says DeepSeek, MiniMax Distilled AI Models for Gains

Trump administration officials have signaled their openness to fostering information sharing among AI companies to rein in adversarial distillation. The AI Action Plan unveiled by President Donald Trump last year called for the creation of an information sharing and analysis center, in part for this purpose.

For now, information sharing on distillation remains limited because AI companies are uncertain about what they can share under existing antitrust guidance while countering the competitive threat from China, according to people familiar with the matter. The firms would benefit from greater clarity from the US government, the people said.

Distillation has ranked as a top concern among American AI developers since DeepSeek rattled global markets in early 2025 with its R1 release. Highly capable open-source models continue to proliferate in China, and many in the industry are watching closely for a major upgrade to DeepSeek’s model.

Read More: Anthropic Clamps Down on AI Services for Chinese-Owned Firms

Last year, Anthropic blocked Chinese-controlled companies from using its Claude chatbot model, and in February it identified three Chinese AI labs — DeepSeek, Moonshot, and MiniMax — as illicitly extracting the model's capabilities via distillation. This year, Anthropic said the threat "extends beyond any single company or region" and poses a national security risk, since distilled models often lack safety guardrails designed to prevent bad actors from using AI tools for malicious activities.

Google has published a blog post saying it identified an increase in model-extraction attempts. The three US AI labs have not yet provided evidence showing how much of China's model innovation relies on distillation, but they note that the prevalence of attacks can be measured by the volume of large-scale data requests.
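
As a sketch of what a volume-based signal like the one described above could look like — this is not any lab's actual detection logic, and the function name and threshold are invented for illustration — a crude first pass might simply flag clients whose request counts are anomalously large:

```python
from collections import Counter

def flag_high_volume_clients(client_ids, threshold):
    # Count API requests per client and flag any client whose volume
    # exceeds the threshold: a stand-in for the "large-scale data
    # requests" signal that real systems would combine with many
    # other indicators (query patterns, content, timing, etc.).
    counts = Counter(client_ids)
    return {cid: n for cid, n in counts.items() if n > threshold}
```

For example, a log of five requests from client "a" and two from client "b" with a threshold of 4 would flag only "a". Sharing such indicators across firms, as with cybersecurity threat intelligence, would let one lab's detections inform another's.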
