via Decrypt · By Decrypt Editorial

US Government Will Vet Pre-Release AI Models From Google, xAI and Microsoft

Decrypt Editorial · 04:21 PM UTC · 4 min read · Approved by Michael Roberts

In brief

  • Google, Microsoft, and xAI agreed to give the U.S. government early access to frontier models for national security testing.
  • President Donald Trump's administration is reportedly weighing an executive order on AI oversight.
  • Anthropic's new Claude Mythos model, which excels at finding cybersecurity vulnerabilities, sparked government concern.

Some of the biggest players in the AI world have agreed to give the U.S. government early access to their frontier models to test them ahead of public release, with the announcement coming one day after a report that President Donald Trump’s administration is weighing an executive order on the matter.

On Tuesday, the Commerce Department's Center for AI Standards and Innovation announced that Google, Microsoft, and xAI have already agreed to provide the government with pre-release access to assess the systems' capabilities.

"Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications," said Chris Fall, the center's director, in a statement. "These expanded industry collaborations help us scale our work in the public interest at a critical moment."

On Monday, the New York Times reported that the Trump administration is considering an executive order to create a working group that would review advanced AI models before public release. White House officials discussed the oversight plans with executives from Anthropic, Google, and OpenAI in meetings last week, per the report.

The executive order discussions were prompted partly by Anthropic's announcement last month that its breakthrough Claude Mythos model was adept at finding weak points in cybersecurity defenses, raising concerns among officials about national security implications.

Rather than launch Claude Mythos to the public and potentially unleash a frenzied surge to both break and fix software like web browsers and operating systems, Anthropic has instead provided access to a limited number of startups and organizations. Mozilla said it was able to find and patch 271 vulnerabilities in its Firefox web browser using Mythos.

Users on Myriad—a prediction market platform operated by Decrypt's parent company, Dastan—don't believe that Anthropic will release Claude Mythos broadly by June 30, penciling in just a 13% chance as of this writing.

Separately, the administration has clashed with Anthropic over model access. The Trump administration and Anthropic entered a contract dispute in February after Anthropic declined a request for unrestricted access to its AI models.

Defense Secretary Pete Hegseth subsequently said he would designate Anthropic as a supply chain risk to national security. A federal appeals court refused to temporarily pause the designation while the lawsuit proceeds. Last week, however, Axios reported that the White House is weighing whether to reverse course and resume its partnership with Anthropic.

The potential executive order would mark a sharp reversal from Trump's earlier stance on AI regulation, as he has consistently advocated minimal oversight of the industry.

"We're going to make this industry absolutely the top, because right now it's a beautiful baby that's born,” he said last July. “We have to grow that baby and let that baby thrive. We can't stop it. We can't stop it with politics. We can't stop it with foolish rules and even stupid rules.”

Since returning to office in 2025, Trump has rolled back Biden-era regulatory requirements that asked AI developers to perform safety evaluations and report on models with potential military applications. On his first day in office, he revoked a 2023 executive order signed by former President Joe Biden that required developers of AI systems posing risks to share safety test results with the government before public release.

More recently, the Trump administration has proposed a national framework for AI regulation, which would set national standards for the technology without creating a new regulator. The federal push comes amid state-led efforts to regulate AI, some of which have received pushback from Trump’s agencies—notably a Colorado law over “algorithmic discrimination.”
