If the Pentagon carries out its threat to blacklist Anthropic’s Claude AI platform, it could be three months or even longer before the U.S. military regains access to such a powerful tool on its classified networks, according to multiple sources familiar with the fight between the Defense Department and the AI maker.


If the Pentagon does designate the San Francisco-based AI startup as a supply-chain risk, it would touch off a lengthy and likely expensive series of protective measures, the people familiar said.

Operators would have to reconfigure the data inputs they feed into models, re-examine how to share data in real time with the intelligence community, which also uses Claude widely, and re-validate that replacement models were functioning as the military expected them to, they said.

In July, Anthropic received a $200 million contract to provide its frontier-model tools to the Pentagon, as did the other three U.S. makers of such products: OpenAI, Google, and xAI.

  • WesternInfidels@feddit.online · 4 days ago
    How come we’re using AI mostly to kill people? If AI is of such strategic value, why can’t we let AI loose on the Epstein files?

    Why is AI always a way for the government to crush little people, never a tool for double-checking and illuminating the government?