It’s impossible not to have Artificial Intelligence, AI, touch your world anymore. How much so may be up in the air, but like it or not, tech’s fear of missing out has caused it to embrace AI, wanted or not. To a large extent, it’s background noise for many of us, with Google AI offering insipid, often irrelevant, replies to queries. Maybe it’s right. Maybe it’s hallucinating. Either way, it’s not a life-or-death matter, and so relying on it isn’t the end of the world.
Unless, of course, you’re a lawyer using AI to do your research or write your papers, in which case you’ve cheated your client out of competent representation in favor of easy money. Or a newspaper editor who wants quick, inexpensive, easily digestible stories, without regard to substance or accuracy. So what if the result is mediocre at best. Mediocrity is close enough for some, and better than many can produce by themselves.
But what happens when the Pentagon, which now pretends to be the Department of War to make itself seem more macho, grabs hold of AI? If you’re an AI company, this is huge money given the Pentagon’s budget. And let’s face facts, kids. Big Tech isn’t in it for the betterment of humanity, but to be Masters of the Universe. The Pentagon, under the auspices of its deep-thinking secretary, not only wants to be on the cutting edge of AI, but wants to be able to use it for whatever purposes Hegseth deems lethal.
The Pentagon is considering severing its relationship with Anthropic over the AI firm’s insistence on maintaining some limitations on how the military uses its models, a senior administration official told Axios.
Why it matters: The Pentagon is pushing four leading AI labs to let the military use their tools for “all lawful purposes,” even in the most sensitive areas of weapons development, intelligence collection, and battlefield operations. Anthropic has not agreed to those terms, and the Pentagon is getting fed up after months of difficult negotiations.
- Anthropic insists that two areas remain off limits: the mass surveillance of Americans and fully autonomous weaponry.
Rarely has anything so brutally naive been considered. Does Anthropic, or any AI provider, really believe it’s in charge of anything the Pentagon does? Does anyone believe that either Hegseth, or his overlord, gives a damn about paying more than lip service to “all lawful purposes” as it bombs boats in the Caribbean, murders whoever happens to be in them, to sate the blood lust of their disaffected fans? To be fair to Hegseth, it’s not as if he’s concealed his adoration of lethality and utter disdain for “legal niceties.”
The big picture: The senior administration official argued there is considerable gray area around what would and wouldn’t fall into those categories, and that it’s unworkable for the Pentagon to have to negotiate individual use-cases with Anthropic — or have Claude unexpectedly block certain applications.
- “Everything’s on the table,” including dialing back the partnership with Anthropic or severing it entirely, the official said. “But there’ll have to be an orderly replacement [for] them, if we think that’s the right answer.”
- An Anthropic spokesperson said the company remained “committed to using frontier AI in support of U.S. national security.”
To the extent AI contributes to the support of national security, whether by assimilating huge amounts of information in the blink of an eye or providing mediocrity that’s better than the mediocrity already running the Pentagon, it would be crazy not to utilize it in the defense of national security. And when decisions have to be made quickly, even immediately, it is untenable to have to ask the AI provider’s permission before using it in service of a nation.
But then, will there be anything that can’t be framed as a “gray area” or otherwise excused when the Pentagon wants or needs it? Won’t national security also imply that it can’t reveal its needs and uses to some rando company like Anthropic? After all, who elected Anthropic to be in control of national security? And if those who were elected, and those who were appointed to serve their masters, decide that we need bioweapons or autonomous AI-run drones, what AI Tech company gets to superimpose its vision of AI morality atop Trump’s?
If the naivete here is too obvious for words, consider that Pandora’s Box has already been opened and the AI evils came out along with the benefits. Can Anthropic, or any other AI contractor to the Department of Death, decide to pull the plug and stop Hegseth from ordering mass AI death to his enemies? Probably. But then, as long as some company is willing to do the dirty work, or the administration takes some sort of action like seizing the assets of the company and its top officers as domestic terrorists or some such nonsense, it will get whatever it wants, even if that flies in the face of whatever controls AI Tech believes it has over the abuse of its ugly baby.