For the last couple of months all the top models have been from the US. I don't expect that to last - or even if it does the gap will gradually diminish to the point that "top" is largely irrelevant outside of marketing.
But at the moment I must use a US model for the best results for complex queries. So I'm glad that there's one company I'm at least somewhat ok with supporting. I'm not even that picky. All I want is a reasonable guarantee that I'm not supporting a company whose tools are used for autonomous drone warfare in American wars, and a few other basic things like that.
I guess someone might feel moved to respond to this by pointing out all the other companies outside of AI that I should be avoiding too. Please do! I'm actively trying to be more mindful of the companies I support rather than just chasing the lowest bills. I'm in the process of migrating my company away from MS 365 to Nextcloud on Hetzner, which is going slow but well.
Claude Code seems to be the best at programming right now. I think if Anthropic can maintain or increase their lead they'll have no shortage of customers. I imagine Anthropic's business is driven by business customers rather than individual paying customers at this point.
It’s the best at everything. OpenAI models are dangerously stupid enough as it is. Not much can faze me these days, but a sycophantic ChatGPT in a kill chain is nightmare fuel.
With the price of tokens I think mass surveillance with AI is not a realistic use case.
There already is mass surveillance. Presumably most electronic communication is monitored. I guess LLMs could likely do a somewhat better job, but probably not one worth the cost for the marginal benefit over existing technologies?
Similarly for "Terminators" or other AI killing machines... Isn't it cheaper to use a human? We have autonomous weapons already, like cruise missiles... Other than the movies what does a reality with LLMs pulling triggers look like? Cars are also "killing machines" and we're letting computers drive them...
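For what it's worth, the cost objection above can be roughed out in a few lines. This is a back-of-envelope sketch with entirely made-up numbers; the population, message volume, tokens per message, and token price are illustrative assumptions, not real figures:

```python
# Rough estimate of what LLM-scanning all domestic messages might cost.
# Every constant below is an assumption chosen only to make the arithmetic concrete.

population = 330_000_000           # assumed US population
messages_per_person_per_day = 100  # assumed messages/emails per person per day
tokens_per_message = 50            # assumed tokens needed to classify one message
price_per_million_tokens = 1.0     # assumed $ per 1M input tokens for a cheap model

daily_tokens = population * messages_per_person_per_day * tokens_per_message
daily_cost = daily_tokens / 1_000_000 * price_per_million_tokens

print(f"tokens/day: {daily_tokens:,}")          # 1.65 trillion tokens/day
print(f"cost/day:   ${daily_cost:,.0f}")        # $1,650,000/day
print(f"cost/year:  ${daily_cost * 365:,.0f}")  # ~$602M/year
```

Under these assumptions it comes out to roughly $1.65M per day, or around $600M per year: expensive, but arguably not prohibitive for a state actor, which is exactly the marginal-benefit question the comment raises.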
Unfortunately if these things do start making sense for whatever reason they're probably going to happen. Private companies in general have no way to prevent their technology from being used for "defense" applications. Once that genie is out of the bottle it's not going back in.
Those systems will be built regardless. That type of boycott being asked of companies is essentially asking them not to make a profit where there's profit to be made, when those doing the asking aren't making any sacrifices for the boycott themselves.
Instead of asking companies to be altruistic, those wanting such systems to be illegal should be using the civic system we have today to make it so - yes, this costs effort, resources and time. Like all hard things.
Yeah, it's a shitty statement: "we're totally fine accidentally targeting foreigners, but come on, not 'mericans," because it's well known that once you have the capability it will be aimed at everyone anyway.
It's called political correctness. There is a longstanding undercurrent in American politics of treating Constitutional rights (aka natural rights) as only applicable to Americans [0]. Framing the issue in terms of lofty universal ideals would be politically suicidal. And with the current precarious situation, giving more energy to overly-simplistic jingoist chants is not what anybody needs.
[0] this seems to be a bit of proto-fascism that helped set the stage for the overt dynamic we've now got
They got at least one more subscriber as of about twenty minutes ago since I just canceled my ChatGPT Pro subscription and moved to Anthropic.
Sam Altman immediately capitulating to the Trump administration, after bragging like four hours earlier about how he wouldn't, shows a distinct lack of integrity. It's not like ChatGPT is categorically better than Claude; I just hadn't bothered to change to Claude before, purely out of inertia with ChatGPT.
Europe is a great market.
To be fair, given Dario’s nationality, we should make a massive offer for Anthropic to relocate somewhere in Europe like San Marino or such. Levying taxes and letting them have all they need.
(Joking, but to a point)
Does Anthropic make money yet, or like a lot of AI are they selling dollars for fifty cents each? Can they keep going without a lot of investment from administration-aligned oligarchs like the Saudis, or without these circular stock-for-compute deals?
As far as I understand, this is about banning the use of Anthropic for autonomous weapons and domestic surveillance. And while the idea of building one fully controlled, nationwide AI system may sound tempting, in reality it’s still just a fantasy and wouldn’t be very useful in practice.
Claude is the best of these models/services out there in my experience so far, so no surprise there. It's a company with leadership that holds principles they stand up for; a huge disappointment to see the government going out of its way to hurt it. So bizarre, and deeply anti-American.
Too bad government devs will have a much harder time using some of the best tools out there.
Despite the complete Archie Bunker energy in that post, here is something interesting from it:
"Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow."
> with major civil and criminal consequences to follow.
I’d love to get the reaction of all the VC investors who “reluctantly” had to support Trump because Lina Khan was just too meddlesome, and who had to support a candidate who would be hands-off with big tech.
That was in the cards since Hegseth's initial announcement, when he brandished the option of putting Anthropic on the same level as China and Russia, i.e. the same level as state enemies of the US. Got to give it to Anthropic's CEO, though; I had first thought that he'd give in faster, but it looks like he's still holding on.
Also see what happened to Joseph Nacchio in the early 2000s [1]:
> He was convicted of 19 counts of insider trading in Qwest stock on April 19, 2007[2] – charges his defense team claimed were U.S. government retaliation for his refusal to give customer data to the National Security Agency in February, 2001.
Unfortunately, I think the same thing is in the cards now for Anthropic's CEO, that is if he doesn't choose to play ball.
The part of Dario's response I found funniest was pointing out the inherent contradiction in the DoW threats. Somehow, they could both be a national security threat as well as a national security necessity. Not by doing something differently - at the same time, to the same people, in the same context.
They won’t be brandished as a supply chain risk because the supply chain members are good at lobbying and they like Claude.
As an example, Amazon is a defense contractor and uses Claude heavily internally for development. They are also major investors in Anthropic. Amazon would not want Claude to be banned from use in developing AWS services that may be cross-sold to the government. Multiply this by every defense company that uses Claude (e.g. Anduril and Palantir).
They could totally try and punish Anthropic executives of course. That seems likely.
This shouldn't surprise anybody at this point. This is how Trump behaves on a daily basis. He thinks that he can direct the federal government to do literally anything he wants and operates on pure retribution.
So many people wrote think pieces about how Trump couldn't possibly be a fascist because fascism involves state takeover of corporate power. Hm...
I am honestly surprised he is being this nice, given he could use the War Powers Act and eminent domain to just seize the company and conscript their employees. I am sure someone will say that is not legal, but when has that stopped anyone.
What I don't get is why they really want it. I can't think of a worse platform in general to do anything with combat, that is, anything run from a data-center. I would take the Quake 3 Arena engine in a new ultra-insane mode, combined with a tiny self-hosted model to detect humans, uniforms, and vehicle make and model, plus a simple Friend-or-Foe system, over all the big AI platforms any day. Add an optional feed from an encrypted Meshtastic-like network to sync nodes using pre-defined Ostiary-like commands. Ultra-fast, lightweight on all resources, decentralized.
The enemy could neutralize all the big platforms in one day by simply activating a few dozen of the sleeper agents in the US, or with a couple of High-Altitude Electromagnetic Pulses (HEMP) deployed by stratospheric balloons, as China flew over every major military installation in the US. On top of this, having autonomous weapons with a network dependency on the big platforms is an extreme vulnerability. Any design that depends on a central command is a single point of failure.
> I am sure someone will say that is not legal but when has that stopped anyone.
It has routinely stopped them: the courts have already struck down countless pieces of nonsense from this administration, and they rely on exactly this bluster every time they try something else.
The issue is that even though courts work slower than a president with a smartphone eventually it will all get sorted out and they know this, which is why some people falling for this shock and awe behavior is so silly.
> The issue is that even though courts work slower than a president
So ... not stopped the president. Make a move, eventually ruled naughty, shift to another move, ruled a no-no, take an alternate path, rinse repeat. How does one fix the courts or is it working as intended?
Why the fuck does the president have authority to set specific IT policy for the government. He might not even have it, he loves making policies he doesn’t actually have the authority to make.
Because the POTUS is the chief executive. His literal job is to manage the executive branch of the government. Unless his policies go against the law, there isn't anyone who can legally dispute his policies for the executive. And if he does something illegal, the House can impeach him and the Senate remove him.
Something interesting is going on around Trump's rants. Some White House staffers are directing lower-level staff to ignore them and focus on the economy. James Blair [2] seems to be leading this. Blair was in charge of political strategy for Trump's campaign, and he won, so he's probably not going to be fired.
There have been presidents in decline who were semi-captured by their staffs. Biden, Reagan, and Roosevelt all were. It may be that Trump gets trotted out now and then to deliver his standard speech (his speeches all have roughly the same content, regardless of subject or venue), but the work of the White House involves him less.
Watch to see which threats get followed up with action, and which ones don't.
Good fucking on Anthropic; I've long held that they seem like the AI company best handling existential risk, and this elevates that opinion. If you don't want Terminator, not helping people invent Terminator is a good first step.
I'm sure Alex Karp and Palantir are already charging into the breach, promising to deliver things they don't have the capability to deliver! (Otherwise known as just another day for them)
I would think it pretty clear that they aren't shy about making their displeasure loudly and concretely known when denied. I can imagine an executive order in the research or draft state that'll make it so any entity that continues to deal with Anthropic is automatically put at a disadvantage. Something similar in spirit and effect to the sanctions on Cuba.
That depends on how aggressively the current administration follows through on the threats to punish them. The SEC could deny IPO registration, for example.
Given the waning influence of American economic hegemony, this threat looks to be less impactful as time goes on. A stick that is shrinking can still be painful; it will be an interesting decision for the company's directors to make.
You could ask this about every user of every large cloud service provider, which is why they all refuse to implement E2E, or else hold the keys themselves [4].
The government has their hands in all of them, using "national security" as the justification, with threats if they don't comply [1][2], with the alternative being to shut down [3].
Also, implicit in the government's requirements is that they want mass domestic surveillance capabilities. Imagine a large government tool where, for each citizen, there is an antagonistic OpenClaw-like set of agents surveilling, and potentially acting against, every public interaction, and occasionally hallucinating.
> "Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow," Trump added.
The Al Capone executive.
No surprise most CEOs are either licking his butt or staying silent, there's clearly a climate of fear.
Also I wonder what's happening with democracy in the US. With this level of presidential control to mess everything up, is the US moving towards dictatorship?
Oh, the US crossed that threshold the day he won the election the second time.
Most Americans seem to think new elections will solve it. But for Germans it was a world war that brought back democracy. There were no elections anymore. That is also the most likely case for the US now.
The US will survive this, despite being damaged, because Trump and his team are incompetent and he’s in a bubble and no one tells him what is actually going on. The GOP is going to get crushed in the midterms. From that standpoint, Anthropic may simply be playing the long game, and playing it well.
Really hope so. Only concern is what desperate measures the Trump admin will be willing to employ to stop the midterms from giving Democrats control of Congress (or at least the House).
> Also I wonder what's happening with democracy in the US. With this level of presidential control to mess everything up, is the US moving towards dictatorship?
I mean maybe, but not because the president can decide to cancel a contract with a vendor. That seems to be a pretty reasonable power for the president to have.
Sounds like they also have the same limitations. And my guess is that that's non-negotiable, as lifting those limitations would likely cause some of their staff to quit.
They'll probably try, but the problem is everyone actually using these models knows Grok sucks: [0].
> Demand from other agencies to use Grok has been anemic, people familiar with the matter said, except in a few cases where people wanted to use it to mimic a bad actor for defensive testing.
February 23, 2026: The Pentagon confirmed a new agreement allowing Grok use in classified systems. Defense Secretary Pete Hegseth announced it would go live soon on unclassified and classified networks, alongside other models, as part of feeding military data into AI.
Elon mentioned they will be releasing a CLI coding tool for Grok similar to Claude Code; it will be interesting to see how it performs, given they have their own datacenter (the largest in the world, and they're building another, larger one).
The function of fully autonomous AI weaponry is to offload the responsibility for making kill decisions away from the soldier.
Whether the model works accurately or inaccurately doesn’t matter. In some ways, having a trigger-happy model may serve the US military’s interests better than a discerning one.
All American institutions are hemorrhaging talent and knowledge, their foundations intentionally being rotted out. The kleptocrats running the departments and military don't care if we are competent or if their job is done - because their real job is to steal every last drop of wealth they can.
So this whole thing screams of a charade to give Elon Musk more of our hard-earned money as a favor from Trump.
For anybody who thinks it's about Trump vs. any other administration: it's not. Both AI surveillance of all people and using AI for automated warfare were simply bound to happen.
The only question is whether the safety work on the models was really done well enough to protect people and be a net positive force in the world.
I guess if they were safely trained to do more good than bad (as Dario and SamA have said), there wouldn't even be a need for the contract terms.
It would/will be extremely irresponsible to put non-deterministic and fallible models in charge of weapons. We are not close to having solved the problem of ensuring AI pursues good outcomes.
I agree completely. Anybody who uses the models extensively knows they can do something amazing for one prompt and something awful for another. But I also know that wars are unfortunately real, there are real enemies between countries, and they don't want a limited model.
Probably drones targeting and automatically killing Russian people, with a thinking model guessing whether it's a Russian or Ukrainian person, is a red line.
Elon Musk already denied Starlink for being used for remote killing, but at some point all these technologies will be nationalized, as they are too important not to be.
Now you're worried? Come on. He is using the bully pulpit to try to pressure other companies into toeing the line. At least someone had the balls to tell them to get fucked instead of kowtowing.
He is also clearly in the throes of dementia, as his father was. It’s a common symptom for dementia patients to become rude and violent as their faculties slip away.
If only we had a constitutional process for removing presidents from office as they become obviously unfit for the office…
this reads like someone who hasn't seen dementia up close. I don't see his behavior as much different than term 1. simply more malevolent.
there's no obvious word searching. he's always been simplistic and unencumbered by the need for logical consistency. he was never a wordsmith. he has his stock phrases (e.g., "many people are saying... <insert lie>"), which he uses as a crutch, but also to great effect.
as someone who HAS seen dementia from a to z, I don't see it here.
Yeah I don’t see it either. What I see is a guy openly admitting he’s a dictator because he thinks that’s what’s needed. A guy that knows he doesn’t have much to lose and wants to do whatever crazy shit comes into his head
I mean him being in the throes of dementia is certainly possible; but, more absolutely, he's a fucking asshole. The problem isn't just him, however; it's his entire administration, Congress, and the SCOTUS he stacked that further enable the insanity.
yes, but this is his administration; full-stop neo-Gestapo. He's surrounded and politically floated by white Christian nationalists. His dementia just lubricates the existing harm vectors.
The only two things Anthropic asks are that AI cannot be used for:
- domestic mass surveillance,
- autonomous kill decisions.
That's it. The reason for the first one is clear: it violates the spirit of the fourth amendment, at least.
The reason for the second is that if a kill decision is taken, let's say by an ICE agent who just got told "I'm not mad at you" or something similar that would surely enrage him, he is answerable before the law. If it's an autonomous drone that shoots at political opponents/protesters, no one is responsible.
I will add that Google and Anthropic have had their AIs play wargames. 93% of the time, their models escalated to the nuclear option.
> President Trump on Friday ordered all federal agencies to stop using artificial intelligence technology made by Anthropic, an order that could vastly complicate intelligence analysis and defense work.
> Writing on Truth Social, Mr. Trump used harsh words for Anthropic calling it a “radical Left AI company run by people who have no idea what the real World is all about.”
> Still, Mr. Trump announced a “Six Month phase out” for the Pentagon and some other agencies, a period of time that could allow for more extended negotiations between Anthropic and the Defense Department. Calling the company “Leftwing nut jobs,” he said they had made a mistake trying to strong-arm the Pentagon.
> Mr. Trump’s statement came as the Pentagon and Anthropic were, despite an escalating war of words, continuing to negotiate a compromise. While some current and former American officials had expressed hope of some sort of deal before the Pentagon’s 5:01 p.m. deadline, Mr. Trump’s comments will undoubtedly complicate matters.
If OpenAI and Google stay in sync with Anthropic on this, will Trump try to ban all of them from the federal government? What alternatives would they turn to?
They might de facto take them over via the Defense Production Act, board demands, or shut them down, and then put the screws on Google, which they can already control via its shareholders.
There was a whole situation with a military contract during Trump 1, and it didn't go well for Google. I doubt Sundar will go the same route once again.
> THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS! That decision belongs to YOUR COMMANDER-IN-CHIEF, and the tremendous leaders I appoint to run our Military.
> The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution. Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY.
> Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology. We don’t need it, we don’t want it, and will not do business with them again! There will be a Six Month phase out period for Agencies like the Department of War who are using Anthropic’s products, at various levels. Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow.
> WE will decide the fate of our Country — NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about. Thank you for your attention to this matter. MAKE AMERICA GREAT AGAIN!
It's just so great to have a president whose public statements sound like a 3-year-old having a tantrum. Every agency must immediately cease all use of Anthropic's technology, except the agencies that are using Anthropic's products?
Anthropic's statement specified mass domestic surveillance. Not all domestic surveillance. And fully autonomous weapons with today's systems. Not automatic targeting. And not never.
But we already have automatic weapons-targeting systems, and the PATRIOT Act, which enjoys bipartisan support, already provides basically limitless domestic surveillance.
Agreed, but the details released so far are pretty sparse on what exactly was being asked. If they're resisting general AI helping missiles find their targets better, then I think it's pretty foolish of Anthropic to resist, because that's happening come hell or high water. If it's about resisting mass surveillance of American citizens without a warrant, then I'd cheer Anthropic on for pulling out, but we just don't know, apparently.
The Department of Defense demanded contracts that would allow any lawful use. Anthropic refused to allow mass domestic surveillance, or fully autonomous weapons until they could be made more reliable.[1]
This should come as no surprise. The administration has no tolerance for anyone with “values” or “scruples” that challenge their power. They fired Judge Advocate Generals (military lawyers) that refused to authorize strikes on civilian boats, for one recent example. If you say ‘no’ for any reason, they fire you.
I think it has also been pretty clear from the beginning of his current term that the primary principle of this administration is to pass whatever has been paid for. E.g. the literally hundreds of executive orders that he started signing en masse the moment he got elected, none of which were written in his style or by his hand, nor even possible for a single person to write in such volume in a single day.
Because it is a big fucking risk if the tool you rely on can refuse to function at a critical moment, and the vendor publicly brags about refusing to eliminate such refusals. Ta-ta, see you in 3 years.
Why (the fuck) would a government of the people believe it has rights over free-enterprise tools, to take them and use them for nefarious purposes?
Have you paid attention during any of the Administrations? In this post-Snowden era we have no excuse for not knowing the "government of the people" has been doing this to companies for a long time. AT&T for example. This Administration just does it out in the open.
Imagine if a private company had developed the nuclear bomb, and said in its terms "This can never be used as a first strike weapon and must only be used as a retaliation against imminent existential attacks on the US homeland". I think a lot of people would have lauded the attempt by a weapon's creators to restrain the destructive potential of their creation.
Separately:
An opinion piece in the NYT suggested that Anthropic should not have restrictions and that "lawful use" provision should properly constrain the government. The fact that we have to hope Anthropic holds to their commitment is a show of no-confidence in the rule of law and the legislature of the United States to protect the people.
Seriously. All they said was "don't do mass surveillance and don't create autonomous killbots" and the president literally frames it as "Anthropic vs. the Constitution" and calls them woke radicals. How any citizen doesn't immediately have their stomach churning is beyond me.
Oh wait, out of [Fox, WSJ, NYT, WaPo, NPR, Newsmax] only Fox doesn't have an article up about it, and Newsmax left out the part about domestic mass surveillance. I am shocked!
Shouldn't this be great news that the government is banned from using Anthropic?
I don't know why suddenly the narrative here turns to spinning this as "Trump is evil" when this is actually keeping the AI company out of the government's reach.
As you can see the president is under a lot of pressure.
He is losing the ICE war, because the protesters used the tactics of past civil rights movements pretty effectively, and now he is stuck between losing the narrative of protecting Americans through further escalation, or losing power as a direct consequence of said escalation and more Americans getting murdered by government agents.
Of the 8 or so wars he pretended to have stopped, only two are still in a non-active state. The world is more dangerous than it has ever been in our lifetime.
But I’m glad the man got a Nobel Peace Prize and probably put it right next to the golden pager he got from Jerusalem's favorite war criminal.
Let’s not forget that the fourth anniversary of Putin's three-day special operation just passed. A special operation by a special boy, one that should have been ended on day one of his presidency.
He is losing the war of the public opinion, as he is approaching the midterms with the lowest approval rating and on top of that the whole Epstein thing is sticking to his shoes and gets smellier by the day and he can’t shake it off.
The Supreme Court took away his bonking stick, and he ordered a pretty impressive armada to Iran's front door, but Khamenei is old and would rather die a martyr than end up on YouTube, hanged by the neck and dragged through Tehran behind a dump truck.
And now those damn woke lunatics at anthropic won’t allow him to spy on his political enemies and every American and if necessary kill them autonomously, probably with the same red button he uses to order fresh BigMacs into the oval office.
The poor man is really not having a good time. But I’m sure he can dry his tears with a few of those crypto billions he made from selling the presidency.
This is such an interesting timeline. In forty years none of the younger people are going to believe us; we are all going to eat our rat pudding in a Trump home for the elderly under constant AI supervision, because they'll think we're insane.
(1603ET): An hour before a 5PM ET deadline, President Donald Trump directed "EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology," posting on Truth Social that "We don't need it, we don't want it, and we will not do business with them again."
Tonight, we reached an agreement with the Department of War to deploy our models in their classified network.
In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome.
AI safety and wide distribution of benefits are the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.
We also will build technical safeguards to ensure our models behave as they should, which the DoW also wanted. We will deploy FDEs to help with our models and to ensure their safety, and we will deploy on cloud networks only.
We are asking the DoW to offer these same terms to all AI companies, which in our opinion everyone should be willing to accept. We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements.
We remain committed to serve all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place.
Amazing / weird that this sounds like a lot of the stuff Amodei said Anthropic asked for
What this confirms is that it was never about Anthropic's terms. The administration has a bigger issue with Anthropic and that was just an excuse to ditch them. What exactly the issue was I'm not sure. Maybe OAI paid off the right people.
I don't think it does. It could just as well show OpenAI accepted terms which are unacceptable to Anthropic.
If they say "we define mass surveillance as flagging terrorist-looking people automatically with the Family Guy approach, but it's ok if a person types a name to look it up" then you can say "we agreed not to have mass surveillance by AI" or "that's still mass surveillance and we disagree with it".
Given Sam Altman's prior experience with Worldcoin, I'm inclined to think he doesn't give a dime about mass surveillance.
Idk who would even want them as a client. They'll change course on you every 4 years at minimum, potentially in massive ways that might force you to change your product, putting all your other users at risk; and they might just do that to you legislatively anyway!
What to watch: OpenAI CEO Sam Altman said in a memo to staff on Thursday night that the company will uphold the same red lines as Anthropic on surveillance and autonomous weapons, but still hopes to strike a deal with the Pentagon.
UPDATE: Sam Altman just agreed to be the DoD's replacement, but says he made them agree to no mass surveillance/autonomous weapons (the same terms that were rejected when Anthropic proposed them? Everyone is skeptical in the replies on Twitter)
Reminds me of Google, Apple, Microsoft, and Facebook releasing similarly-worded statements denying that they would share information with NSA PRISM, despite the Snowden docs
Google is the Xerox PARC of AI, but unlike Xerox they decided to enter the ring with Gemini, which is now quite good.
(For those who don't know the history, Xerox PARC, and SRI before them, invented the modern PC GUI and a lot of other modern PC staples in the late 70s and early 80s, which Apple copied and then everyone else copied from Apple. Xerox paid a lot for the R&D but never used it, since it would have cannibalized their copier business; a classic case of the innovator's dilemma.)
I wouldn't want an AI model to hold my military hostage. The DoD should have strong requirements when making contracts with AI companies, including having the models run on its own infrastructure, fully locked down, with a master prompt it can see and update.
Nobody would have an issue with that. The problem here is DoD insisting on that kind of control, then threatening and retaliating against Anthropic for declining to participate rather than simply walking away and engaging with a competitor.
Well, that's good news! Hopefully the full government contract can be cancelled and Anthropic can actually live by their own words, and not just apply them to the US!
But at the moment I must use a US model for the best results on complex queries. So I'm glad there's one company I'm at least somewhat ok with supporting. I'm not even that picky. All I want is a reasonable guarantee that I'm not supporting a company whose tools are used for autonomous drone warfare in American wars, and a few other basic things like that.
I guess someone might feel moved to respond to this by pointing out all the other companies outside of AI that I should be avoiding too. Please do! I'm actively trying to be more mindful of the companies I support rather than just chasing the lowest bills. I'm in the process of migrating my company away from MS 365 to Nextcloud on Hetzner, which is going slow but well.
Don't allow systems to be built with your AI that automate mass surveillance or automate kill decisions.
There already is mass surveillance. Presumably most electronic communication is monitored. I guess LLMs could likely do a somewhat better job, but probably not one worth the cost for the marginal benefit over existing technologies?
Similarly for "Terminators" or other AI killing machines... Isn't it cheaper to use a human? We have autonomous weapons already, like cruise missiles... Other than the movies what does a reality with LLMs pulling triggers look like? Cars are also "killing machines" and we're letting computers drive them...
Unfortunately if these things do start making sense for whatever reason they're probably going to happen. Private companies in general have no way to prevent their technology from being used for "defense" applications. Once that genie is out of the bottle it's not going back in.
Those systems will be built regardless. That type of boycott is essentially asking companies not to make profit where there's profit to be made, when those doing the asking aren't making any sacrifices for the boycott themselves.
Instead of asking companies to be altruistic, those wanting such systems to be illegal should be using the civic system we have today to make it so - yes, this costs effort, resources and time. Like all hard things.
I wonder if a US company has ever wholesale emigrated before?
[0] this seems to be a bit of proto-fascism that helped set the stage for the overt dynamic we've now got
Sam Altman immediately capitulating to the Trump administration, after bragging like four hours ago about how he wouldn't, shows a distinct lack of integrity. It's not like ChatGPT is categorically better than Claude; I just hadn't bothered to change to Claude before, purely out of inertia with ChatGPT.
https://x.com/WhiteHouse/status/2027497719678255148
https://xcancel.com/WhiteHouse/status/2027497719678255148
Not a hard sell, but just "These are the two demands the government gave us, and here's why we did not agree to them"
It makes the company so attractive to most of the public - not just those who dislike Trump, but also those who don't trust government.
Six-month phase out period, that's something. I guess the DoW's been using Claude quite a bit.
Too bad government devs will have a much harder time using some of the best tools out there.
"Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow."
with major civil and criminal consequences to follow.
WTF!!!
Also see what happened to Joseph Nacchio in the early 2000s [1]:
> He was convicted of 19 counts of insider trading in Qwest stock on April 19, 2007[2] – charges his defense team claimed were U.S. government retaliation for his refusal to give customer data to the National Security Agency in February, 2001.
Unfortunately, I think the same thing is in the cards now for Anthropic's CEO, that is if he doesn't choose to play ball.
[1] https://en.wikipedia.org/wiki/Joseph_Nacchio
As an example, Amazon is a defense contractor and uses Claude heavily internally for development. They are also major investors in Anthropic. Amazon would not want Claude to be banned from use in developing AWS services that may be cross-sold to the government. Multiply this by every defense company that uses Claude (e.g. Anduril and Palantir).
They could totally try and punish Anthropic executives of course. That seems likely.
So many people wrote think pieces about how Trump couldn't possibly be a fascist because fascism involves state takeover of corporate power. Hm...
What I don't get is why they really want it. I can't think of a worse platform for anything combat-related than something that lives in a data center. I would take the Quake 3 Arena engine in a new ultra-insane mode, combined with a tiny self-hosted model to detect humans, uniforms, and vehicle make and model, plus a simple friend-or-foe check, over all the big AI platforms any day. Add an optional feed from an encrypted Meshtastic-like network to sync nodes using pre-defined Ostiary-like commands. Ultra-fast, lightweight on all resources, decentralized.
The enemy could neutralize all the big platforms in one day by simply activating a few dozen of the sleeper agents in the US, or with a couple of High-Altitude Electromagnetic Pulses (HEMP) deployed by a few stratospheric balloons, like the ones China flew over every major military installation in the US. On top of this, giving autonomous weapons a network dependency on the big platforms is an extreme vulnerability. Any design that depends on a central command is a single point of failure.
It has routinely stopped them: the courts have already struck down countless pieces of nonsense from this administration, and they rely on exactly this bluster every time they try something else.
The issue is that even though courts work slower than a president with a smartphone, eventually it will all get sorted out, and they know this, which is why some people falling for this shock-and-awe behavior is so silly.
So ... not stopped the president. Make a move, eventually ruled naughty, shift to another move, ruled a no-no, take an alternate path, rinse repeat. How does one fix the courts or is it working as intended?
At least, so goes the theory.
There have been presidents in decline who were semi-captured by their staffs. Biden, Reagan, and Roosevelt all were. It may be that Trump gets trotted out now and then to deliver his standard speech (his speeches all have roughly the same content, regardless of subject or venue), but the work of the White House involves him less.
Watch to see which threats get followed up with action, and which ones don't.
[1] https://www.theatlantic.com/politics/2026/02/trump-gop-repub...
[2] https://en.wikipedia.org/wiki/James_Blair_(political_advisor...
This will be a death by a thousand cuts.
The government has their hands in all of them, using "national security" as the justification, with threats if they don't comply [1][2], with the alternative being to shut down [3].
Does it prevent harm? Probably.
[1] https://sg.news.yahoo.com/yahoo-ceo-fears-defying-nsa-could-...
[2] https://lieu.house.gov/media-center/in-the-news/report-yahoo...
[3] https://www.crn.com/news/security/240159745/two-email-provid...
[4] https://www.forbes.com/sites/thomasbrewster/2026/01/22/micro...
Also, implicit in the government's requirements is a demand for mass domestic surveillance capabilities. Imagine a large government tool where, for each citizen, there is an antagonistic OpenClaw-like set of agents surveilling, and potentially acting against, every public interaction - and occasionally hallucinating.
The Al Capone executive.
No surprise most CEOs are either licking his butt or staying silent; there's clearly a climate of fear.
Also, I wonder what's happening with democracy in the US. With this level of presidential control to mess everything up, is the US moving towards dictatorship?
https://ratical.org/ratville/CAH/fasci14chars.html
I mean maybe, but not because the president can decide to cancel a contract with a vendor. That seems to be a pretty reasonable power for the president to have.
Ilya seems to think this is the case as well: https://x.com/ilyasut/status/2027486969174102261 though Zvi's point that any stands from OpenAI are ~2 orders of magnitude smaller seems right: https://x.com/TheZvi/status/2027493723269992661
> Demand from other agencies to use Grok has been anemic, people familiar with the matter said, except in a few cases where people wanted to use it to mimic a bad actor for defensive testing.
[0]: https://www.wsj.com/politics/national-security/elon-musk-xai...?
Elon mentioned they will be releasing a CLI coding tool for Grok, similar to Claude Code. It will be interesting to see how it performs, given they have their own datacenter (the largest in the world, with an even larger one being built).
Whether the model works accurately or inaccurately doesn’t matter. In some ways, having a trigger-happy model may serve the US military’s interests better than a discerning one.
So this whole thing screams of a charade to give Elon Musk more of our hard-earned money as a favor from Trump.
The only question is whether the safety work on the models was really done well enough to protect people and be a net positive force in the world.
I guess if they were safely trained to do more good than bad (as Dario and SamA said), there wouldn't even be a need for the contract terms.
Elon Musk already refused to let Starlink be used for remote killing, but at some point all these technologies will be nationalized, as they are too important not to be.
If only we had a constitutional process for removing presidents from office as they become obviously unfit for the office…
There's no obvious word searching. He's always been simplistic and unencumbered by the need for logical consistency. He was never a wordsmith. He has his stock phrases (e.g., "many people are saying... <insert lie>"), which he uses as a crutch, but also to great effect.
as someone who HAS seen dementia from a to z, I don't see it here.
He is more incoherent and demented.
He is gonna get slaughtered in the midterms, be hamstrung for his last two years, and then go away.
- domestic mass surveillance,
- autonomous kill decision.
That's it. The reason for the first one is clear: it violates the spirit of the fourth amendment, at least.
The reason for the second is that if a kill decision is taken, say by an ICE agent who just got told "I'm not mad at you" or something similar that would surely enrage him, he is accountable under the law. If it's an autonomous drone that shoots at political opponents/protesters, no one is responsible.
I will add that Google and Anthropic made their AIs play wargames. 93% of the time, their models escalated to the nuclear option.
Text (it is short):
> President Trump on Friday ordered all federal agencies to stop using artificial intelligence technology made by Anthropic, an order that could vastly complicate intelligence analysis and defense work.
> Writing on Truth Social, Mr. Trump used harsh words for Anthropic calling it a “radical Left AI company run by people who have no idea what the real World is all about.”
> Still, Mr. Trump announced a “Six Month phase out” for the Pentagon and some other agencies, a period of time that could allow for more extended negotiations between Anthropic and the Defense Department. Calling the company “Leftwing nut jobs,” he said they had made a mistake trying to strong-arm the Pentagon.
> Mr. Trump’s statement came as the Pentagon and Anthropic were, despite an escalating war of words, continuing to negotiate a compromise. While some current and former American officials had expressed hope of some sort of deal before the Pentagon’s 5:01 p.m. deadline, Mr. Trump’s comments will undoubtedly complicate matters.
The Truth Social post, which is, well, you can decide for yourself: https://truthsocial.com/@realDonaldTrump/posts/1161445529692...
There was a whole situation with a military contract during Trump 1, and it didn't go well for Google; I doubt Sundar will go the same route once again.
https://truthsocial.com/@realDonaldTrump/posts/1161445529692...
> THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS! That decision belongs to YOUR COMMANDER-IN-CHIEF, and the tremendous leaders I appoint to run our Military.
> The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution. Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY.
> Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology. We don’t need it, we don’t want it, and will not do business with them again! There will be a Six Month phase out period for Agencies like the Department of War who are using Anthropic’s products, at various levels. Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow.
> WE will decide the fate of our Country — NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about. Thank you for your attention to this matter. MAKE AMERICA GREAT AGAIN!
There's been a concerted effort to hide the fact that the US President is posting insane shit every day for years.
[1] https://www.anthropic.com/news/statement-department-of-war
Rarely see a company tell the government no though.
I think we'll see many many dupes. Hopefully the comment threads will be merged eventually.
Separately:
An opinion piece in the NYT suggested that Anthropic should not have restrictions and that a "lawful use" provision should properly constrain the government. The fact that we have to hope Anthropic holds to its commitment is a vote of no confidence in the rule of law and the legislature of the United States to protect the people.
Seriously. All they said was "don't do mass surveillance and don't create autonomous killbots" and the president literally frames it as "Anthropic vs. the Constitution" and calls them woke radicals. How any citizen doesn't immediately have their stomach churning is beyond me.
Oh wait, out of [Fox, WSJ, NYT, WaPo, NPR, Newsmax] only Fox doesn't have an article up about it, and Newsmax left out the part about domestic mass surveillance. I am shocked!
I don't know why the narrative here suddenly turns to spinning this as "Trump is evil" when this is actually keeping the AI company out of the government's reach.
The level of cognitive dissonance here is unreal.
He is losing the ICE war because the protesters have used the tactics of past civil rights movements pretty effectively, and now he is stuck between losing the narrative of protecting Americans through further escalation, or losing power as a direct consequence of said escalation and more Americans getting murdered by government agents.
Of the 8 or so wars he pretended to have stopped, only two are still in a non-active state. The world is more dangerous than it has ever been in our lifetime.
But I'm glad the man got a Nobel Peace Prize and probably put it right next to the golden pager he got from Jerusalem's favorite war criminal.
Let's not forget the fourth anniversary of Putin's three-day special operation just passed. A special operation by a special boy, one that should have been ended on day one of his presidency.
He is losing the war of public opinion, as he is approaching the midterms with his lowest approval rating, and on top of that the whole Epstein thing is sticking to his shoes, getting smellier by the day, and he can't shake it off.
The Supreme Court took away his bonking stick, and he ordered a pretty impressive armada to the front door of Iran, but Khamenei is old and would rather die a martyr than end up on YouTube being hanged by the neck and dragged through Tehran behind a dump truck.
And now those damn woke lunatics at Anthropic won't allow him to spy on his political enemies and every American, and if necessary kill them autonomously, probably with the same red button he uses to order fresh Big Macs into the Oval Office.
The poor man is really not having a good time. But I’m sure he can dry his tears with a few of those crypto billions he made from selling the presidency.
This is such an interesting timeline. In forty years, none of the younger people are going to believe us; we are all going to eat our rat pudding in a Trump home for the elderly, under constant AI supervision, because they'll think we're insane.
https://x.com/sama/status/2027578652477821175?s=20
Tonight, we reached an agreement with the Department of War to deploy our models in their classified network.
In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome.
AI safety and wide distribution of benefits are the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.
We also will build technical safeguards to ensure our models behave as they should, which the DoW also wanted. We will deploy FDEs to help with our models and to ensure their safety, we will deploy on cloud networks only.
We are asking the DoW to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept. We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements.
We remain committed to serve all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place.
Amazing / weird that this sounds like a lot of the stuff Amodei said Anthropic asked for
https://www.axios.com/2026/02/27/anthropic-pentagon-supply-c...
https://xcancel.com/sama/status/2027578508042723599