I think the big secret is that AI is just software. In the same way that a financial firm doesn't all of a sudden make a bunch of money because Microsoft shipped an update to Excel, AI is inert without intention. If there are any major successes in AI output, it's because a person got it to do that. Claude Code is great, but it will also wipe out a database even though it's instructed not to (I can confirm from experience). The idea that there's some secret innovation that will come out any minute doesn't change the fact that it's software that requires human interaction to work.
Yes, and it has been said since day one of LLMs that all we need to do is keep things that way - no action without human intervention. Just like it was said that you should never grant AI direct access to change your production systems. But the stories of people who have done exactly that and had their systems damaged and deleted show that people aren't even trying to keep such basic safety nets in place.
AI is getting strong enough that if people give some general direction as well as access to production systems of any kind, things can go badly. It is not true that all implementations of agentic AI require human intervention for all action.
I think the market isn't for anyone but other businesses. We're all ants trying to understand how AI is going to eradicate the lower levels of society.
> doesn't change the fact that it's software that requires human interaction to work.
Have you ever seen Claude Code launch a subagent? You've used it, right? You've seen it launch a subagent to do work? You understand that that is, in fact, Claude Code running itself, right?
I don't think subagents are representative of anything particularly interesting on the "agents can run themselves" front.
They're tool calls. Claude Code provides a tool that lets the model say effectively:
run_in_subagent("Figure out where JWTs are created and report back")
The current frontier models are all capable of "prompting themselves" in this way, but it's really just a parlor trick to help avoid burning more tokens in the top context window.
It's a really useful parlor trick, but I don't think it tells us anything profound.
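For the curious, here's a minimal sketch of what that looks like mechanically. This is my own illustration, not Anthropic's actual implementation, and call_model is a hypothetical stand-in for whatever LLM client a harness uses; the point is just that the "subagent" is a tool whose body is another model call started with a fresh, empty context:

    # A hypothetical sketch, not Anthropic's code: a "subagent" is a tool the
    # harness exposes, and its body is simply another model call with a fresh
    # context window. call_model stands in for whatever LLM client you use.
    def call_model(messages: list[dict]) -> str:
        # Placeholder: a real harness would call the model API here.
        return f"(model output for: {messages[-1]['content']!r})"

    def run_in_subagent(task: str) -> str:
        # The subagent sees only its task, not the parent's whole history,
        # which is why this saves tokens in the top-level context window.
        return call_model([{"role": "user", "content": task}])

    # To the parent agent this is an ordinary tool call:
    TOOLS = {"run_in_subagent": run_in_subagent}
    print(TOOLS["run_in_subagent"]("Figure out where JWTs are created and report back"))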
Maybe. But probably not. It doesn't matter if it's AGI though. If those other apps and tools do simple things that are predictable, then we can be pretty sure what will happen. If those tools can modify their own configuration and create new cron jobs, it becomes much harder to say anything about what will happen.
Does your Linux server decide what processes it should launch at what time, with a theory of what will happen next, in order to complete a goal you specified in natural language? If so, then yes, I reckon you sure have!
Claude does not have a "theory" of anything, and I'd argue applying that mental model to LLM+Tools is a major reason why Claude can delete a production database.
Well, humans also routinely accidentally delete production databases. I think at this point arguing that LLMs are just clueless automatons that have no idea what they are doing is a losing battle.
All AI requires steering as the results begin to decohere and self-enshittify over time.
AI in the hands of an expert operator is an exoskeleton. AI left alone is a stooge.
Nobody has built an all-AI operator capable of self-direction and choices superior to a human expert. When that happens, you'd better have your debts paid and bunker stocked.
We haven't seen any signs of this yet. I'm totally open to the idea of that happening in the short term (within 5 years), but I'm pessimistic it'll happen so quickly. It seems as though there are major missing pieces of the puzzle.
For now, AI is an exoskeleton. If you don't know how to pilot it, or if you turn the autopilot on and leave it alone, you're creating a mess.
This is still an AI maximalist perspective. One expert with AI tools can outperform multiple experts without AI assistance. It's just got a much longer time horizon on us being wholly replaced.
This is my own take, directly related to this, which I posted a little while back. The one thing I think the article missed is the geopolitical angle they're also working:
* We need to completely deregulate these US companies so China doesn't win and take us over
* We need to heavily regulate anybody who is not following the rules that make us the de-facto winner
* This is so powerful it will take all the jobs (and therefore if you lead a company that isn't using AI, you will soon be obsolete)
* If you don't use AI, you will not be able to function in a future job
* We need to line up an excuse to call our friends in government and turn off the open source spigot when the time is right
They have chosen fear as a motivator, and it is clearly working very well. It's easier to use fear now, while it's new, and then flip the narrative once people are more familiar with it, than to go the other direction. Companies are not just telling a story to hype their product, but also a story about why they alone should be entrusted to build it.
"The race to build smarter-than-human AI is a race with no winners."
And specifically about the point on China, several people in power in China have also expressed the need to regulate AI and put international structures of governance in place to make sure it will benefit mankind:
https://nowinners.ai/#s5-china
"you will all lose your jobs and it will wipe out half of humanity."
If you lead with this, people will stop questioning why their sprint velocity hasn't increased 10-fold. Managers start asking leads: instead of hiring more devs, can we add an Agent.md to our repos?
The Apocalypse sells. They are afraid that you'll find out that AI is just another useful tool. That's the real threat, not to humanity, but to their hype.
My read is not so much "if we say this is dangerously powerful, it will make people want to buy our product", but rather that there is a significant segment of AI researchers for whom x-risk, AI alignment, etc. is a deal-breaker issue. And so the Sam Altmans of the world have to treat these concerns as serious to attract and retain talent. See for example OpenAI's pledge to dedicate 20% of their compute to safety research. I don't get the sense that Sam ever intended to follow through on that, but it was very important to a segment of his employees. And it seems like trying to play both sides of this at least contributed to Ilya's departure.
On the other hand, it seems like Dario is himself a bit more of a true believer.
Yeah, I just don't buy that it would somehow help AI companies for everyone to be existentially afraid of their technology. It seems much more reasonable to think that they really believe the things they're saying than that it's some kind of 4D chess.
Additionally, Dario has just been really accurate with his predictions so far. For instance, in early 2025 he predicted that nearly 100% of code would be written with AI in 2026.
It helps with sales because they position it as “we can give you the power to end the world.” There are plenty of people who want to wield that sort of power. It doesn’t have to be 4D chess. Maybe they are being genuine. But it is helping sales.
Isn't it more: "We can give you the power to eliminate the people in your organization you don't like", which expands into basically dismantling all government & business for the benefit of the guy with the largest wallet?
It's hard to see it as anything but a button anyone with enough money can press to suddenly replace the people that annoy them (first digitally, then likely in the flesh).
I think if you just look at what people like Sam Altman are doing, it's clear that they don't believe everything that they're saying regarding AI safety.
> nearly 100% of code would be written with AI in 2026
I feel like this is kind of a meaningless metric. Or at least, it's very difficult to measure. There's a spectrum of "let AI write the code" from "don't ever even look at the code produced" to "carefully review all the output and have AI iterate on it".
Also, it seems possible as time goes on people will _stop_ using AI to write code as much, or at least shift more to the right side of that spectrum, as we start to discover all kinds of problems caused by AI-authored code with little to no human oversight.
Does anyone have good estimates of what percent of real production code is currently being written by LLMs? (& presumably this is rather different for your typical SaaS backend vs. frontend vs. device drivers vs. kernel schedulers...)
Really? In my bubble of internet news, it seems the sheer number of companies that have formed and shipped LLM code to production has already surpassed the number of existing companies. I've personally shipped dozens of (mediocre) human-months' or years' worth of code to "production", almost certainly more than I've ever done for companies I've worked at (to be fair, I've been a lot more on the SRE side for a few years now).
Depends on your reference class. There's a lot of companies and teams where it's literally 100%, and I would be surprised if there were any top company where it's below 75%. I wouldn't be terribly surprised if the industry-wide percentage were a lot lower, although I also have no idea how you'd measure that.
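One crude proxy you could compute today is the share of commits carrying an AI co-author trailer in their message. This is only a sketch under a big assumption, namely that agentic tools were left with their default commit trailers rather than having them stripped, and it measures commits an agent touched, not lines it actually wrote or how carefully a human reviewed them:

    # Hypothetical estimate of AI-assisted commit share in one repo.
    # Assumes the agent's default commit trailer wasn't stripped; adjust
    # the markers for whatever tools your team actually uses.
    import subprocess

    def commit_messages(repo: str) -> list[str]:
        out = subprocess.run(
            ["git", "-C", repo, "log", "--pretty=%B%x00"],
            capture_output=True, text=True, check=True,
        ).stdout
        return [m for m in out.split("\x00") if m.strip()]

    def ai_assisted_share(repo: str, markers=("Co-Authored-By: Claude",)) -> float:
        msgs = commit_messages(repo)
        flagged = sum(any(marker in msg for marker in markers) for msg in msgs)
        return flagged / len(msgs) if msgs else 0.0

    print(f"{ai_assisted_share('.'):.1%} of commits carry an AI co-author trailer")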
It pushes the idea that these programs are super amazing and powerful to people who are non-technical. It also allows them to control the narrative of how exactly AI is dangerous to society. Rather than worry about the energy consumption of all these new datacenters, they can redirect attention to some far-off concern about SHODAN taking over Citadel Station and turning the inhabitants into cyber-mutants or whatever.
> It seems much more reasonable to think that they really believe the things they're saying
It seems more reasonable to me to think that they know it's bullshit and it's just marketing. Not necessarily marketing to end users as much as investors. It's very hard to take "AGI in 3 years" seriously.
To my mind, "if we don't say this is dangerously powerful, we will not be able to hire the talent we need to build this product" is the supply-side version of "if we do say this is dangerously powerful, it will make people want to buy our product".
Maybe Altman specifically is only paying lip service to this stuff, but when a company like Anthropic is like "BRO MYTHOS IS TOO DANGEROUS BRO WE CANT EVEN RELEASE IT BRO JUST TRUST US BRO", my bullshit detector is beeping too loud to ignore. It's very obviously a publicity stunt, because if it were actually that dangerous you wouldn't be making such a press release, you'd be keeping your mouth shut and working to make it safe.
> According to critics, it benefits AI companies to keep you fixated on apocalypse because it distracts from the very real damage they're already doing to the world.
Am I not allowed to be concerned about _both_?
I do not believe that Sam Altman and other AI company execs believe that the singularity is imminent. If they did, they wouldn't behave so recklessly. Even if they don't care about the rest of humanity, there's too much risk to themselves if they actually believe what they're saying.
But I think it's correct to be worried about a potential future AI apocalypse. Personally I doubt that LLMs will scale to full sentience, but I believe we'll get there eventually. And whether it's in 2 years or 200 years I'm worried about it. Plenty of smart people who aren't working for AI companies (and thus have no motive to use it as hype or distraction) hold this belief and it really doesn't seem that crazy.
But yeah, obviously let's focus primarily on the real harms AI is causing in our society right now.
> Why do AI companies want us to be afraid of them? ... According to critics, it benefits AI companies to keep you fixated on apocalypse because it distracts from the very real damage they're already doing to the world.
People seem unable to make up their minds about whether AI is very dangerous or not. I think what the AI companies and this author agree on is that this technology is potentially extremely dangerous. AI impacts labor markets, the environment, warfare, mental health, etc... It's harder now to find things which it will not impact.
So if we agree that AI is potentially dangerous, it makes the title question moot: Both AI companies and this author want people to be aware of the dangers that AI poses to society. The real question is what do we do about it?
The nuance here is that AI can be incredibly positive as well. It's like the invention of fire: you can use it for good or bad, and there will be many unintended consequences along the way.
We could legislate and ban AI tech. People have proposed this seriously, yet it feels completely unrealistic. If the US bans AI research, then this research will move elsewhere. I think it is like trying to ban fire because it's dangerous: some groups will learn to work with fire and they will get an extreme advantage over those groups that don't (or they will destroy themselves in the process).
So maybe instead of demonizing the AI companies, we have a nuanced debate about this tech and propose solutions that are best for our society?
I have never heard of "Heidy Khlaaf, chief AI scientist at the AI Now Institute", but the sentiment in this article is diametrically opposite that of the vulnerability research scene.
There is contention among vulnerability researchers about the impact of Mythos! But it's not "are frontier models going to shake up vulnerability research and let loose a deluge of critical vulnerabilities" --- software security people overwhelmingly believe that to be true. Rather, it's whether Mythos is truly a step change from 4.7 and 5.5.
For vulnerability researchers, the big "news" wasn't Mythos, but rather Carlini's talk from Unprompted, where he got on stage and showed his dumb-seeming "find me zero days" prompt, which actually worked.
The big question for vulnerability people now isn't "AI or no AI"; it's "running directly off the model, or building fun and interesting harnesses".
Quote from the article: "'AI will probably most likely lead to the end of the world, but in the meantime, there'll be great companies,' Altman said in 2015."
Altman wasn't even at OpenAI at that point, so why would that be marketing?
The media is not catching on; they've been looping through 'AI is going to kill us all!' when they want to sell fear and 'Look at all the energy and water AI companies are pointlessly wasting!' when they want to sell anger.
The writers and the editors know exactly what they're doing: spreading FUD and creating controversy out of thin air. Some of it is done for profit, some for agenda, and all of it with malicious intent.
Another potential reason, not mentioned in the article, is that open source models obviously pose the biggest threat to the labs' ability to monetize their tech. Anthropic especially seems to be very anti open-source. If frontier models start to plateau and don't have capabilities that truly differentiate them, nobody will pay what the labs would want to charge. Positioning the tech as a danger is a way for them to make the government regulate open source models.
This is a great point. I'm kind of surprised there isn't a greater proliferation of open source models to do things the public ones won't. I know such things exist, but imagine how many web browsers there would be if all the mainstream ones had the same content restrictions as LLMs.
I guess since training them does take cash, that raises the bar for what people will do as a prank or on principle.
The article mentions a book, "The AI Con", which argues that "artificial intelligence" is largely a misleading label that obscures ordinary automation while concentrating power in a small number of technology firms.
So fear-mongering seems to be just a tool to get attention and more customers.
You can gauge the quality of the article by the fact that it quotes Emily Bender, who will insist on "stochastic parrots" even as AI does billions of dollars of economically useful work.
> It's a strange way for any company to talk about its own work. You don't hear McDonald's announcing that it's created a burger so terrifyingly delicious that it would be unethical to grill it for the public.
> Here's one theory.
But the author never gets back to this! It's the main observation the theory has to account for: why don't we see other companies speak this way, if it's such an effective strategy for deflecting non-apocalyptic concerns?
I think they get away with it because it's a dual-use technology. They have this tool that could end the world and people want in on it because they want the power.
The answer to the burger analogy is that it's the wrong analogy. McDonald's is selling you the burger. AI companies are essentially selling you the grill.
The hype works so well because it plays on people's ego and desire for power. They think, "I have the power to end the world with this technology, but I won't because I'm a good person."
If McDonald's food were featured in sci-fi movies about being able to end humanity through war, that's when this would apply, and they'd cultivate fear of that nonsense to distract from their food being shitty, overpriced, and unhealthy.
Tbf, most companies don't have a potentially world-ending product. The only really similar field is defense contractors, who typically can't brag about unreleased ideas as they're classified.
I agree, but the experts the author cites do not. Professor Vallor believes that AI is a mirror and any existential fears of it are just reflected fear of ourselves; Professor Bender believes that AI is a con and all the people who say it's powerful enough to be world-ending are lying. Anyone who concedes the premise that AI has a genuine potential to be world-ending is, I think, on the AI labs' side of this debate.
Given that his reason for saying GPT-2 was too dangerous to release was that the world needed more time to prepare for the effects of this technology, and given that the following models were basically scaled-up versions of it and killed social media, news reporting and other kinds of communication, I'd say he was right about the dangers of it.
That's true, but in reality I think people are far more afraid of AI in terms of how it is being used in warfare and policing: automatic target detection and deployment of drones, or even how it might simply make their role at work redundant, etc.
To me, the more interesting divergence in discussion is on its capabilities.
AI industry insiders (including "safety" groups like ControlAI) talk about the dangers only in terms of its power: "Scheming", job loss, breaking containment, the New Cold War with China.
Critics outside the industry talk in terms of its lack of power: Inaccuracy, erroneous translation of user intent, failure to deliver on its promises and investment, environmental cost from the former, and ultimately the danger of people in power (e.g. law enforcement, military officials) treating its output as valid and unbiased, or simply laundering their wishes through it.
> That's true, but in reality I think people are far more afraid of AI in terms of how it is being used in warfare and policing: automatic target detection and deployment of drones, or even how it might simply make their role at work redundant, etc.
I think the last one should be first on the list: regular people are afraid AI will negatively affect their economic security (i.e. knowledge and service workers will get the rust-belt factory worker treatment).
And the potential of giving knowledge and service workers the rust-belt factory worker treatment is exactly what makes Wall Street excited about AI and has the AI company leaders salivating about the profit they can make.
Warfare, policing, and bio-engineered viruses are theoretical and far down the list.
Not to mention that "automatic target detection" was primarily enabled by the ~2016-2020 AI hype/boom around image recognition, not the 2022-current hype/boom around LLMs.
100% agreed. That's part of the issue, imo: these companies pretend their new models are "too dangerous" to seem like they care about the world, yet they have no qualms about deploying existing models in warfare or bragging about impending mass unemployment.
AI has been used in defense for a while now; a modern Tomahawk cruise missile and its associated targeting systems are a good example. I think most people fear AI taking their job and only source of income.
We sadly don't need AI to justify outrageous warfare. You just need to remember when the US invaded Iraq over WMDs, despite a full investigation that never found any. We invaded anyway, to the detriment of everyone except defense contractors.
These were all already very valid concerns long before this era of "AI" or computational power.
The broader public is just now barely beginning to understand because all they have to do is ask a chatbot. AI does not enable new capabilities, but it does aggregate an idea into a rough sketch and do it quickly on-demand.
None of this really means it will play out that way. The devil is in the details. What it does mean is much more nuanced attention on the politics and money because that's where the power always was.
Yes, I love how everyone uses this argument, when what they were saying was along the lines of "GPT-2 would make it too easy to generate spam, deepfakes, content to manipulate opinion..." (not the actual quote, but something like that). Turns out it was completely correct if you look at the state of the internet right now.
Obviously, they still overhype and oversell this end-of-humanity stuff, but this argument, regurgitated ad nauseam, is not THAT great of an example when you think about it.
I was going to say... I think people in general have this weird understanding of the word "dangerous". Just because something is not movie-level dramatic and/or does not generate over-the-top violence does not automatically make it less dangerous. In a sense, just the fact that it is benign on the surface and allowed to embed in our day-to-day life is what makes the upcoming rug pull so painful.
And I am saying this as a person who actually likes this tech.
> So maybe instead of demonizing the AI companies, we have a nuanced debate about this tech and propose solutions that are best for our society?
These are not mutually exclusive. Companies exhibiting shit behavior should be demonized. That's not an indictment of the underlying technology itself.
Lee Vinsel's criti-hype article nailed this 5 years ago, before we even had the chatbot economy we do now: https://sts-news.medium.com/youre-doing-it-wrong-notes-on-cr...
Hey ma, I use very dangerous tool now. I am OG.
Glad people are finally catching on.
There’s about $1 trillion that needs to be paid off.
AI shaping warfare vs. using AI to justify outrageous warfare
Would you like me to list the applicable sections of the Geneva Convention?