The fact that management signed off on measuring AI use through token usage shows how incompetent management really is, including at allegedly technical companies like Amazon. Tokenmaxxing was an entirely expected and rational response. In other words: you measure employees in stupid ways, you're going to get stupid behaviour as a consequence.
So my assessment of the current mania is that it’s basically a management variant of Pascal’s wager.
If you as a “leader” refuse to go along with the crowd and you’re right, then after the dust settles you look like someone who guessed right. Oh and now we’re in a recession so you are probably having a bad time regardless. You maybe get one promotion, congratulations.
If you refuse to go along with the crowd and you're wrong, you look like a Luddite, you probably got fired at some point along the way, and your reputation for judgement is hurt.
If you do go along with the crowd and the crowd is wrong, you are just in the same boat as everyone else. You are probably about the same as if you went against the crowd and you were right, possibly even better, because it can take a while to be proven right and you could be hurt in the interim.
So, I think, once something like this picks up enough steam, it’s just logical on a per individual basis for everyone to go along with it, regardless of how they feel about it internally.
One argument I have heard in favour of this is that management knew this would be a side effect, but that it's more important to have people engage with AI as much as possible simply to explore what is actually possible. You are effectively knowingly wasting money in the expectation that you might learn something useful that will be more valuable in the long run.
If companies are suddenly willing to spend money on letting their staff experiment, why not let them experiment with what they want to? They probably know more about technology than you do, otherwise you wouldn't need them.
My questions for that approach are: Why treat AI as a special technology that needs enterprise-scale exploration to come up with a useful application? And why not take the alternative approach of identifying the subset of people who have indeed found solid uses and spread their best practices around?
The top-down approach to encouraging (mandating?) AI usage strikes me as infantilizing to the workers, who are perfectly capable of choosing which tools they use and when.
In the early nineties, it was common for experienced electrical engineers to keep on using schematic-entry digital design and look down on RTL and synthesis tools, despite the fact that the latter was already way more productive. At some point, management had to put their foot down and force everyone to switch to using synthesis.
It's not unreasonable to assume that many people are set in their ways and unwilling to change their behavior without a bit of a push.
There were no synthesis algorithms at the time that would map VHDL or Verilog designs onto domino logic [1] elements. I believe that most of the work in the synthesis-to-domino-logic area was done at the beginning of the current century.
So DEC's engineers and, I think, Intel's engineers were doing work using schematics well into the 21st century.
[1] https://en.wikipedia.org/wiki/Domino_logic
I guess the only difference between this and your example is the concrete efficiency gain from RTL and synthesis tools versus dubious applications of AI. I do agree with the second point about pushing people to explore new ways of doing things though.
Leaving aside the ethical aspects of using AI (not because they're not valid, because they're off topic for this discussion), in my line of work, the capabilities and productivity improvement of AI are staggering. Most of it is not writing the new code, which is but a small part of chip design, but everything else.
I can't give a concrete work example, but here is an experiment that I ran a month ago: https://tomverbeure.github.io/2026/04/12/AMIQ-License-Key-Ge.... If it can do that, it's not hard to imagine similar use cases related to root causing complex simulation failures. It is frighteningly good at that.
> use cases related to root causing complex simulation failures.
That's a pretty interesting use case. I assume this is for RTL simulation given the thread, but how do you connect the output of the simulator to the AI?
For a small case, a colleague took a screenshot of waves in the waveform viewer and pasted it into the AI tool. It worked.
But for large cases, use tools to extract all the interfaces from the waveform file and save them as a text file, or add $display statements in the Verilog itself to dump the transactions. A SOTA LLM will eat it up. You point it to the RTL and a log file with hundreds of thousands of lines, and give it a few lines to explain how it is supposed to behave. Just tell it "My simulation is hanging. Figure out why." Wait 15 minutes and it will tell you why it hangs and which line to change in your code to fix it.
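To make that concrete, here is a minimal sketch in Python of the kind of pre-filtering step I mean (the log format and the "xact:" tag are invented for illustration; adapt the pattern to whatever your $display lines actually emit):

    # Keep only transaction lines from a huge sim log, so the LLM's
    # context isn't wasted on noise.
    import re

    pattern = re.compile(r"xact:")  # e.g. $display("xact: t=%0t addr=%h", ...)

    with open("sim.log") as src, open("txns.txt", "w") as dst:
        kept = 0
        for line in src:
            if pattern.search(line):
                dst.write(line)
                kept += 1
    print(f"kept {kept} transaction lines")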
I've done the experiment after the fact: I had spent ~3 days fixing 3 complicated bugs. I then rolled back the code and told it "Here is the spec. Find all the bugs in this code." It found all 3 bugs in around 30 min. That's when I realized that things won't be the same anymore. (And don't get me wrong: I love debugging simulations.)
Have you tried to change your HDL to something more modern like Bluespec System Verilog or, god forbid, anything embedded into Haskell or Scala?
I read that BSV source code is about three times shorter than a similar design in Verilog and also has three times lower defect density (defects per significant line of code). So just by changing the HDL from Verilog to BSV, one can have nine (9) times fewer defects in the design.
>And why not take the alternative approach of identifying the subset of people who have indeed found solid uses and spread their best practices around?
A bottom-up approach has a far better chance of finding those particularly good use cases, and if you lean on the people who found those fits, they're more persuasive than top-down edicts. They actually know what they're talking about. If the point is to leverage AI for better work outcomes, someone with your experience is far more valuable than "here's a dashboard, make the number go up," which seems to be what's going on at Amazon.
It is completely unreasonable to assume that. Tech people are so hungry for productivity gains that they regularly will defy management forbidding them from using a tool, because the tool is so good they feel they have to have it.
If LLMs truly are as good as their proponents say, engineers will use them even if management outright forbade it. The fact that people aren't using them, and have to be forced, is extremely strong evidence that they are not in fact that useful.
> engage with AI as much as possible simply to explore what is actually possible
"Research" isn't part of my job title. If you don't know what's possible then why are you deploying it? You should be telling _me_ what's possible. I mean, you _paid_ for it, how can you possibly not know what you were getting?
> in the expectation that you might learn something useful that will be more valuable in the long run.
"I'll take `what even are profits?' for $200, Alex."
An overly generous steelman in my opinion as well. Have 10% of your employees focus on finding ways to properly leverage the new technology; don't pressure 100% of your employees with bullshit metrics.
No, it's literally because some dumb manager read a blog where an influencer said that you ain't a real AI native and ain't worth shit unless your developers are spending $XXXX on tokens each day.
It's that simple.
(Never mind that these bloggers are just writing ad copy for cloud providers.)
That still sounds like a dumb strategy. Or, more likely, post hoc rationalization.
You reward me for wasting tokens and punish me for not wasting them, so I will maximally waste them and won't "explore how to make them useful". The latter wastes fewer tokens, and that is punished.
I personally know two people who are doing exactly that after a mandate rolled out at their work. The measurement is "tokens spent", and since they weren't finding many cases that required a lot of tokens, they simply started running agent loops feeding each other.
Absurdly wasteful but Goodhart's Law almost never fails.
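The mechanics of such a loop are trivial; a sketch, where complete() is just a placeholder for whatever LLM API the agents actually call (nothing here is a real client):

    def complete(prompt: str) -> str:
        # ...call the model here; echoing is enough to show the shape...
        return "Expanded: " + prompt

    def burn(rounds: int) -> None:
        msg = "Review and expand on the following report."
        for _ in range(rounds):
            reply = complete(msg)   # agent A spends tokens
            msg = complete(reply)   # agent B spends more and feeds A

    burn(1000)  # spend scales with rounds; nothing useful is produced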
Management loves numbers because they’re the only things you can objectively compare as X > Y.
It makes for pretty charts, extrapolations, and projections.
It doesn't matter if the numbers are not particularly correct. As long as the data-gathering step can be justified, it'll do. Though bonus points if making the number bigger is a good thing (vs. tracking something like the number of sev 1 issues).
Yes, but also because management is largely unqualified to be managing the stuff they are hired for. So they regress to numbers because they otherwise cannot participate in anything technical.
If it's stupid and it works then it's not stupid. Sometimes executives have to use blunt instruments to turn around the culture of a hidebound large organization. When Jeff Bezos sent his 2002 API mandate it might have seemed stupid at the time and yet it worked.
I have recently played around with lots of measurement data, and you can totally dump everything into context and let Claude try to analyze the data that way. It burns through a lot of tokens. It is smarter to save the data to disk and let Claude write scripts that handle/analyze it. It's much faster, the results are much better, and you save a lot of tokens. But I guess Amazon prefers the first approach.
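A minimal sketch of that second approach, assuming a made-up measurements.csv with a "value" column; the point is that the model writes and runs something like this instead of reading the raw data:

    import csv
    from statistics import mean

    # Summarize the data on disk; only these few output lines ever
    # need to enter the model's context.
    with open("measurements.csv") as f:
        values = [float(row["value"]) for row in csv.DictReader(f)]

    print(f"n={len(values)} mean={mean(values):.3f} min={min(values)} max={max(values)}")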
I don’t have any specific inside knowledge about Amazon, but I would hazard a guess that the first approach also provides better training material for the LLM.
> You measure employees in stupid ways, you're going to get stupid behaviour as a consequence.
I worked for a healthcare tech startup that made everyone wear fitbits and you got cheaper health insurance premiums if you averaged a higher # of steps every day. People were putting their fitbits on drillbits and whirring them around to log like 20,000 steps a day.
This is Matt Garman, the ultimate MBA. Bonus for sure tied to tokens-per-quarter, which is the 2026 equivalent of measuring engineers by lines of code...
This is why AWS has been bleeding good engineers for years. What is left is starting to look like Boeing post the McDonnell Douglas merger...
They took away a quarter of their documentation pages' limited real estate for AI doc shorts nobody asked for, nobody needs, and nobody can disable.
Most productivity metrics are stupid, vain attempts at avoiding doing real management work. If you are actually interfacing with your subordinates regularly, as managers should, it will be obvious who is pulling their weight and who isn't, no need for arbitrary statistics that are easily gamed.
Or maybe they plan to review how effective high usage engineers have been next cycle and the tokenmaxxers will get bit in the ass when they have little to show for all their wasted tokens? Performance metrics can, and do, change on a dime and tokenmaxxing seems short sighted when management can look at old logs.
Senior management let go our localisation staff. Now they want us to use AI to translate. They still want manual review.
We use GitHub Copilot at work; we get a measly 300 requests, with budget to go over if necessary. Opus 4.7 or GPT 5.5 would eat all of those up in a day. Are we supposed to be using more than the allotted amount? Do management see that as a good thing? Or is it best to stick within the allocated amount? Who knows? Management are playing games everywhere, it seems.
It's not just with AI though. It's who they get their advice from. One of my friends was cribbing to me about his company's management: apparently someone in management discovered that Postgres is a really good database, and free, so they authorised the IT department to migrate their application from Oracle Cloud to Postgres as it will "save a lot of money" (true, but...). However, they aren't willing to shell out for commercial solutions (like EnterpriseDB, which would still be a lot cheaper than Oracle), and are insisting that the team also recreate "all and every" feature that Oracle DB has and their application uses but that Postgres lacks. After all, "If Oracle can do it, why can't you!?"
Memories of me and my three-developer team being told "we need to use Excel, but in a web browser. So just make an app that does everything like Excel."
47 years if you count from the first release. But now you have this super intelligent thing that enables anyone to create a billion dollar business - you have no excuse!
"Hey Opus, create me an fully tested code base for Oracle-like DB from scratch. Don't overcomplicate it, so it should be ready with when I get back from lunch"?
I had a similar experience but with MSSQL, was invited to join some meetings with MS Sales folks. I quickly learned the project was never meant to succeed, but was simply leverage to negotiate a better contract.
Requests are such a weird metric. We have a token limit via Copilot (unless I'm misunderstanding our setup), and most of my "features" burn 1 to 2% of my token limit per month on 4.7. But I don't admin our plan, and I'm unsure what we actually get. VS Code just gives me a percentage-of-tokens-remaining metric.
One of the weirder things about all this is how arbitrary and non objective the billing structure seems. One of the reasons I'm happy to use it at work, but won't ever personally subscribe. It's so opaque.
Copilot is currently based on requests (1 prompt = 1 request, with multipliers for different models). At the beginning of June the billing structure will change to just be normal API cost. Your features are going to start burning 10-20% of your token limit using 4.7
At my employer, everybody who has an opinion that matters is convinced that all of the overages by the high users will be more than made up for by the people who barely use it.
Maybe they’re right. But it’s really hard to see how.
We've raised, trained, hired and promoted generations of business people who push utter nonsense, understand nothing but optimizing for bad metrics, and orient solely around short term results. It's hard to look beyond modern corporate America when looking for causes of the fall in our living standards. This AI tokenmaxxing nonsense is just another rung on the same ladder to hell we've been on for decades.
How do you burn 300 requests in a day? From my Copilot usage, Opus consumes surprisingly few requests to do a lot of stuff. It isn't paying by token but instead by prompt or something.
I guess you need automation for that. Run Claude with cron to find vulnerabilities, suggest and implement improvements, automatically dig through the backlog.
Copilot didn't charge for subagents. You could do an insane amount of work with dozens of subagents with a single request and a deep enough prompt to kick it off.
I set up entire virtual teams (dev, QA, product, reviewers, etc., with the initiating model just acting as the agent manager to keep its context minimal) to one-shot some stuff, and it kept churning and making progress.
Those days are just about over with the change to token pricing but for a time....
> whoever spent $600 on Anthropic last night, great job leveraging AI!
> But to the person who spent $23 on Uber Eats please remember our limit for food is $20 per meal
I work at Amazon (standard disclaimer: just sharing my own experience, not an official spokesperson, etc.)
I can't say that this isn't happening, but at least the parts of the company I get visibility into, what the article describes isn't my experience. There is a lot of interest in using GenAI, but people are mostly getting kudos around creative uses for GenAI, not just for raw amount of tokens. For most scaled GenAI efforts, there is a lot of focus on output metrics (metrics like accuracy, number of findings, number of things fixed, and so on).
I also work at Amazon and my coworkers are playing 20 questions every morning to keep their metrics up. Like anything else there it depends on your org & managers.
I'm surprised how few comments are written with the prior that Amazon managers aren't stupid or uninformed about how incentives work.
My guess would be that someone created the leaderboard without a lot of consultation with managers, and that some employees feel a competitive urge to try to "win" the leaderboard by burning tokens.
Mmm, no, I don’t think it’s equivalent. I think they know that if you make the work hard, some employees will have trouble keeping up and will do things like peeing in bottles. And they’re OK with that, because they think there are enough people who can keep up that they can push the weaker people out. I think they believe that the peeing in bottle is relatively rare. I’m unsure whether that’s right or not. It’s been reported that it happens, but I have no sense whether it’s common.
Amazon is a massive company; your single experience is worse than an anecdote because there is no way to verify it.
What we can verify is how Amazon already treats workers: they will surveil anyone within their systems regardless of the futility of said surveillance. Why are we supposed to not believe they are using LLM systems as a means to further control their expensive employees, keeping them from unionizing or seeking out solidarity with fellow workers? All LLMs do is give tyrannical managers more power to hold over other workers; said workers are forced to engage in self-alienation for fear of losing their jobs, or forced to do meaningless work, as that is what's being tracked (and what LLMs excel at producing).
Hardly a good proposition for any worker.
I'm sorry but I fully do not believe you. This is a company that fires workers for taking too long of a bathroom break where said workers piss in bottles for fear of getting fired and you're going "hey guys, it's not too bad. Only some workers get whipped, others don't!"
It is damn fascinating to see just how many (big, serious) organizations are creating unnecessary internal strife over this.
One of my favorite heuristics/quotes applies here: "no matter how good the strategy, occasionally consider the result."
Want to know if AI is working for your org? Ask yourself/employees to "show me the result." That requires judgment and taste (is the result something of value, or just the appearance of work having been done), but it will also save you a ton of stress and disappointment later.
When I was at Amazon, I suggested that promotion to L7 people manager should require having that reverse-tattooed on your forehead so that you saw it in the mirror every day. Every time some mandate would come down from on high, it was clear that nobody had thought of the second-order effects, malicious compliance, or just outright gaming.
I was thinking about this recently. I tend to run my AI at low context because the documentation states that models degrade with higher context usage.
However, I see tons of people on LinkedIn with ways of backing up context, not wanting to lose context, etc.
This seems like another way the system is being misused. Higher context usage also uses more tokens. I suspect you get worse (and slower) output than you would from a dense, detailed context.
I think there are two motivations that get blurred pretty quickly:
a) you find a particular context that executes well and want to preserve parts of it or not have to repeat explanations
b) you want to continue a session so you don't have to rebuild the context from scratch
I think A is something where it's totally reasonable to preserve pieces as part of like a prompt library or equivalent, or directory-specific agent files, that kind of thing.
I think B is much more likely to lead to problems if you do it over a long time, but it can be pretty useful for getting the last drop of juice out of the metaphorical orange.
I think the antipattern (that I've done myself, admittedly) is swapping between different restored contexts for different tasks or roles. At that point you should either be converting it to more durable documentation if warranted, or curating it more specifically than "restore the entire context", even if it's just a one-off.
I think the answer for both cases is supposed to be finishing a "good" session with "based on what you've learned about this project, please update the CLAUDE.md/AGENTS.md/README.md files."
Ideally that replaces the back-and-forth cycle of "it's this", "no, it's that", "it's that for reasons XYZ" with a single ingestible blob that gets the agent up to speed.
I've actually had mixed results with that without some manual curation - sometimes by the time a session has gone on for a while (heaven forbid it go through multiple compactions), the agent has so much extraneous/incorrect context for docs that it can't write documentation effectively.
Sometimes it's better to dump context incrementally, reinitialize the agent with a subset of the context, or manually prime it, then ask it to write documentation as a focused task.
I think the more you anthropomorphize it the more it feels like "but I don't want to have to start all over getting it up to speed, this instance already knows all the important stuff."
If every exchange is treated as an independent query/response then it's much easier to see how cutting out the fluff using a combination of its summaries and your own helps stay focused.
I don't think you need to win this, you just need to not be near the bottom of the board. But just in case, I spam tokens like it's the Chuck E Cheese roulette game.
I think the best move in this case is to be solidly in the middle of the pack: don't want to be near the bottom, don't want to be a tall poppy when the backlash comes.
People who don't code (management, leadership) think AI will 10x the company, but it's really a 40-60% boost. And engineers have to feign adopting these tools for fear of layoffs.
Where? What industry, what kind of projects? The only one where I can imagine it to be true is vulnerability research, and I imagine all the low-hanging fruit to be picked soon
It will spin up a boilerplate U-Boot or BSP config no problem. I still go in and manually check and add peripherals, but Opus 4.7 is terrifyingly smart.
Need to modify or add a new peripheral? It's there, no problem. Or in a bare-metal project, I can point it at an STM32 CubeMX starter repo and ask for a feature ("set up the ADC on pins 4 and 7, ask me for parameters") and it's just done. I do in a day what would probably take me 2.
It doesn't help me with reviewing others' work, or planning (I maintain that these are manual tasks). So yeah, I agree with the 40-60%. The parts of my job it helps, it really helps.
Yeah, just had Codex/Gemini write me an nRF52 bootloader that fits in under a 4k flash sector with OTA and DFU support (well, the app does the OTA download, then the bootloader validates and decompresses the image). Works best if you let them use OpenOCD on a real device; then they can iterate until it starts working.
I didn't even need that bootloader, I just didn't like the fact that the Adafruit one takes too much space :)
I'm confused, isn't the whole point of using the STM32CubeIDE that all the peripherals, like say setting up an ADC on pins 4 and 7, are checkbox features?
Yeah, the industry has no issues selling $5 bills for $1. Why is this a good thing for society again? That the public subsidizes VC to no shared gains?
I've been hearing "the new model is so much better than the one from 6 months ago" every few months since 2023. It's never been true to date, so please understand why I am skeptical that it suddenly became true this time.
I work on an ETL platform and it definitely is a huge boost in certain things, but a drain in others.
We started working on a new product a few months ago and it's really dangerous up front on an empty code base. It can quickly write more code than you can comfortably understand. The more serious danger is when three people are all doing that at once. I had to bring this up at meetings and try to get a better review culture going.
Now that we're a few months in and changes are more targeted additions to an existing system we're happy with, it's _huge_ (which has been my experience on our existing product). I can drop a brief paragraph I speech-to-texted into my agent, give it a general starting place (where I imagine the issue/feature extension point is), and then tell it to do some research and propose a change. I'd guess it's about 50% of the time that I have to update its implementation plan. Then I let it run (my favorite is setting this up before a meeting) and come back. Then we have to review the code and go from there.
Definitely a 50%+ speedup in some cases, but not all. It's also great for problems I'd been procrastinating on, as it reduces friction so much.
What's funny to me is the seeming lack of AI usage among management despite so much of their work being amenable to AI acceleration.
At my company(big name, AI beneficiary), middle management seems to mostly be concerned with shuffling chairs on the deck of the Titanic while they wait for their stock to fully vest. There is very little interest in improving anything, just an obsession with risk avoidance and performative sideshows whenever upper management wonders why execution is so poor.
At my company middle management is using Gemini to churn out reams of useless documents in lieu of anything approaching "program management" or similar
Yesterday, I had my first experience of a mid-level dev stuck on a problem, coming to me with Codex and Copilot summaries of what those tools thought the problem was, which turned out to be completely off-base.
Codex was pretty sure something was wrong with the response object being returned by the endpoint in question. It turned out there was a conversion method applied to the endpoint response, which mutated its input. This method had been running w/o problems for a while, until the dev put it in a useEffect. At this point, React dev mode's policy of rendering everything twice kicked in, which caused the second pass through the conversion method to fail on the now-mutated input object.
Codex never even hinted that the conversion method mutating the input could be a problem, nor anything about React dev mode rendering everything twice (specifically to catch problems like this). Apparently, neither of those came up much in its training data.
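Distilled, the failure mode looks like this (a Python rendering of the pattern, with hypothetical names, not the actual code; in the real case, React's dev mode supplied the second call):

    def convert(resp: dict) -> list[float]:
        nums = [float(s) for s in resp["values"]]
        resp["values"] = None   # the hidden mutation: the input is consumed
        return nums

    resp = {"values": ["1", "2", "3"]}
    convert(resp)  # first pass: fine
    convert(resp)  # second pass (the dev-mode re-run): TypeError, values is now None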
My point is that this dev seems to have lost, in a few short months of writing everything with Codex, the ability to trace an error from its source (the error trace was being swallowed in a Codex-written catch block that spit out a generic error message). He was completely stuck and just kept doubling down on trying to get Codex to solve the problem, even checking with Copilot as a backup. I'm not optimistic about where this is headed.
Yes, eventually. Largely because he would have written all the code that got to that point and had a mental model of the entire flow instead of it being a gray box.
The new bottleneck for development at work is code reviews. Devs are creating whole features that would have taken months in only a couple of weeks, but code reviewing all that is a slow, painful process.
This is why I'm not that excited about vibe coding. The bottleneck has always been understanding what the heck is going on.
In my view you should 1) use AI as a tool to help you learn and 2) write boilerplate you could have easily written yourself. Getting it to think for you is counterproductive (at least until it replaces us entirely).
The most important skill is to not stand out of the crowd. This is how you survive in the Soviet Union, in the army, and clearly also at tech companies.
Corporate emails asking "why are you not using the <insert-llm> paid plan???" came very, very rapidly. So naturally, everybody started using it blindly so that the dashboard metrics are all high.
I have mixed thoughts on this. These thoughts are my own. On the one hand, it's objectively silly to pretend like we've solved the age-old problem of measuring developer productivity. Metric-obsessed leadership can also be intolerable and counterproductive, and it's a good way to paint yourself into a corner, undervaluing your best talent and overvaluing your mediocre talent.
That said, I'm kind of having a blast using CC in corporate with all the connectors available at our disposal, and I'm baffled how little some of my coworkers know about what's available and what the capabilities are. So it's clear that perhaps some encouragement is prudent for those who are slower to embrace new technologies, but I'm not sure tokencounting and tokenmaxxing are the answer.
A company requires a specific % of code coverage but doesn't give developers enough time to actually write tests. AI can be used to generate the tests needed to get past the code coverage gate and avoid being fired for not working fast enough.
Yes, we can crawl our entire internal documentation via LLM. Want to know if someone is already working in the space of your latest idea? Ask Claude, it hits the internal search APIs and finds docs and references directly relevant to your query. There are a lot of separate document stores so this took a lot of effort previously. I can also query Slack, Outlook, etc. I don’t understand the cynicism in your comment.
Not OP, but within Amazon we have pretty good connectors around integrating with our task system (so you can pretty easily ask your GenAI tool "look up the next item in our sprint board, let me know if you have any clarifying questions, but otherwise start implementing it"). We have decent integration with internal wiki and search systems, so it's easier now to figure out the best Amazon way to do some coding task. And Amazon being a big doc-writing company, there are lots of great tools for helping improve all phases of writing.
> That’s my latest joke — that we’ll have to pretend like we used the tools so they can feel validated they’ve spent all this money on hyped up technology. So, yes, it’s em-dashes and “it’s not just this, it’s that …” so they can hopefully leave us alone
Amazon has this Kiro product they are trying to sell and they are using their own employees to improve the product and their own LLM. They are giving uni students 1000 credits/month and running competitions.
I, too, can easily use more tokens to achieve the same task. I can give worse prompts. I can fail to make it clear to the tools where to find the information they need. I can ask them to think hard when they don't need to, and tell them not to think when they do need to. I can give vague, open-ended instructions. I can generate code that sucks and throw it away.
Even if I'm in the middle of using the AI seriously but then want to rename a variable, I can't do that myself because it'll confuse the AI, so I'll tell it to rename. That seems pretty wasteful.
That sounds like too much effort. Better to have the AI write you a 20k word manifesto about how much you love your employer and then include that in the context of every request.
I wish I could do some tokenmaxxing at my company. The only plan available is maxed out for the month after a few days of serious work, but the AI “experts” are declaring that nobody needs that much. It’s really frustrating to constantly have to juggle quota and lower models. All this while the declared goal is to reach 50% of code written by AI.
Each day I send the AI on a fruitless mission like "summarize the entire codebase" while I do my actual work, which involves actually using the AI for real work. Wish I could disable the token cache to make it spend more.
Hunger Games in the age of AI: eliminate/automate your colleagues' jobs until a single software engineer is left (or two, if the aristocrats see it as good PR).
At least for some people I know it’s not necessarily because there’s pressure from leadership, but because it’s funny that the org spends like $15,000/mo writing HP fanfic or whatever
This kind of thing is totally fine if it's really being done (and it's believable, because Meta internally incentivized tokenmaxxing too). When you're trying to change the behavior of a large number of people, only blunt instruments are available if you want quick outcomes. The edge cases where people Goodhart very hard are all right; you can just human-in-the-loop them away. The opportunity cost for most organizations of not moving to use AI tools as productivity enhancers is currently gauged by them (rightfully, in my opinion) to be too high to allow for osmotic adoption.
Most people watch sea changes come and go. They all have a story of how they "could have bought Bitcoin when it was $100" or whatever. In an org, you don't want to have the story of "we could have done that when nobody else had", so you incentivize adoption of the tool as hard as possible and hope that dipping feet in the water makes people want to swim. If you don't already have a culture of early adoption (and no large company can), then you have to use blunt incentives. I don't think anyone has demonstrated otherwise.
Even if you don't, it's the only way to ensure adoption, and most workplaces consider a lack of adoption a greater danger than a Goodharted adoption. Overall, I've observed that the US has a very low barrier to starting companies, so, considering companies of all sizes are doing this, if it's a mistake those startups will get beaten by the ones doing other things.
> They said the move reflected pressure to adopt the technology after Amazon introduced targets for more than 80 percent of developers to use AI each week, and earlier this year began tracking AI token consumption on internal leader boards.
This measuring of tokenmaxxing as a proxy for something beneficial to the company has got to be the single dumbest thing I have ever heard of in my entire software career.
It would be like some company in the dot-com era measuring employees' internet download traffic as a proxy for productivity or internet-pilledness.
Why not just reward employees based on who submits the largest expense claims? That might have some correlation to work too, right?!
In the corporate world it's impossible for any one person to tell what's going on across multiple domains due to the complexity. If I tell you the Zorbulon API is creating 30% more flargs (which is critical for Twiddle operation), I often just have to take your word for it.
Hell, I'm in the bowels of Google as an IC and it's hard to understand what adjacent teams are doing. Even harder for management that never gets their hands on anything.
So while you know engineers are probably bullshitting you with fake work, you can at least turn around and tell your supervisor the numbers. It's all a game of plausible deniability.
They used to measure LOC, but this is even dumber than that. The charitable explanation is they just want to make sure nobody is completely avoiding AI use.
Measuring token usage as a productivity metric is like measuring keystrokes. Don't mind me, just over here rolling my face on the keyboard for an hour so I can take Friday off...
...except each keystroke has an associated cost, the sum of which may equal or exceed my salary.
What's nuts is how many intelligent people— people who would say "of course 'LOC written' is a terrible measure of developer productivity, of course only a dysfunctional company run by morons would do that"— have immediately bought into this. Amazon has token use mandates, I've heard Google has token use "leaderboards", friends at startups say they all get graded on tokens used. It's like watching your sensible, levelheaded friend go completely off the rails; collective madness.
Some people respond to incentives. The rest of us are just trying to do our jobs and will probably be fired and then later consumed by the basilisk. We are living in an age of extremophiles.
I think that although we wish to consider ourselves smart and really intelligent, we run on biological machines and clocks that, evolutionarily, haven't changed much since 1518, or even since the times when we used to hunt and forage.
Another stupid meme-latching name. Don't normalize these *maxxing nonsense words and just use plain language. Let's see, maybe just say they were optimizing for token count?
There should be an anti-leaderboard that highlights people under a threshold. Not trying to learn how to use AI while working at a company like Amazon is almost certainly a bad thing, and cause for looking into why.
tokenmaxxing is silly, but if a developer or manager NEVER uses AI then I do think that's cause for concern as it shows a genuine lack of curiosity... perhaps tokenflooring makes more sense than tokenmaxxing
This is true. The creepy thing is when someone outside your reporting chain is suddenly pushing you to use some new tool, rather than asking you to ship a feature.
The worst place I worked, you had to install an app like Time Bro and you had to account for all 8 hours of the day; the app logged per minute/hour.
This makes me think of the tulip bubble. Using AI as much as possible just so people think you are productive is like buying tulips so that people think you're affluent.
This reads more like it's a single employee's gripe than a real thing that's happening. They're not using the metrics in performance reviews, and it's a new AI tool that AWS probably wants legitimate usage data out of.
That said, if you can't figure out how to use AI in a software job you should look into it. Not using AI at this point is a lot like not using CAD as an architect.
It is being used in performance reviews, source: recent Amazon SWE.
They also use a bunch of dumb metrics: total PRs submitted, total comments made on PRs, etc. To the point that there are multiple heavily used internal tools to game these metrics, e.g., auto-commenting LGTM on any approved PR. Thus making the metrics even worse than they would have been otherwise.
> Amazon has told employees that the AI token statistics would not be used in performance evaluations.
> Managers are discouraged from using token use to measure performance, according to a person familiar with the matter.
Like CAD and architects, if you're not using LLMs while coding it's an issue, but Amazon is very clear that this isn't an official metric. I would believe managers know how many tokens you're using, but it sounds like they just interviewed a disgruntled employee who didn't like AI and published it.
>but Amazon is very clear that this isn't an official metric.
You're replying to an Amazon employee who says they are being used in performance reviews, in a comment thread on an article where 2 other Amazon employees say that their token usage is being tracked and they feel pressure to maximize token usage.
Do you have first hand knowledge to refute these 3 people with first hand knowledge?
The CAD thing is incredibly weird. I've never known an architect who had their CAD usage minutes tracked.
Btw, I'm at a big tech company and I know many people who are "token maxing". It's very common.
I have never pushed incorrect code that results in catastrophic failure due to AI, are you sure you're using the tool correctly? If you didn't know what you were doing with CAD software, you could absolutely generate incorrect designs.
It's 90% real where I work, which is not Amazon but is a peer company. They haven't explicitly said that tokens are used to measure performance, but when managers are posting token usage leaderboards each week with no further explanation, we take the hint.
I think it’s real. I’m at a huge SV tech company and at least half the people here are “token maxing”.
AI is genuinely useful for many tasks. But 2x or greater business value from engineering orgs isn’t it. And even if it was business are terrible at measuring value added on an individual basis.
What they can measure though is token use. I’ve heard the same thing from other large companies my friends work for.
It’s bad enough that I’ve moved a significant amount of money out of US large-cap stocks.
"They're not using the metrics in performance reviews" means almost nothing. It doesn't mean managers at every level are not frequently looking at those numbers. Anyone from Amazon will tell you how much "hint" they get from management about using those tools.
Amazon has far more roles than just software. PMs, FC area managers, managers - if your job involves writing anything you're expected to be using AI in some capacity.
I have been not using AI since the beginning and nothing has changed for me. I have only watched my coworkers and the industry get dimmer, and get faster at getting dimmer. I have witnessed professionals become total amateurs, treating "well, the AI generated this unreviewed report" as their basis of knowledge.
No thanks I’ll just watch y’all slip down the slope.
> That said, if you can't figure out how to use AI in a software job you should look into it. Not using AI at this point is a lot like not using CAD as an architect.
When LLMs are capable of actually doing a good job, then it might be like that. We are not there yet, and we may never be.
Managing a lot of people at scale is messy and you have to use crude solutions. It's impossible to know everything that's going on.
If you were a manager you wouldn't do any better. Out of the crooked timber of humanity, no straight thing was ever made.
> The fact that people aren't using them, and have to be forced, is extremely strong evidence that they are not in fact that useful.
See my other reply in this subthread. For my line of work, they are in fact ridiculously useful.
You include those only in the second round, along with guidelines and recommendations on how to use it effectively.
"Research" isn't part of my job title. If you don't know what's possible then why are you deploying it? You should be telling _me_ what's possible. I mean, you _paid_ for it, how can you possibly not know what you were getting?
> in the expectation that you might learn something useful that will be more valuable in the long run.
"I'll take `what even are profits?' for $200, Alex."
An overly generous steelman in my opinion as well. Have 10% of your employees focus on finding ways to properly leverage the new technology - don’t pressure 100% of your employees with bull shit metrics.
It's that simple.
(Never mind that these bloggers are just writing ad copy for cloud providers.)
You reward me for wasting tokens and punish me for not wasting them, I will maximally waste them and wont "explore hownto make them useful". The latter wastes less tokens and that is punished.
It's quite possible they aren't trying to measure performance but are literally just trying to increase token consumption to feed the bubble and hype.
Plus, pressured employees may find new unique use cases for AI.
It's like if your goal is inflation: you give out tons of money, and as long as it's spent, you achieve your goal.
> The first step is to measure whatever can be easily measured. This is okay as far as it goes.
> The second step is to disregard that which can't be easily measured or give it an arbitrary quantitative value. This is artificial and misleading.
> The third step is to presume that what can't be measured easily really isn't very important. This is blindness.
> The fourth step is to say that what can't be easily measured really doesn't exist. This is suicide.
— Daniel Yankelovich, "The New Odds"
[0] https://en.wikipedia.org/wiki/McNamara_fallacy
> When Jeff Bezos sent his 2002 API mandate it might have seemed stupid at the time and yet it worked.

https://nordicapis.com/the-bezos-api-mandate-amazons-manifes...
"You spent $23, over the $20 food limit. Be more careful next time. You spent $600 on tokens, $200 more than the average. Congratulations!"
LOL, I'd imagine even Amazon HR would show little restraint in showering such praise.
> "no matter how good the strategy, occasionally consider the result."

― Charlie Munger
> isn't the whole point of using the STM32CubeIDE that all the peripherals, like say setting up an ADC on pins 4 and 7, are checkbox features?
https://wiki.st.com/stm32mcu/wiki/Getting_started_with_ADC
It also generates a ton of bloat and comments.
My experience is it will attempt to read from the wrong memory block, resulting in garbage. But that was a while ago, so maybe LLMs have gotten better.
People churning out slop is slowing me down and the full effects of it won't be felt for a while.
Codex was pretty sure something was wrong with the response object being returned by the endpoint in question. It turned out there was a conversion method applied to the endpoint response, which mutated its input. This method had been running without problems for a while, until the dev put it in a useEffect. At that point, React's Strict Mode policy of rendering everything twice in development kicked in, which caused the second pass through the conversion method to fail on the now-mutated input object.
Codex never even hinted that the conversion method mutating its input could be a problem, nor mentioned anything about Strict Mode rendering everything twice (a behavior that exists specifically to catch problems like this). Apparently, neither of those came up much in its training data.
My point is that this dev seems to have lost, in a few short months of writing everything with Codex, the ability to trace an error back to its source (the error trace was being swallowed by a Codex-written catch block that spat out a generic error message). He was completely stuck and just kept doubling down on trying to get Codex to solve the problem, even checking with Copilot as a backup. I'm not optimistic about where this is headed.
In my view you should use AI 1) as a tool to help you learn and 2) to write boilerplate you could have easily written yourself. Getting it to think for you is counterproductive (at least until it replaces us entirely).
Everyone I talk to nowadays has KPIs tied to AI usage in their performance evaluation.
It's astonishing how society forgets.
That said, I’m kind of having a blast using CC in corporate with all the connectors at our disposal, and I’m baffled by how little some of my coworkers know about what’s available and what the capabilities are. So it’s clear that perhaps some encouragement is prudent for those who are slower to embrace new technologies, but I’m not sure token-counting and tokenmaxxing are the answer.
That may be an "enterprise SaaS is shit" problem, but I'm just happy that my employer now has a wiki search that works.
> That’s my latest joke — that we’ll have to pretend like we used the tools so they can feel validated they’ve spent all this money on hyped up technology. So, yes, it’s em-dashes and “it’s not just this, it’s that …” so they can hopefully leave us alone
I have an FT subscription and they keep moving toward this kind of narrative-first reporting to get clicks. It’s no longer a believable paper.
If I do all of this, do I get a promotion?
Filing JIRA tickets, writing updates, opening PRs, having AI review PRs. This will all use tokens.
No need to tokenmaxx; you will end up burning tokens with just regular AI usage.
Most people watch sea changes come and go. They all have a story of how they "could have bought Bitcoin when it was $100" or whatever. In an org, you don't want to be the one with the story of "we could have done that when nobody else had", so you incentivize adoption of the tool as hard as possible and hope that dipping their feet in the water makes people want to swim. If you don't already have a culture of early adoption (and no large company can), then you have to use blunt incentives. I don't think anyone has demonstrated otherwise.
Measuring token usage as a proxy for something beneficial to the company has got to be the single dumbest thing I have ever heard of in my entire software career.
It would be like some company in the dot-com era measuring employees' internet download traffic as a proxy for productivity or internet-pilledness.
Why not just reward employees based on who submits the largest expense claims? That might have some correlation to work too, right?!
Hell, I'm in the bowels of Google as an IC and it's hard to understand what adjacent teams are doing. Even harder for management that never gets their hands on anything.
So while you know engineers are probably bullshitting you with fake work, you can at least turn around and tell your supervisor the numbers. It's all a game of plausible deniability.
...except each keystroke has an associated cost, the sum of which may equal or exceed my salary.
Mass hysteria, perhaps?
There was a time when people died from dancing too much (as I understand it, and hey, I can be wrong; I usually am): https://en.wikipedia.org/wiki/Dancing_plague_of_1518
I think that although we like to consider ourselves smart and really intelligent, we run on biological machines and clocks that, evolutionarily, haven't changed much since 1518, or even since the times when we hunted and foraged, for that matter.
People use AI differently and they can be equally productive with a variety of token usage quantities.
Also, different kinds of work are differently amenable to using AI.
Using it to grade people is, err, rather unwise.
There should be an anti-leaderboard that highlights people under a threshold. Not trying to learn how to use AI while working at a company like Amazon is almost certainly a bad thing, and cause for looking into why.
Hell, throw a Tarot reading in the middle of the loop so the agent has non-deterministic behavior too.
https://github.com/trailofbits/skills/tree/main/plugins/let-...
Amazon management wants to play five-dimensional chess? Play Balatro instead.
It does not get any better than that
Jensen, Sam, Dario: https://i.imgur.com/AI7rtCY.jpeg
Is that in the contract to use AI tools? If not, then what are they on about?
Very very few jobs in the US give you a contract.
That said, if you can't figure out how to use AI in a software job you should look into it. Not using AI at this point is a lot like not using CAD as an architect.
They also use a bunch of dumb metrics, like total PRs submitted, total comments made on PRs, etc. To the point that there are multiple heavily used internal tools to game these metrics, e.g., auto-commenting LGTM on any approved PR. Thus making the metrics even worse than they would have been prior.
> Managers are discouraged from using token use to measure performance, according to a person familiar with the matter.
Like CAD and architects: if you're not using LLMs while coding it's an issue, but Amazon is very clear that this isn't an official metric. I would believe managers know how many tokens you're using, but it sounds like they just interviewed a disgruntled employee who didn't like AI and published it.
You're replying to an Amazon employee who says token metrics are being used in performance reviews, in a comment thread on an article where two other Amazon employees say that their token usage is being tracked and they feel pressure to maximize it.
Do you have first hand knowledge to refute these 3 people with first hand knowledge?
The CAD thing is incredibly weird. I've never known an architect who had their CAD usage minutes tracked.
Btw, I'm at a big tech company and I know many people who are "token maxing". It's very common.
Does CAD software regularly generate an incorrect design that results in a catastrophic failure of the building?
AI is genuinely useful for many tasks. But 2x-or-greater business value from engineering orgs isn’t it. And even if it were, businesses are terrible at measuring value added on an individual basis.
What they can measure though is token use. I’ve heard the same thing from other large companies my friends work for.
It’s bad enough that I’ve moved a significant amount of money out of US large-cap stocks.
You should have asked AI to come up with a better analogy.
No thanks I’ll just watch y’all slip down the slope.
When LLMs are capable of actually doing a good job, then it might be like that. We are not there yet, and we may never be.
"Wow, look at how fast employee # 2 is setting money on fire! Let's promote him!"
Heh. No need to be ashamed, I used to believe them when they lied to me like this too!