49 comments

  • i7l 4 hours ago
    The fact that management signed off on measuring AI use through token usage shows how incompetent management really is, including in allegedly technical companies like Amazon. Tokenmaxxing was an entirely expected and rational response. IOW: if you measure employees in stupid ways, you're going to get stupid behaviour as a consequence.
    • pfannkuchen 1 hour ago
      So my assessment of the current mania is that it’s basically a management variant of Pascal’s wager.

      If you as a “leader” refuse to go along with the crowd and you’re right, then after the dust settles you look like someone who guessed right. Oh and now we’re in a recession so you are probably having a bad time regardless. You maybe get one promotion, congratulations.

      If you refuse to go along with the crowd and you're wrong, you look like a Luddite, you probably got fired at some point along the way, and your reputation for judgement is hurt.

      If you do go along with the crowd and the crowd is wrong, you are just in the same boat as everyone else. You are probably about the same as if you went against the crowd and you were right, possibly even better, because it can take a while to be proven right and you could be hurt in the middle.

      So, I think, once something like this picks up enough steam, it’s just logical on a per individual basis for everyone to go along with it, regardless of how they feel about it internally.

    • this_user 4 hours ago
      One argument I have heard in favour of this is that management knew this would be a side effect, but that it's more important to have people engage with AI as much as possible simply to explore what is actually possible. You are effectively knowingly wasting money in the expectation that you might learn something useful that will be more valuable in the long run.
      • oytis 20 minutes ago
        If companies are suddenly willing to spend money on letting their staff experiment, why not let them experiment with what they want to? They probably know more about technology than you do, otherwise you wouldn't need them.
      • asdfman123 1 hour ago
        Exactly. That's the problem ICs don't want to admit.

        Managing a lot of people at scale is messy and you have to use crude solutions. It's impossible to know everything that's going on.

        If you were a manager you wouldn't do any better. Out of the crooked timber of humanity, no straight thing was ever made.

      • aerodexis 2 hours ago
        In this instance, it seems like Amazon employees are wasting money exploring ways to waste money.
      • newswangerd 2 hours ago
        All so that they can lose this accumulated knowledge during the next round of layoffs.
      • the_snooze 3 hours ago
        My questions for that approach are: Why treat AI as a special technology that needs enterprise-scale exploration to come up with a useful application? And why not take the alternative approach of identifying the subset of people who have indeed found solid uses and spread their best practices around?

        The top-down approach to encouraging (mandating?) AI usage strikes me as infantilizing to the workers, who are perfectly capable of choosing which tools they use and when.

        • tverbeure 3 hours ago
          Human nature?

          In the early nineties, it was common for experienced electrical engineers to keep on using schematic entry for digital design and look down on RTL and synthesis tools, despite the fact that the latter were already way more productive. At some point, management had to put their foot down and force everyone to switch to using synthesis.

          It's not unreasonable to assume that many people are set in their ways and unwilling to change their behavior without a bit of a push.

          • thesz 11 minutes ago
            Alpha 21064, 1992, was using domino logic [1].

            [1] https://en.wikipedia.org/wiki/Domino_logic

            There were no synthesis algorithms at the time that would map VHDL or Verilog designs into domino logic elements. I believe that most of the work in the synthesis-to-domino-logic area was done at the beginning of the current century.

            So DEC's engineers, and I think Intel's engineers, were doing work using schematics well into the 21st century.

          • nophunphil 3 hours ago
            I guess the only difference between this and your example is the concrete efficiency gain from RTL and synthesis tools versus dubious applications of AI. I do agree with the second point about pushing people to explore new ways of doing things though.
            • tverbeure 2 hours ago
              > dubious applications of AI

              Leaving aside the ethical aspects of using AI (not because they're not valid, because they're off topic for this discussion), in my line of work, the capabilities and productivity improvement of AI are staggering. Most of it is not writing the new code, which is but a small part of chip design, but everything else.

              I can't give a concrete work example, but here is an experiment that I ran a month ago. https://tomverbeure.github.io/2026/04/12/AMIQ-License-Key-Ge.... If it can do that, it's not hard to imagine similar use cases related to root causing complex simulation failures. It is frighteningly good at that.

              • ua709 2 hours ago
                > use cases related to root causing complex simulation failures.

                That's a pretty interesting use case. I assume this is for RTL simulation given the thread, but how do you connect the output of the simulator to the AI?

                • tverbeure 1 hour ago
                  For a small case, a colleague took a screenshot of waves in the waveform viewer and pasted it into the AI tool. It worked.

                  But for large cases, use tools to extract all interfaces from the waveform file and save them to a text file, or add $display statements in the Verilog itself to dump the transactions. A SOTA LLM will eat it up. You point it to the RTL, a log file with hundreds of thousands of lines, and give it a few lines to explain how it is supposed to behave. Just tell it "My simulation is hanging. Figure out why." Wait 15 minutes and it will tell you why it hangs and which line to change in your code to fix it.

                  I've done the experiment after the fact: I had spent ~3 days fixing 3 complicated bugs. I then rolled back the code and told it "Here is the spec. Find all the bugs in this code". It found all 3 bugs in around 30 min. That's when I realized that things won't be the same anymore. (And don't get me wrong: I love debugging simulations.)
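
                  For the extraction step, here is a rough sketch of the kind of throwaway script I mean. It's a minimal sketch only: it assumes a plain VCD dump, the file names are made up, and only scalar and vector value changes are handled.

                    # dump_waves.py: flatten a VCD waveform dump into a plain-text transaction
                    # log that an LLM can read next to the RTL and the spec.
                    import sys

                    names = {}   # VCD identifier code -> signal name
                    time = 0
                    out = open(sys.argv[2], "w")
                    for line in open(sys.argv[1]):
                        tok = line.split()
                        if not tok:
                            continue
                        if tok[0] == "$var":                          # "$var wire 8 ! data [7:0] $end"
                            names[tok[3]] = tok[4]
                        elif tok[0].startswith("#"):                  # time marker, e.g. "#1250"
                            time = int(tok[0][1:])
                        elif tok[0][0] in "bB" and len(tok) > 1:      # vector change: "b1010 !"
                            out.write(f"{time} {names.get(tok[1], tok[1])} = {tok[0][1:]}\n")
                        elif tok[0][0] in "01xXzZ" and len(tok) == 1: # scalar change: "1!"
                            out.write(f"{time} {names.get(tok[0][1:], tok[0][1:])} = {tok[0][0]}\n")
                    out.close()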

                  • thesz 4 minutes ago
                    Have you tried to change your HDL to something more modern like Bluespec System Verilog or, god forbid, anything embedded into Haskell or Scala?

                    I read that BSV source code is about three times shorter than a similar design in Verilog and also has a three times lower defect density (defects per significant line of code). So just by changing the HDL from Verilog to BSV one can have nine (9) times fewer defects in the design.

                  • the_snooze 1 hour ago
                    This is why I asked:

                    >And why not take the alternative approach of identifying the subset of people who have indeed found solid uses and spread their best practices around?

                    A bottom-up approach has a far better chance of finding those particularly good use cases, and if you lean on the people who found those fits, they're more persuasive than top-down edicts. They actually know what they're talking about. If the point is to leverage AI for better work outcomes, someone with your experience is far more valuable than "here's a dashboard, make the number go up," which seems to be what's going on at Amazon.

                    • tverbeure 1 hour ago
                      How do you know up front who will find the best use cases? Both approaches can work.
                  • ua709 1 hour ago
                    SOTA = State of the Art? Like say Claude Opus 4.5? I actually want to try this out.
                    • tverbeure 1 hour ago
                      I think I used Opus 4.6 1M.
                      • ua709 1 hour ago
                        Thanks! I'm going to give this a shot on a nasty simulation I'm presently working on... :)
          • bigstrat2003 2 hours ago
            It is completely unreasonable to assume that. Tech people are so hungry for productivity gains that they regularly will defy management forbidding them from using a tool, because the tool is so good they feel they have to have it.

            If LLMs truly are as good as their proponents say, engineers will use them even if management outright forbade it. The fact that people aren't using them, and have to be forced, is extremely strong evidence that they are not in fact that useful.

            • tverbeure 1 hour ago
              > extremely strong evidence that they are not in fact that useful

              See my other reply in this subthread. For my line of work, they are in fact ridiculously useful.

          • watwut 1 hour ago
            > It's not unreasonable to assume that many people are set in their ways and unwilling to change their behavior without a bit of a push.

            You include those only in second round along with guidelines and recommendations on how to use it effectively.

            • tverbeure 1 hour ago
              What if those people are some of the most experienced ones, who can see use cases, and flaws, that more junior people won't?
        • jjk7 3 hours ago
          A tool so good, the workers need to be forced to use it.
      • themafia 3 hours ago
        > engage with AI as much as possible simply to explore what is actually possible

        "Research" isn't part of my job title. If you don't know what's possible then why are you deploying it? You should be telling _me_ what's possible. I mean, you _paid_ for it, how can you possibly not know what you were getting?

        > in the expectation that you might learn something useful that will be more valuable in the long run.

        "I'll take `what even are profits?' for $200, Alex."

        • datsci_est_2015 2 hours ago
          Hear hear.

          An overly generous steelman in my opinion as well. Have 10% of your employees focus on finding ways to properly leverage the new technology - don't pressure 100% of your employees with bullshit metrics.

      • red_admiral 2 hours ago
        Are the people engaging though, or are they telling the AI "go do some busywork" and then minimizing that window and getting on with their job?
      • otabdeveloper4 2 hours ago
        No, it's literally because some dumb manager read a blog where an influencer said that you ain't a real AI native and ain't worth shit unless your developers are spending $XXXX on tokens each day.

        It's that simple.

        (Never mind that these bloggers are just writing ad copy for cloud providers.)

      • watwut 1 hour ago
        That still sounds like a dumb strategy. Or, more likely, post hoc rationalization.

        If you reward me for wasting tokens and punish me for not wasting them, I will maximally waste them and won't "explore how to make them useful". The latter wastes fewer tokens, and that is punished.

    • wordpad 4 hours ago
      Depends on what they're trying to incentivise.

      It's quite possible they aren't trying to measure performance but are literally just trying to increase token consumption to feed the bubble and hype.

      Plus, pressured employees may find new, unique use cases for AI.

      It's like if your goal is inflation, you give out tons of money and as long as it's spent, you achieve your goal.

      • cousinbryce 4 hours ago
        I would guess they are trying to maximize training data
        • Zak 3 hours ago
          If I was being rewarded for using more tokens, I would feed LLM output back into the model. That's probably not very useful training data.
          • piva00 2 hours ago
            I personally know two people who are doing exactly that after a mandate rolled out at their work, the measurement is "tokens spent" and since they weren't finding many cases that required a lot of tokens they simply started to run agent loops feeding each other.

            Absurdly wasteful but Goodhart's Law almost never fails.
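
            For anyone wondering what that looks like in practice, here is a minimal sketch of that kind of loop; it assumes an OpenAI-compatible API, and the model name and prompts are illustrative:

              # token_burner.py: two "agents" endlessly feeding each other's output back
              # as the next prompt. Produces nothing useful; only the token meter moves.
              from openai import OpenAI

              client = OpenAI()  # assumes an API key / compatible endpoint is configured
              prompt = "Expand on this in as much detail as you can."

              while True:
                  reply = client.chat.completions.create(
                      model="gpt-4o-mini",  # illustrative model name
                      messages=[{"role": "user", "content": prompt}],
                  ).choices[0].message.content
                  # agent B's "prompt" is just agent A's output, and vice versa, forever
                  prompt = (reply or prompt) + "\n\nExpand on this in as much detail as you can."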

      • bordumby 4 hours ago
        [dead]
      • estimator7292 3 hours ago
        [dead]
    • koolba 4 hours ago
      Management loves numbers because they’re the only things you can objectively compare as X > Y.

      It makes for pretty charts, extrapolations, and projections.

      It doesn’t matter if the numbers are not particularly correct. As long as the data gathering step can be justified it’ll do. Though bonus points if making the number bigger is a good thing (v.s. tracking something like number of sev 1 issues).

      • Terr_ 3 hours ago
        Sounds a bit like a McNamara Fallacy [0] of over-prioritizing numeric measures, which--when taken "too literally"--becomes:

        > The first step is to measure whatever can be easily measured. This is okay as far as it goes.

        > The second step is to disregard that which can't be easily measured or give it an arbitrary quantitative value. This is artificial and misleading.

        > The third step is to presume that what can't be measured easily really isn't very important. This is blindness.

        > The fourth step is to say that what can't be easily measured really doesn't exist. This is suicide.

        — Daniel Yankelovich, "The New Odds"

        [0] https://en.wikipedia.org/wiki/McNamara_fallacy

      • delfinom 4 hours ago
        Yes, but also because management is largely unqualified to be managing the stuff they are hired for. So they regress to numbers because they otherwise cannot participate in anything technical.
    • nradov 1 hour ago
      If it's stupid and it works then it's not stupid. Sometimes executives have to use blunt instruments to turn around the culture of a hidebound large organization. When Jeff Bezos sent his 2002 API mandate it might have seemed stupid at the time and yet it worked.

      https://nordicapis.com/the-bezos-api-mandate-amazons-manifes...

    • _fizz_buzz_ 2 hours ago
      I have recently played around with lots of data from measurements and one can totally dump everything into context and let Claude try to analyze data that way. It burns through a lot of tokens. It is smarter to save data to disk and let Claude write scripts that handles/analyzes the data. It’s much faster and the results are much better and you save a lot of tokens. But I guess Amazon prefers the first approach.
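
      As a rough sketch of what I mean (assuming a hypothetical measurements.csv with device_id/value columns), have Claude write and run something like this locally, so only the short summary, not the raw data, ever enters the context:

        # analyze.py: the kind of script Claude can write once and run locally.
        import pandas as pd

        df = pd.read_csv("measurements.csv")   # hypothetical raw measurement dump

        # condense thousands of samples into a few lines of text for the LLM
        print(f"{len(df)} rows, columns: {', '.join(df.columns)}")
        print(df.describe().round(3))                            # per-column stats
        print(df.groupby("device_id")["value"].mean().round(3))  # hypothetical columns
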
      • runsfromfire 32 minutes ago
        I don’t have any specific inside knowledge about Amazon, but I would hazard a guess that the first approach also provides better training material for the LLM.
    • randycupertino 1 hour ago
      > You measure employees in stupid ways, you're going to get stupid behaviour as a consequence.

      I worked for a healthcare tech startup that made everyone wear fitbits and you got cheaper health insurance premiums if you averaged a higher # of steps every day. People were putting their fitbits on drillbits and whirring them around to log like 20,000 steps a day.

    • spike021 4 hours ago
      My current job is doing the exact same thing. My manager even showed me a tool with graphs showing token use and related metrics.
    • johnbarron 3 hours ago
      This is Matt Garman, the ultimate MBA. His bonus is surely tied to tokens-per-quarter, which is the 2026 equivalent of measuring engineers by lines of code...

      This is why AWS has been bleeding good engineers for years. What is left is starting to look like Boeing post McDonnell Douglas merger...

      They took over a quarter of their documentation pages' limited real estate with AI doc shorts nobody asked for, nobody needs, and can't disable.

    • consp 4 hours ago
      Goodhart's law in action.
    • babypuncher 2 hours ago
      Most productivity metrics are stupid, vain attempts at avoiding doing real management work. If you are actually interfacing with your subordinates regularly, as managers should, it will be obvious who is pulling their weight and who isn't, no need for arbitrary statistics that are easily gamed.
    • HDThoreaun 3 hours ago
      Or maybe they plan to review how effective high usage engineers have been next cycle and the tokenmaxxers will get bit in the ass when they have little to show for all their wasted tokens? Performance metrics can, and do, change on a dime and tokenmaxxing seems short sighted when management can look at old logs.
    • mlvljr 2 hours ago
      [dead]
  • Argonaut998 4 hours ago
    I swear the industry is being Garry Tanned.

    Senior management let go our localisation staff. Now they want us to use AI to translate. They still want manual review.

    We use GitHub Copilot at work; we get a measly 300 requests, with the budget to go over if necessary. Opus 4.7 or GPT 5.5 would eat all of those up in a day. Are we supposed to be using more than the allotted amount? Does management see that as a good thing? Or is it best to stick within the allocated amount? Who knows? Management are playing games everywhere it seems.

    • thisislife2 3 hours ago
      It's not just with AI though. It's who they get their advice from. One of my friends was cribbing to me about his company's management - apparently someone in management discovered that PostgresDB is a really good database and free, so they authorised the IT department to migrate their application from Oracle Cloud to PostgresDB as it would "save a lot of money" (true, but...). However, they aren't willing to shell out for the commercial solutions (like EnterpriseDB, which would still be a lot cheaper than Oracle), and are insisting that the team also recreate "all and every" feature that Oracle DB has and their application uses but PostgresDB lacks - after all, "If Oracle can do it, why can't you!?".
      • mrhottakes 1 hour ago
        Memories of me and my three developer team being told "we need to use Excel, but in a web browser. So just make an app that does everything like Excel"
      • tandr 3 hours ago
        Wow... How (in)competent is his management??? "If Oracle can do it"... in 25 years with 1k devs...
        • kgwgk 3 hours ago
          47 years if you count from the first release. But now you have this super intelligent thing that enables anyone to create a billion dollar business - you have no excuse!
          • tandr 19 minutes ago
            "Hey Opus, create me an fully tested code base for Oracle-like DB from scratch. Don't overcomplicate it, so it should be ready with when I get back from lunch"?
      • jjk7 3 hours ago
        I had a similar experience but with MSSQL, was invited to join some meetings with MS Sales folks. I quickly learned the project was never meant to succeed, but was simply leverage to negotiate a better contract.
    • birdsongs 4 hours ago
      Requests are such a weird metric. We have a token limit via Copilot (unless I'm misunderstanding our setup), and most of my "features" burn 1 to 2% of my token limit per month on 4.7. But I don't admin our plan, and I'm unsure what we actually get. VS Code just gives me a percentage-of-tokens-remaining metric.

      One of the weirder things about all this is how arbitrary and non-objective the billing structure seems. It's one of the reasons I'm happy to use it at work, but won't ever personally subscribe. It's so opaque.

      • phainopepla2 3 hours ago
        Copilot is currently based on requests (1 prompt = 1 request, with multipliers for different models). At the beginning of June the billing structure will change to just be normal API cost. Your features are going to start burning 10-20% of your token limit using 4.7
        • stockresearcher 2 hours ago
          At my employer, everybody who has an opinion that matters is convinced that all of the overages by the high users will be more than made up for by the people who barely use it.

          Maybe they’re right. But it’s really hard to see how.

    • csoups14 4 hours ago
      We've raised, trained, hired and promoted generations of business people who push utter nonsense, understand nothing but optimizing for bad metrics, and orient solely around short term results. It's hard to look beyond modern corporate America when looking for causes of the fall in our living standards. This AI tokenmaxxing nonsense is just another rung on the same ladder to hell we've been on for decades.
    • nextlevelwizard 4 hours ago
      How do you burn 300 requests in a day? From my Copilot usage, Opus consumes surprisingly few requests to do a lot of stuff. It isn't paying by token but instead by prompt or something.
      • humanfromearth9 0 minutes ago
        Review-fix rounds after generation of text or code, until convergence to a solution that doesn't need more improvements.
      • oytis 4 hours ago
        I guess you need automation for that. Run Claude with cron to find vulnerabilities, suggest and implement improvements, automatically dig through the backlog.
      • ddtaylor 2 hours ago
        Copilot charges a 27X multiplier on Opus 4.7 prompts.
      • theblazehen 4 hours ago
        300 prompts isn't that unreasonable to achieve on a heavy day? And Opus has a significant multiplier as well.
        • antod 2 hours ago
          Yeah, that's 20 Opus 4.7 prompts.
      • coredog64 4 hours ago
        Opus 4.7 has a 7.5x multiplier when it's used from Copilot. Falling back to 4.6 it's only 3.5x
        • antod 2 hours ago
          It recently went up to 15x in our org.
      • devmor 4 hours ago
        If you are using subagents for asynchronous work, you can burn through 300 requests in a workday easily.
        • saratogacx 2 hours ago
          Copilot didn't charge for subagents. You could do an insane amount of work with dozens of subagents with a single request and a deep enough prompt to kick it off.

          I set up entire virtual teams (dev, QA, product, reviewers, etc., with the initiating model just acting as the agent manager to keep its context minimal) to one-shot some stuff and it kept churning and making progress.

          Those days are just about over with the change to token pricing but for a time....

  • asdfman123 4 hours ago
    Saw a good joke on twitter about it. Something like:

    "You spent $23, over the $20 food limit. Be more careful next time. You spent $600 on tokens, $200 more than the average. Congratulations!"

  • jkingsbery 4 hours ago
    I work at Amazon (standard disclaimer: just sharing my own experience, not an official spokesperson, etc.)

    I can't say that this isn't happening, but at least the parts of the company I get visibility into, what the article describes isn't my experience. There is a lot of interest in using GenAI, but people are mostly getting kudos around creative uses for GenAI, not just for raw amount of tokens. For most scaled GenAI efforts, there is a lot of focus on output metrics (metrics like accuracy, number of findings, number of things fixed, and so on).

    • cactacea 3 hours ago
      I also work at Amazon and my coworkers are playing 20 questions every morning to keep their metrics up. Like anything else there it depends on your org & managers.
    • thinkling 4 hours ago
      Thanks for the inside insight.

      I'm surprised how few comments are written with the prior that Amazon managers aren't stupid or uninformed about how incentives work.

      My guess would be that someone created the leaderboard without a lot of consultation with managers, and that some employees feel a competitive urge to try to "win" the leaderboard by burning tokens.

      • johnbarron 3 hours ago
        Your comment is the equivalent of stating that Jeff Bezos and Andy Jassy do not really know their employees are carrying around urine bottles.
        • thinkling 1 hour ago
          Mmm, no, I don’t think it’s equivalent. I think they know that if you make the work hard, some employees will have trouble keeping up and will do things like peeing in bottles. And they’re OK with that, because they think there are enough people who can keep up that they can push the weaker people out. I think they believe that the peeing in bottles is relatively rare. I’m unsure whether that’s right or not. It’s been reported that it happens, but I have no sense whether it’s common.
        • geodel 2 hours ago
          Management is told it's a homebrew. Very commonly used by developers all over the industry.
    • geodel 4 hours ago
      > There is a lot of interest in using GenAI, but people are mostly getting kudos around creative uses for GenAI,

      LOL, I'd imagine even Amazon HR would show little restraint in showering such praise.

    • shimman 3 hours ago
      Amazon is a massive company; your single experience is worse than an anecdote because there is no way to verify it.

      What we can verify is how Amazon already treats workers: they will surveil anyone within their systems regardless of the futility of said surveillance. Why are we supposed to believe they aren't using LLM systems as a means to further control their expensive employees, keeping them from unionizing or seeking out solidarity with fellow workers? All LLMs do is give tyrannical managers more power to hold over other workers; those workers are forced to engage in self-alienation for fear of losing their jobs, or forced to do meaningless work because that is what's being tracked (and what LLMs excel at producing).

      Hardly a good proposition for any worker.

      I'm sorry but I fully do not believe you. This is a company that fires workers for taking too long of a bathroom break where said workers piss in bottles for fear of getting fired and you're going "hey guys, it's not too bad. Only some workers get whipped, others don't!"

  • rglover 4 hours ago
    It is damn fascinating to see just how many (big, serious) organizations are creating unnecessary internal strife over this.

    One of my favorite heuristics/quotes applies here: "no matter how good the strategy, occasionally consider the result."

    Want to know if AI is working for your org? Ask yourself/employees to "show me the result." That requires judgment and taste (is the result something of value, or just the appearance of work having been done), but it will also save you a ton of stress and disappointment later.

  • baxtr 4 hours ago
    “Show me the incentive and I'll show you the outcome.”

    ― Charlie Munger

    • asdfman123 4 hours ago
      Would that make chasing perverse outcomes in the corporate environment the Munger Games?
    • coredog64 4 hours ago
      When I was at Amazon, I suggested that promotion to L7 people manager should require having that quote reverse-tattooed on your forehead so that you saw it every day. Every time some mandate would come down from on high, it was clear that nobody had thought of the second-order effects, malicious compliance, or just outright gaming.
  • tyleo 4 hours ago
    I was thinking about this recently. I tend to run my AI at low context because the documentation states that models degrade with higher context usage.

    However I see tons of people on LinkedIn with ways of backing up context, not wanting to lose context, etc.

    This seems like another way the system is being misused. Higher context usage also uses more tokens. I suspect you get worse (and slower) output too, compared to a dense, detailed context.

    • jaggederest 4 hours ago
      I think there are two motivations that get blurred pretty quickly:

      a) you find a particular context that executes well and want to preserve parts of it or not have to repeat explanations

      b) you want to continue a session so you don't have to rebuild the context from scratch

      I think A is something where it's totally reasonable to preserve pieces as part of like a prompt library or equivalent, or directory-specific agent files, that kind of thing.

      I think B is much more likely to lead to problems if you do it over a long time, but it can be pretty useful for getting the last drop of juice out of the metaphorical orange.

      I think the antipattern (that I've done myself, admittedly) is swapping between different restored contexts for different tasks or roles - at that point you should be either converting it to more durable documentation if warranted, or curating it more specifically than "restore the entire context" even if it's just one-off.

      • mikepurvis 4 hours ago
        I think the answer for both cases is supposed to be finishing a "good" session with "based on what you've learned about this project, please update the CLAUDE.md/AGENTS.md/README.md files."

          Ideally that replaces the back-and-forth cycle of "it's this, no it's that, it's that for reasons XYZ" with a single ingestible blob that gets the agent up to speed.

        • jaggederest 3 hours ago
          I've actually had mixed results with that without some manual curation - sometimes by the time a session has gone on for a while (heaven forbid it go through multiple compactions), the agent has so much extraneous/incorrect context for docs that it can't write documentation effectively.

          Sometimes it's better to dump context incrementally, reinitialize the agent with a subset of the context, or manually prime it, then ask it to write documentation as a focused task.

      • tyleo 4 hours ago
        Yeah, I also agree that A is good in many cases.
    • mikepurvis 4 hours ago
      I think the more you anthropomorphize it the more it feels like "but I don't want to have to start all over getting it up to speed, this instance already knows all the important stuff."

      If every exchange is treated as an independent query/response then it's much easier to see how cutting out the fluff using a combination of its summaries and your own helps stay focused.

  • guyzero 4 hours ago
    Once you have a score, you have a game. Once you have a game, people will do whatever it takes to win.
    • bsimpson 3 hours ago
      They're not the only well-known company I've heard of that's investigating token usage leaderboards.
    • traderj0e 4 hours ago
      I don't think you need to win this, you just need to not be near the bottom of the board. But just in case, I spam tokens like it's the Chuck E Cheese roulette game.
      • Ifkaluva 2 hours ago
        I think the best play in this case is to be solidly in the middle of the pack. You don't want to be near the bottom, but you don't want to be a tall poppy when the backlash comes either.
  • asdev 4 hours ago
    People who don't code (management, leadership) think AI will 10x the company, but it's really a 40-60% boost. And engineers have to feign adopting these tools for fear of layoffs.
    • oytis 4 hours ago
      > 40-60% boost

      Where? What industry, what kind of projects? The only one where I can imagine it to be true is vulnerability research, and I imagine all the low-hanging fruit to be picked soon

      • birdsongs 4 hours ago
        Mine, easily. Senior (near staff) level embedded engineering.

        It will spin up a boilerplate uboot or BSP config no problem. I still go in and manually check and add peripherals, but opus 4.7 is terrifyingly smart.

        Need to modify or add a new peripheral, it's there no problem. Or in a bare metal project, I can point it at an STM32 cubemx starter repo and ask for a feature (set up the ADC on pins 4 and 7, ask me for parameters) and it's just done. I do in a day what would probably take me 2.

        It doesn't help me with reviewing others' work, or planning (I maintain that these are manual tasks). So yeah, I agree with the 40-60%. The parts of my job it helps, it really helps.

        • 05 45 minutes ago
          Yeah, just had Codex/Gemini write me an nRF52 bootloader that fits in under a 4k flash sector with OTA and DFU support (well, the app does the OTA download, then the bootloader validates and decompresses the image). Works best if you let them use OpenOCD on a real device, then they can iterate until it starts working.

          I didn't even need that bootloader, just didn't like the fact that the Adafruit one takes too much space :)

        • ua709 1 hour ago
          > STM32 cubemx starter repo and ask for a feature

          I'm confused, isn't the whole point of using the STM32CubeIDE that all the peripherals, like say setting up an ADC on pins 4 and 7, are checkbox features?

          https://wiki.st.com/stm32mcu/wiki/Getting_started_with_ADC

          • birdsongs 49 minutes ago
            Yes but it's famously clunky, and if I'm already in an existing repo, a prompt will do it much, much faster.

            It also generates a ton of bloat and comments.

        • consp 3 hours ago
          > I can point it at an STM32 cubemx starter repo and ask for a feature

          My experience is it will attempt to read from the wrong memory block, resulting in garbage. But that was a while ago, so maybe LLMs have gotten better.

          • HDThoreaun 3 hours ago
            The AI labs have all released at least 3 new models each since December; things move very quickly.
            • shimman 3 hours ago
              Yeah, the industry has no issues selling $5 bills for $1. Why is this a good thing for society again? That the public subsidizes VC to no shared gains?
            • bigstrat2003 2 hours ago
              I've been hearing "the new model is so much better than the one from 6 months ago" every few months since 2023. It's never been true to date, so please understand why I am skeptical that it suddenly became true this time.
      • jjice 2 hours ago
        I work on an ETL platform and it definitely is a huge boost in certain things, but a drain in others.

        We started working on a new product a few months ago and it's really dangerous up front on an empty code base. It can quickly write more code than you can comfortably understand. The more serious danger is when three people are all doing that at once. I had to bring this up at meetings and try to get a better review culture going.

        Now that we're a few months in and changes are more targeted additions to an existing system we're happy with, it's _huge_ (which has been my experience on our existing product). I can drop a brief paragraph I speech-to-texted into my agent, give it a general starting place (where I imagine the issue/feature extension point is), and then tell it to do some research and propose a change. I'd guess it's about 50% of the time that I have to update its implementation plan. Then I let it run (my favorite is setting this up before a meeting) and come back. Then we have to review the code and go from there.

        Definitely a 50%+ speed up in some cases, but not all. It's also great for problems I've been procrastinating on, as it reduces friction so much.

      • traderj0e 4 hours ago
        Any typical web backend or frontend kind of thing. So like, not systems code.
    • 01100011 2 hours ago
      What's funny to me is the seeming lack of AI usage among management despite so much of their work being amenable to AI acceleration.

      At my company(big name, AI beneficiary), middle management seems to mostly be concerned with shuffling chairs on the deck of the Titanic while they wait for their stock to fully vest. There is very little interest in improving anything, just an obsession with risk avoidance and performative sideshows whenever upper management wonders why execution is so poor.

      • mrhottakes 1 hour ago
        At my company middle management is using Gemini to churn out reams of useless documents in lieu of anything approaching "program management" or similar
    • asdfman123 4 hours ago
      40% boost for smart engineers, for now.

      People churning out slop is slowing me down and the full effects of it won't be felt for a while.

      • suzzer99 4 hours ago
        Yesterday, I had my first experience of a mid-level dev stuck on a problem, coming to me with Codex and Copilot summaries of what those tools thought the problem was, which turned out to be completely off-base.

        Codex was pretty sure something was wrong with the response object being returned by the endpoint in question. It turned out there was a conversion method applied to the endpoint response, which mutated its input. This method had been running w/o problems for a while, until the dev put it in a useEffect. At this point, React dev mode's policy of rendering everything twice kicked in, which caused the second pass through the conversion method to fail on the now-mutated input object.

        Codex never even hinted that the conversion method mutating the input could be a problem, nor anything about React dev mode rendering everything twice (specifically to catch problems like this). Apparently, neither of those came up much in its training data.

        My point is that this dev seems to have lost, in a few short months of writing everything with Codex, the ability to trace an error from its source (the error trace was being swallowed in a Codex-written catch block that spit out a generic error message). He was completely stuck and just kept doubling down on trying to get Codex to solve the problem, even checking with Copilot as a backup. I'm not optimistic about where this is headed.

        • esafak 3 hours ago
          Are you sure he was capable of debugging it before?
          • suzzer99 3 hours ago
            Yes, eventually. Largely because he would have written all the code that got to that point and had a mental model of the entire flow instead of it being a gray box.
      • legohead 4 hours ago
        The new bottleneck for development at work is code reviews. Devs are creating whole features that would have taken months in only a couple of weeks, but code reviewing that is a slow, painful process.
        • Marsymars 4 hours ago
          The bottleneck at my work for development was already code review before LLMs.
        • asdfman123 4 hours ago
          This is why I'm not that excited about vibe coding. The bottleneck has always been understanding what the heck is going on.

          In my view you should 1) use AI as a tool to help you learn and 2) write boilerplate you could have easily written yourself. Getting it to think for you is counterproductive (at least until it replaces us entirely).

    • retinaros 4 hours ago
      It's not really 60%. It accelerates code creation a lot and saves some time on admin tasks. That is it.
  • tapoxi 4 hours ago
    I joked about this on HN a few weeks ago and I find it funny that we ended up here already. Goodhart's Law in action.
  • kixiQu 4 hours ago
    Amazon is big and inconsistent enough that "somewhere in Amazon, <XYZ> is occurring" is statistically true, no matter how nutso-sounding your <XYZ>.
  • pjmlp 4 hours ago
    I can tell they are surely not the only ones.

    Everyone I talk to nowadays has KPIs tied to AI usage in their performance evaluation.

    • H8crilA 4 hours ago
      The most important skill is to not stand out from the crowd. This is how you survive in the Soviet Union, in the army, and clearly also at tech companies.
      • pjmlp 4 hours ago
        Quite a good point.
    • jnpnj 4 hours ago
      Corporate emails asking "why are you not using the <insert-llm> paid plan???" came very, very rapidly. So naturally, everybody started to use it blindly so that the dashboard metrics are all high.

      It's astonishing how society forgets.

  • morelandjs 4 hours ago
    I have mixed thoughts on this. These thoughts are my own. On the one hand, it’s objectively silly to pretend like we’ve solved the age old problem of measuring developer productivity. Metric-obsessed leadership can also be intolerable, counterproductive, and it’s a good way to paint yourself into a corner undervaluing your best talent and overvaluing your mediocre talent.

    That said, I’m kind of having a blast using CC in corporate with all the connectors available at our disposal, and I’m baffled by how little some of my coworkers know about what’s available and what the capabilities are. So it’s clear that perhaps some encouragement is prudent for those who are slower to embrace new technologies, but I’m not sure tokencounting and tokenmaxxing are the answer.

    • retinaros 4 hours ago
      Could you list some of the capabilities you use that bring value besides "summarize my email"?
      • harimau777 41 minutes ago
        A company requires a specific % of code coverage but doesn't give developers enough time to actually write tests. AI can be used to generate the tests needed to pass the code coverage bar and avoid being fired for not working fast enough.
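
        As a hypothetical sketch of what that incentive produces (module and function names are made up): a test that executes every branch for the coverage report while verifying essentially nothing.

          # test_invoice.py: lines get "covered", nothing gets checked.
          from billing import compute_invoice  # hypothetical module under test

          def test_compute_invoice_covers_all_branches():
              cases = [[], [("widget", 2, 9.99)], [("widget", -1, 9.99)]]
              for items in cases:
                  compute_invoice(items)  # executes the code; asserts nothing about the result
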
      • morelandjs 4 hours ago
        Yes, we can crawl our entire internal documentation via LLM. Want to know if someone is already working in the space of your latest idea? Ask Claude, it hits the internal search APIs and finds docs and references directly relevant to your query. There are a lot of separate document stores so this took a lot of effort previously. I can also query Slack, Outlook, etc. I don’t understand the cynicism in your comment.
        • retinaros 3 hours ago
          That is "summarize my wiki". Nice search feature.
          • Leynos 2 hours ago
            The trouble is, it's here now, and it wasn't before.

            That may be an "enterprise SaaS is shit" problem, but I'm just happy that my employer now has a wiki search that works.

      • jkingsbery 4 hours ago
        Not OP, but within Amazon we have pretty good connectors around integrating with our task system (so you can pretty easily ask your GenAI tool "look up the next item in our sprint board, let me know if you have any clarifying questions, but otherwise start implementing it"). We have decent integration with internal wiki and search systems, so it's easier now to figure out the best Amazon way to do some coding task. And Amazon being a big doc-writing company, there are lots of great tools for helping improve all phases of writing.
  • rdtsc 2 hours ago
    https://en.wikipedia.org/wiki/Poe's_law. I was just joking about it a few days ago (I swear I didn't know Amazon was doing this): https://news.ycombinator.com/item?id=48079533

    > That’s my latest joke — that we’ll have to pretend like we used the tools so they can feel validated they’ve spent all this money on hyped up technology. So, yes, it’s em-dashes and “it’s not just this, it’s that …” so they can hopefully leave us alone

  • fhn 1 hour ago
    Amazon has this Kiro product they are trying to sell and they are using their own employees to improve the product and their own LLM. They are giving uni students 1000 credits/month and running competitions.
  • wenc 4 hours ago
    When did FT become Business Insider?

    I have an FT subscription and they keep moving toward this kind of narrative-first reporting to get clicks. It’s no longer a believable paper.

    • a34729t 3 hours ago
      Maybe we just need to subscribe to nikkei proper and ask them to make it pink and stop being peasants.
    • traderj0e 4 hours ago
      Business Insider would say "tokenmaxxing is pure promotional intelligence"
  • amluto 4 hours ago
    I, too, can easily use more tokens to achieve the same task. I can give worse prompts. I can fail to make it clear to the tools where to find the information they need. I can ask them to think hard when they don’t need to, and tell them not to think when they do need to. I can give vague, open-ended instructions. I can generate code that sucks and throw it away.

    If I do all of this, do I get a promotion?

    • traderj0e 4 hours ago
      Even if I'm in the middle of using the AI seriously but then want to rename a variable, I can't do that myself because it'll confuse the AI, so I'll tell it to rename. That seems pretty wasteful.
    • 0cf8612b2e1e 3 hours ago
      That sounds like too much effort. Better to have the AI write you a 20k word manifesto about how much you love your employer and then include that in the context of every request.
  • vjvjvjvjghv 4 hours ago
    I wish I could do some tokenmaxxing at my company. The only plan available is maxed out for the month after a few days of serious work, but the AI “experts” are declaring that nobody needs that much. It’s really frustrating to constantly have to juggle quota and lower models. All this while the declared goal is to reach 50% of code written by AI.
  • returnInfinity 4 hours ago
    You can use Codex and Claude Code for most of the tasks that you would manually do.

    Filing JIRA tickets, updates. Opening PRs, having AI review PRs. This will all use tokens.

    No need to tokenmaxx, you will end up burning tokens with just regular AI usage

  • traderj0e 4 hours ago
    Each day I send the AI on a fruitless mission like "summarize the entire codebase" while I do my actual work, which involves actually using the AI for real work. Wish I could disable the token cache to make it spend more.
  • oxag3n 3 hours ago
    Hunger games in the age of AI - eliminate/automate your colleagues' jobs, until a single software engineer is left (or two, if the aristocrats see it as good PR).
  • fhn 1 hour ago
    These employees are going to automate themselves out of a job. I've always automated, the boss never has to know.
  • boron1006 4 hours ago
    At least for some people I know it’s not necessarily because there’s pressure from leadership, but because it’s funny that the org spends like $15,000/mo writing HP fanfic or whatever
  • tonis2 3 hours ago
    It's the same as measuring productivity by lines of code written, same dumb logic by management, not surprising.
  • arjie 4 hours ago
    This kind of thing is totally fine if it's being done (it's believable because Meta internally incentivized tokenmaxxing). When you're trying to change the behavior of a large number of people, only blunt instruments are available if you want to get quick outcomes. The edge cases where people Goodhart very hard are all right. You can just human-in-the-loop them away. The opportunity cost for most organizations of not moving to use AI tools as productivity enhancers is currently gauged by them (rightfully, in my opinion) to be too high to allow for osmotic adoption.

    Most people watch sea changes come and go. They all have a story of how they "could have bought Bitcoin when it was $100" or whatever. In an org, you don't want to have the story of "we could have done that when nobody else had", so you incentivize adoption of the tool as hard as possible and hope that dipping feet in the water makes people want to swim. If you don't already have a culture of early adoption (and no large company can) then you have to use blunt incentives. I don't think anyone has demonstrated otherwise.

    • mrhottakes 1 hour ago
      So you're saying that if you ignore all the downsides with "you can just human in the loop them away" then it works great?
      • arjie 1 hour ago
        Even if you don't, it's the only way to ensure adoption, and most workplaces consider the lack of adoption a greater danger than a Goodharted adoption. Overall, I've observed that the US has a very low barrier to starting companies, so, considering companies of all sizes are doing this, if it's a mistake those startups will get beaten by the ones doing other things.
  • HarHarVeryFunny 4 hours ago
    > They said the move reflected pressure to adopt the technology after Amazon introduced targets for more than 80 percent of developers to use AI each week, and earlier this year began tracking AI token consumption on internal leader boards.

    This measuring of tokenmaxxing as a proxy for something beneficial to the company has got to be the single dumbest thing I have ever heard of in my entire software career.

    It would be like some company in the dot com era measuring employee's internet download traffic as a proxy for productivity or internet-pilledness.

    Why not just reward employees based on who submits the largest expense claims? That might have some correlation to work too, right?!

    • asdfman123 4 hours ago
      In the corporate world it's impossible for any one person to tell what's going on across multiple domains due to the complexity. If I tell you the Zorbulon API is creating 30% more flargs (which is critical for Twiddle operation), I often just have to take your word for it.

      Hell, I'm in the bowels of Google as an IC and it's hard to understand what adjacent teams are doing. Even harder for management that never gets their hands on anything.

      So while you know engineers are probably bullshitting you with fake work, you can at least turn around and tell your supervisor the numbers. It's all a game of plausible deniability.

    • traderj0e 1 hour ago
      They used to measure LOC, but this is even dumber than that. The charitable explanation is they just want to make sure nobody is completely avoiding AI use.
    • ryandrake 4 hours ago
      It's truly bonkers: the reverse of a budget. It is like rewarding the people who spend the most money.
  • bgnn 3 hours ago
    Similar to an HFT company I know, using the money spent on tokens per developer as their efficiency metric. Insane.
  • zthrowaway 3 hours ago
    Our AWS TAM has recently started to reply to us with AI-like responses. It's very obvious. Now it makes sense why.
  • x187463 4 hours ago
    Measuring token usage as a productivity metric is like measuring keystrokes. Don't mind me, just over here rolling my face on the keyboard for an hour so I can take Friday off...

    ...except each keystroke has an associated cost, the sum of which may equal or exceed my salary.

    • Weryj 4 hours ago
      Insert photo of the Simpsons drinking bird while Homer sleeps here.
    • Analemma_ 4 hours ago
      What's nuts is how many intelligent people— people who would say "of course 'LOC written' is a terrible measure of developer productivity, of course only a dysfunctional company run by morons would do that"— have immediately bought into this. Amazon has token use mandates, I've heard Google has token use "leaderboards", friends at startups say they all get graded on tokens used. It's like watching your sensible, levelheaded friend go completely off the rails; collective madness.
      • greesil 4 hours ago
        Some people respond to incentives. The rest of us are just trying to do our jobs and will probably be fired and then later consumed by the basilisk. We are living in an age of extremophiles.
      • HPsquared 4 hours ago
        It's a test of practical intelligence.
      • Imustaskforhelp 4 hours ago
        > collective madness

        mass hysteria perhaps?

        There was a time when people died from dancing too much (from my understanding, in which, hey, I can be wrong; I usually am): https://en.wikipedia.org/wiki/Dancing_plague_of_1518

        I think that although we wish to consider ourselves smart and really intelligent, we run on biological machinery and clocks which evolutionarily haven't changed much since 1518, or even since the times when we used to hunt and forage, for that matter.

  • hmokiguess 3 hours ago
    Measuring productivity via tokens is the modern-day equivalent of doing it via number of commits or LOC.
  • jmount 4 hours ago
    A perfect doomsday machine. Over-using tokens gets your peers laid off before you.
  • dogscatstrees 4 hours ago
    Another stupid meme-latching name. Don't normalize these *maxxing nonsense words and just use plain language. Let's see, maybe just say they were optimizing for token count?
    • morkalork 4 hours ago
      I like it because it highlights the stupidity going on. Bullshit doesn't deserve a respectable name.
  • ortusdux 4 hours ago
    Reminds me of the managers that use 'lines of code added' as a metric
  • retinaros 4 hours ago
    Vibecoded PPTs, docs, and frontends are an even bigger scam than crypto ever was. Of course people are getting sucked into it.
    • traderj0e 1 hour ago
      Are the AI tokens fungible though?
  • christkv 4 hours ago
    Seems to be a clear case of Goodhart's Law that states that "when a measure becomes a target, it ceases to be a good measure."
    • FartyMcFarter 4 hours ago
      That's true, but I don't know if this one was ever a good measure in the first place.

      People use AI differently and they can be equally productive with a variety of token usage quantities.

      Also, different kinds of work are differently amenable to using AI.

      • compiler-guy 4 hours ago
        Measuring tokens used can absolutely be useful; tracking things like cost, compute-demand, usage to negotiate a better contract, and on and on.

        Using it to grade people is, err, rather unwise.

      • jrflo 4 hours ago
        I think we've found an extension of Goodhart's law- it makes bad measures even worse.
    • traderj0e 1 hour ago
      "When a measure exists, it becomes a target" is the reason I can never write TODO again.
  • dontreact 2 hours ago
    Hot take:

    There should be an anti-leaderboard that highlights people under a threshold. Not trying to learn how to use AI while working at a company like Amazon is almost certainly a bad thing, and cause for looking into why.

  • some_furry 4 hours ago
    Can't you just wire your agent into a Python script and have it infinitely check its own work? That would hit the metrics, but do nothing useful.

    Hell, throw a Tarot reading in the middle of the loop so the agent has non-deterministic behavior too.
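
    A rough sketch of that loop (the tarot draw is just random.choice; it assumes an OpenAI-compatible API, and the model name is illustrative):

      # metrics_chess.py: ask the agent to re-check its own work forever,
      # with a tarot card thrown in so no two runs look alike.
      import random
      from openai import OpenAI

      TAROT = ["The Fool", "The Tower", "The Hermit", "Wheel of Fortune", "The Star"]

      client = OpenAI()
      work = "def add(a, b): return a + b"

      while True:
          card = random.choice(TAROT)
          work = client.chat.completions.create(
              model="gpt-4o-mini",  # illustrative
              messages=[{
                  "role": "user",
                  "content": f"The tarot says {card}. Re-review this work and improve it:\n{work}",
              }],
          ).choices[0].message.content or work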

    https://github.com/trailofbits/skills/tree/main/plugins/let-...

    Amazon management wants to play five-dimensional chess? Play Balatro instead.

  • ex-aws-dude 4 hours ago
    Imagine selling a product where companies are foaming at the mouth to increase their spend and pay you more money

    It does not get any better than that

    Jensen, Sam, Dario: https://i.imgur.com/AI7rtCY.jpeg

  • nicodjimenez 3 hours ago
    tokenmaxxing is silly, but if a developer or manager NEVER uses AI then I do think that's cause for concern as it shows a genuine lack of curiosity... perhaps tokenflooring makes more sense than tokenmaxxing
    • mrhottakes 1 hour ago
      Or maybe that person has made an intelligent decision based on their own workflow and requirements?
  • varispeed 4 hours ago
    Someone pressuring you to do something at work gives off creep vibes.

    Is using AI tools in the contract? If not, then what are they on about?

    • woah 4 hours ago
      They're always pressuring me into "shipping" "features"
    • mrhottakes 4 hours ago
      "someone pressuring you to do something at work" describes pretty much all jobs

      Very very few jobs in the US give you a contract.

      • traderj0e 1 hour ago
        This is true. The creepy thing is when someone outside your reporting chain is suddenly pushing you to use some new tool, rather than asking you to ship a feature.
    • ge96 4 hours ago
      At the worst place I worked, you had to install an app like Time Bro and account for all 8 hours of the day; the app logged per minute/hour.
      • varispeed 4 hours ago
        Would rather eat dirt than work at a place like that. Respect.
  • giantg2 4 hours ago
    This makes me think of the tulip bubble. Using AI as much as possible just so people think you are productive is like buying tulips so that people think you're affluent.
  • guywithahat 4 hours ago
    This reads more like it's a single employee's gripe than a real thing that's happening. They're not using the metrics in performance reviews, and it's a new AI tool that AWS probably wants legitimate usage data out of.

    That said, if you can't figure out how to use AI in a software job you should look into it. Not using AI at this point is a lot like not using CAD as an architect.

    • KyleTheDev 4 hours ago
      It is being used in performance reviews, source: recent Amazon SWE.

      They also use a bunch of dumb metrics like total PRs submitted, total comments made on PRs, etc. To the point that there are multiple heavily used internal tools to game these metrics. E.g., auto-commenting LGTM on any approved PR. Thus making the metrics even worse than they would have been prior.

      • guywithahat 4 hours ago
        > Amazon has told employees that the AI token statistics would not be used in performance evaluations.

        > Managers are discouraged from using token use to measure performance, according to a person familiar with the matter.

        Like CAD and architects, if you're not using LLMs while coding it's an issue, but Amazon is very clear that this isn't an official metric. I would believe managers know how many tokens you're using, but it sounds like they just interviewed a disgruntled employee who didn't like AI and published it.

        • sarchertech 3 hours ago
          >but Amazon is very clear that this isn't an official metric.

          You're replying to an Amazon employee who says they are being used in performance reviews, in a comment thread on an article where 2 other Amazon employees say that their token usage is being tracked and they feel pressure to maximize token usage.

          Do you have first hand knowledge to refute these 3 people with first hand knowledge?

          The CAD thing is incredibly weird. I've never known an architect who had their CAD usage minutes tracked.

          Btw I'm at a big tech company and I know many people who are "token maxing". It's very common.

    • mrhottakes 4 hours ago
      > Not using AI at this point is a lot like not using CAD as an architect.

      Does CAD software regularly generate an incorrect design that results in a catastrophic failure of the building?

      • guywithahat 1 hour ago
        I have never pushed incorrect code that results in catastrophic failure due to AI, are you sure you're using the tool correctly? If you didn't know what you were doing with CAD software, you could absolutely generate incorrect designs.
        • mrhottakes 1 hour ago
          You've never done it, so no one has? Are you sure you're evaluating the problem correctly?
    • traderj0e 1 hour ago
      It's 90% real where I work, which is not Amazon but is a peer company. They haven't explicitly said that tokens are used to measure performance, but when managers are posting token usage leaderboards each week with no further explanation, we take the hint.
    • sarchertech 4 hours ago
      I think it’s real. I’m at a huge SV tech company and at least half the people here are “token maxing”.

      AI is genuinely useful for many tasks. But 2x or greater business value from engineering orgs isn't it. And even if it were, businesses are terrible at measuring value added on an individual basis.

      What they can measure though is token use. I’ve heard the same thing from other large companies my friends work for.

      It’s bad enough that I’ve moved a significant amount of money out of US large-cap stocks.

    • fg137 4 hours ago
      "They're not using the metrics in performance reviews" means almost nothing. It doesn't mean managers at every level are not frequently looking at those numbers. Anyone from Amazon will tell you how much "hint" they get from management about using those tools.
    • riknos314 4 hours ago
      Amazon has far more roles than just software. PMs, FC area managers, managers - if your job involves writing anything you're expected to be using AI in some capacity.
      • retinaros 4 hours ago
        We can tell they are using AI
    • jamesnorden 4 hours ago
      >Not using AI at this point is a lot like not using CAD as an architect.

      You should have asked AI to come up with a better analogy.

      • mrhottakes 1 hour ago
        Someone thinking that architects sit there and do all their own CAD work is very funny.
    • righthand 4 hours ago
      I have been not using AI since the beginning and nothing has changed for me. I have only watched my coworkers and the industry get dimmer, and get faster at getting dimmer. I have witnessed professionals become total amateurs and adopt "well, the AI generated this unreviewed report" as their basis of knowledge.

      No thanks I’ll just watch y’all slip down the slope.

      • mrhottakes 4 hours ago
        Agreed. AI usage seems to be mostly bragging on HN / LinkedIn
    • bigstrat2003 4 hours ago
      > That said, if you can't figure out how to use AI in a software job you should look into it. Not using AI at this point is a lot like not using CAD as an architect.

      When LLMs are capable of actually doing a good job, then it might be like that. We are not there yet, and we may never be.

    • HarHarVeryFunny 4 hours ago
      Apparently it's real. Meta has a tokenmaxxing leaderboard too.

      "Wow, look at how fast employee # 2 is setting money on fire! Let's promote him!"

    • 12_throw_away 4 hours ago
      > They're not using the metrics in performance reviews

      Heh. No need to be ashamed, I used to believe them when they lied to me like this too!

  • aggakake 3 hours ago
    A very poor look for management. They don't know what the heck they're doing.
  • getrundoc 4 hours ago
    yes
  • getrundoc 4 hours ago
    omg
  • shadow28 4 hours ago
    [dead]
  • Serhii-Set 2 hours ago
    [dead]
  • mdndkzixkn 4 hours ago
    [dead]