> Code reviewing coworkers are rapidly losing their minds as they come to the crushing realization that they are now the first layer of quality control instead of one of the last. Asked to review; forced to pick apart. Calling out freshly added functions that are never called, hallucinated library additions, and obvious runtime or compilation errors. All while the author—who clearly only skimmed their “own” code—is taking no responsibility, going “whoopsie, Claude wrote that. Silly AI, ha-ha.”
LLMs have made Brandolini's law ("The amount of energy needed to refute bullshit is an order of magnitude larger than to produce it") look like an understatement. When an inexperienced or just inexpert developer can generate thousands of lines of code in minutes, the responsibility for keeping a system correct & sane gets offloaded to the reviewers who still know how to reason with human intelligence.
As a litmus test, look at a PR's added/removed LoC delta. LLM-written ones are almost entirely additive, whereas good senior engineers often remove as much code as they add.
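A rough sketch of that check in Python (assuming git is on PATH; loc_delta and the ref names are made up for illustration, not a real tool):

    import subprocess

    def loc_delta(base: str, head: str) -> tuple[int, int]:
        # Sum added/removed lines over the PR-style diff base...head.
        out = subprocess.run(
            ["git", "diff", "--numstat", f"{base}...{head}"],
            capture_output=True, text=True, check=True,
        ).stdout
        added = removed = 0
        for line in out.splitlines():
            a, r, _path = line.split("\t", 2)
            if a.isdigit() and r.isdigit():  # binary files report "-"
                added += int(a)
                removed += int(r)
        return added, removed

    # e.g. loc_delta("main", "feature-branch") -> (4980, 12) is a smell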
In my opinion this is another case where people look at it as a technical problem when it's actually a people problem. If someone does it once, they get a stern message about it. If it happens twice, it gets rejected and sent to their manager. Regardless of how you authored a pull request, you are signing off on it with your name. If it's garbage, then you're responsible.
Because we know what the value is without AI. I’ve been in the industry for about ten years and others have been in it longer than I have. Folks have enough experience to know what good looks like and to know what bad looks like.
The Stanford study showed mixed results, and you can stratify the data to show that AI failures are driven by process differences as much as circumstantial differences.
The MIT study just has a whole host of problems, but ultimately it boils down to: giving your engineers Cursor and telling them to be 10x doesn't work. Beyond each individual engineer being skilled at using AI, you have to adjust your process for it. Code review is a perfect example; until you optimize the review process to reduce human friction, AI tools are going to be massively bottlenecked.
Yeah, it doesn't really seem different from people copy/pasting from Stack Overflow without reading through it. This isn't really a new thing, though I guess nobody was really acting like SO was the second coming, so it's probably happening more now.
> Yeah, it doesn't really seem different from people copy/pasting from Stack Overflow without reading through it.
It is vastly different, because there are no (as far as I've ever seen) multi-thousand-line blocks of code to cut & paste as-is from Stack Overflow.
If you're pasting a couple dozen lines of code from a third party without understanding it, that's bad, but not unbearable to discover in a code review.
But if you're posting a 5000 line pull request that you've never read and expect me to do all your work validating it, we have a problem.
How do you have code review be an educational experience for onboarding/teaching if any bad submission is cut down with extreme prejudice?
I am happy to work with a junior engineer who is trying; we loop on some silly mistakes, and I pick and choose which battles to fight, balancing building confidence against developing good skills.
But I am not happy to have a junior engineer throw LLM stuff at me, full of the confidence the sycophantic AI instilled in them, and then have to churn on that. And if you're not in the same office, how do you even hope to sift through which bad parts are which kind?
Code review as an educational device is done. We're going to stop caring about the code before people who are bad programmers right now have time to get good.
We need to focus on architectural/system patterns and let go of code ownership in the traditional sense.
Aren't you effectively saying that no one will understand the code they're actually deploying? That's always true to an extent, but at least today you mostly understand the code in your sub area. If we're saying the future is AI + careful review, how am I going to have enough context to even do that review?
I expect that in most cases you'll review "hot spots" that AI itself identifies while trusting AI review for the majority of code. When you need to go deeper, I expect you'll have to essentially learn the code to fix it, in roughly the same way people will occasionally need to look at the compiler output to hunt down bugs.
To mentor requires a mentee. If a junior is not willing to learn (reasoning, coming up with a hypothesis, implementing the concept, and verifying it), then why should a senior bother to teach? As a philosopher once said, a teacher is not meant to give you the solution, but to help you come up with your own.
The problem is leadership buy-in. The person throwing LLM slop at GitHub has great metrics when leadership is looking at Cursor usage, lines of code, and PR counts, while the person slowing down to actually read wtf other people are submitting is now so drowned in slop that they have less time to produce on their own. So the execs see the person complaining as "not keeping up with the times".
If leadership is that inept, then this is likely only one of many problems they are creating for the organization. I would be looking for alternative employment ASAP.
The issue isn't recognizing malign influence within your current organization... it's an issue throughout the entire industry, and I think what we're all afraid of is that it's becoming more inevitable every day, because we're not the ones who have the final say. The Luddites essentially failed, after all, because the wider world was not and is not ready for a discussion about quality versus profit.
A poor quality product can only be profitable if no high quality alternative exists (at a similar price point). Every time that's the case, it's an epic opportunity for anybody with the wherewithal to raise some funding and build that high quality alternative themselves. A dysfunctional industry running on AI slop will not be able to keep you from eating their lunch unless they can achieve some sort of regulatory capture, which would be a separate (political) issue.
Regarding your Luddite reference, I think the cost-vs-quality debate was actually the centerpiece of that incident. Would you rather pay $100 for a T-shirt that's only marginally better than one that costs $10? I certainly would not. People are constantly evaluating cost-quality tradeoffs when making purchasing decisions. The exact ratio of the tradeoff matters. There's always a price point at which something starts (or stops) making sense.
...have you seen the funding numbers for AI startups versus non-AI? it's not even remotely close rn...
A major problem is that we have built our society in a way such that the wrong people end up with the most power and authority.
the majority of engineers across the industry feel the same way we do and yet there's little most of us can do unless we all decide to do something together :/
Maybe the process should have actual two-stage pull requests. First stage: you have to annotate the request yourself and show some test cases against it. Only then does the next person take a look. Not sure if such a flow is even possible with current tools.
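The first stage is at least scriptable. A minimal sketch as a CI gate in Python, assuming GitHub Actions (GITHUB_EVENT_PATH is the real payload path; the required section names are invented for illustration):

    import json, os, sys

    # Runs in a pull_request-triggered job; the event payload JSON
    # includes the PR description under pull_request.body.
    with open(os.environ["GITHUB_EVENT_PATH"]) as f:
        event = json.load(f)
    body = (event.get("pull_request") or {}).get("body") or ""

    required = ["## Self-review notes", "## Test cases"]  # invented names
    missing = [h for h in required if h not in body]
    if missing:
        print("Not ready for stage two (human review); missing:", missing)
        sys.exit(1)  # fail the check until the author fills these in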
> All while the author—who clearly only skimmed their “own” code—is taking no responsibility, going “whoopsie, Claude wrote that. Silly AI, ha-ha.”
Now, I don't do code reviews in large teams anymore, but if I did and something like that happened, I'd allow it exactly once; otherwise I'd try to get the person fired. Barring that, I'd probably leave, as that sounds like a horrible experience.
Ya, there's not much you can do when leadership is so terrible. If this kind of workflow is genuinely blessed by management, I would just start using Claude for code reviews too. Then when things break and people want to point fingers at the code reviewer, I'd direct them to Claude. If it's good enough to write code without scrutiny, it's good enough to review code without scrutiny.
Recently I was looking at building a gnarly form that had some really complex interactions and data behind it. It just kept being subtly buggy in all different ways. I threw Claude at it and went down so many rabbit holes; it was convinced there were bugs in all the different frameworks and libraries I was using, because it couldn't find the issue in the code (that it had written most of).
After a couple of days of tearing my hair out, I eventually dug in and rewrote it from first principles myself. The code afterwards was so much shorter, so much clearer, and worked a hell of a lot better (not going to say perfectly, but, well, haven't had a single issue with it since).
This is a broader issue about where we place blame when LLMs are involved. Humans seem to want to parrot the work and take credit when it's correct while deflecting blame when it's wrong. With a few well-placed lawsuits this paradigm will shift, imho.
I feel like I went through this stage ahead of time, a decade ago, when I was a junior dev and was starting my days by first reviewing the work of a senior dev who was cramming out code and breaking things at the speed of light (without LLMs), and then leaving a few dozen comments on pull requests from the offshore team. By midday I'd had enough for the day.
I left that company a few years ago, so now I'm invincible. No LLM can scare me!
I have noticed Claude's extreme and obtuse reluctance to delete code, even code that it just wrote that I told it is wrong. For example, it might produce a fn:
fn foo(bar)
And then I say, no, I actually wanted you to "foo with a frobnitz", so now we get:
fn foo(bar) // Never called
fn foo_with_frobnitz(bar)
The problem, rather, is that you still have to stay somewhat agreeable while calling out the bullshit. If you were "socially allowed" to treat colleagues like
> All while the author—who clearly only skimmed their “own” code—is taking no responsibility, going “whoopsie, Claude wrote that. Silly AI, ha-ha.”
as they really deserve, the problem would disappear really fast.
So the problem you outlined is social rather than the LLMs per se (even though they very often do produce shitty code).
They should get a clear explanation of the problem and of the team expectations the first time it happens.
If it happens a second time? A stern talk from their manager.
A third time? PIP or fired.
Let your manager be the bad guy. That's part of what they're for.
Your manager won't do that? Then your team is broken in a way you can't fix. Appeal to their manager, first, and if that fails put your resume on the street.
> If it happens a second time? A stern talk from their manager.
In my experience, the stern talk would probably go to you, for making the problem visible. The manager wouldn't want their manager to hear of any problems in the team. Makes them look bad, and probably lose on bonuses.
Happened to me often enough. What you described I would call a lucky exception.
> All while the author—who clearly only skimmed their “own” code—is taking no responsibility, going “whoopsie, Claude wrote that. Silly AI, ha-ha.”
After you've made your colleagues upset by submitting crappy code for review, you start to pay attention.
> LLM-written ones are almost entirely additive,
Unless you notice that code has to be removed, and you instruct the LLM to do so.
I don't think LLMs really change the dynamics here. "Good programmers" will still submit good code, easy for their colleagues to review, whether it was written with the help of an LLM or not.
>After you made your colleagues upset submitting crappy code for review, you start to pay attention.
If the only thing keeping you from submitting crappy code is an emotional response from coworkers, you are not a "good programmer", no matter what you instruct your LLM.
I'm working on the second project handed to me that was vibe-coded. What annoys me, assuming it runs, is the high number of READMEs; I'm not even sure which one to use, or whether any still apply.
They are usually verbose and include things like "how to run a virtual env for Python".
I'd say it depends on how coding assistants are used. When they're on autopilot I'd agree, as they don't really take the time to reflect on the work they've done before going on with the next feature of the spec. But in a collaborative process that's of course different, as you are pointing out things you want implemented in a different way. But I get your point: most PRs you'd flag as AI-generated slop are the ones where someone just ran them on autopilot and was somewhat satisfied with the outcome, treating the resulting code as a black box.
You have two options: burn out because you need to correct every stupid line of code, or... start not giving a damn about code quality and live a happy life while getting paid.
The sane option is to join the cult. Just accept every pull request. Git blame won't show your name anyways. If CEOs want you to use AI, then tell AIs to do your review, even better.
I feel that for a long time now, people coming into the industry have not really cared about code as a craft, but more about code as easy money.
This first became salient to me when I saw posts about open-source developers who maintain critical infrastructure living hand to mouth. Then the day-in-the-life videos of a software engineer working in a coffee shop. Then the bootcamps and the just-learn-to-code movement. Then the leetcode grinders. Then developers living in cars in SF due to lack of affordable housing. Now it is developers vibe coding themselves out of a job.
The issue is and will always be that developers are not true professionals. The standards are loosely enforced and we do a poor job of controlling who comes in and out of the industry. There are no ethics codes, skillsets are arbitrary, and we don't have any representation. Worse yet, we bought into this egocentric mindset where abuses to workers and customers are overlooked.
This makes no sense to me. Lawyers have bar associations, doctors have medical associations, coders have existential angst.
Now the bosses are like, "automate your way out of a job or you will lose your job."
I always ask myself, in what other "profession" would its members be so hostile to their own interests?
Because there's a difference between a "coder" and a software engineer.
Someone who finished a bootcamp might be able to write a simple program in Python, but that doesn't make them a software engineer.
I've said this out loud before and have gotten told I'm an elitist, that my degree doesn't make me better at software than those without one. That majoring in computer science teaches you only esoteric knowledge that can't be applied in a "real job".
On the other hand, the industry being less strict about degrees can be considered a positive. There definitely do exist extremely talented self-taught software engineers that have made a great career for themselves.
But I definitely agree with the need for some sort of standard. I don't care if some bootcamper gets a job at the latest "AI on the blockchain as a service" unicorn startup; good for them. I'd rather have people with formal degrees work on something like a Therac-25, though.
As one of the “self taught software engineers that made a great career for myself”, I think you are correct. Maybe not so much in the “better or worse” sense, but there are definitely moments in my “real job” where I can recognize that the thing we’re talking about or working on is something that my colleagues had formal instruction on and I didn’t, and usually in cases like this they’re better suited to talk about and work though the problem.
To me the biggest difference is that they had the time/opportunity to work on a huge breadth of different problems and develop their pattern matching ability, whereas I only get to work on problems specific to my role/employer, so anything extra I have to learn on my own time. But they already know it.
I think the profession of teacher comes close. There are extremely good and extremely bad teachers and everything in between. Knowing the subject you teach very well does not guarantee you can teach it well, often on the contrary.
Maybe coders can see themselves as teachers to the machine. Either they teach character by character, or vibe idea by vibe idea, or anything in between.
Ignoring LLMs for a second, some code I write is done in a sort of full-craft, full-diligence mode, where I only commit something when I am very proud of its structure and of every line of code. I know it inside and out, I have reasons for every decision, major or minor, and I don't know of any ways to make it better. Not only is the code excellent, I've also produced a person (me) who is an expert in that code.
Most code is not like that. Most code I want to get something done, and so I achieve something quite a bit below that bar. But some things I get to write in that way, and it is very rewarding to do so. It's my favorite code to write by a mile.
Back to LLMs - I find it is both easier than ever and harder than ever to write code in that mode. Easier than ever because, if I can actually get and stay in that mode psychologically, I can get the result I want faster, and the bar is higher. Even though I am able to write MUCH better code than an LLM is, I can write even better code with LLM assistance.
But it is harder than ever to get into that mode and stay in that mode. It is so easy to just skim LLM-generated code, and it looks good and it works. But it's bad code, maybe just a little bit at first, but it gets worse and worse the more you let through. Heck, sometimes it just starts out as not-excellent code, but every time you accept it without enough diligence the next output is worse. And by the time you notice it's often too late, you've slopped yourself, while also failing to produce an expert in the code that's been written.
Within the past 2 months, as I've started to use AI more, I've had this trajectory:
1. only using AI for small things, very impressed by it
2. giving AI bigger tasks and figuring out how to use it well for those bigger tasks
3. full-agentic mode where AI just does its thing and I review the code at the end
4. realising that I still need to think through all the code and that AI is not the shortcut I was hoping it to be (e.g. where I can give it a high-level plan and be reasonably satisfied with the final code)
5. going back to giving AI small tasks
I've found AI is very useful for research, proof-of-concepts and throwaway code of "this works, but is completely unacceptable in production". It's work I tend to do anyway before I start tackling the final solution.
Big-picture coding is in my hands, but AI is good at filling in the logic for functions and helping out with other small things.
Thank you, author. This essay made my day. It resonates with my thinking of the last few months. I tried to use AI at work, but most of the time I regrettably scrapped whatever it did and did the work on my own. So many points I agree with. Delegating thinking to AI is the worst thing I can do to my career. AI at best is a mediocre text generator.
So funny to read how people attack the author with criticism unrelated to the essay's message.
The worst thing for me is that I am actually good at LLM-based coding.
My coworkers who are in love with this new world are producing complete AI slop and still take ages to complete tasks. Meanwhile I can finally play to my strengths, as I actually know software architecture, can ask the LLM to consider important corner cases, and so on.
Plus, I am naturally good at context management. Being neurodivergent has given me decades of practice in working with entities that have a different way of thinking than my own. I have more mechanical empathy for the LLM because I don't confuse it for a human. My coworkers meanwhile get super frustrated that the LLM cannot read their minds.
That said, LLMs are getting better. My advantage will not last. And the more AI slop gets produced the more we need LLMs to cope with all the AI slop in our code bases. A vicious cycle. No one will actually know what the code does. Soon my job will mostly consist of praying to the machine gods.
It seems to me that someone like you, seen from the outside (e.g. from a code-reviewing colleague), simply appears to be getting more productive, with no drop in quality. Maybe some stylistic shifts.
I don't think anyone is complaining about that too much.
I wonder how many people there are like you, where we don't get much data. If people don't complain about it, we generally don't hear about it, because they're just quietly moving on with their work.
Not to be confused with the AI hypesters who are loudly touting the benefits with dubious claims, of course (:
I think I also fit into this category. Minor to medium productivity boost and maybe some stylistic evolving, but largely no complaints because it's just another tool I use sometimes.
Oh, first time hearing that term. Thank you, I love it!
Though I don't think this is at play here. Maybe a bit, but seeing how my coworkers prompt, there is an objective difference. I will spend half an hour on writing a good prompt and revise the implementation plan with the LLM multiple times before I allow it to even start doing anything, while my coworkers just write "fix this" and wonder why the stupid AI can't read their minds.
I am producing AI slop as well, just hopefully a bit less. Obviously hand crafted code is still much better but my boss wants me to use "AI" so I do as I am told.
Configuring editors, dot files, and dev environments consistently adds value by giving you familiarity with your working environment, honing your skills with your tools, and creating a more productive space tailored to your needs.
Who else becomes the go-to person for modifying build scripts?
The number of people I know who have no idea how to work with Git after decades in the field using it is pretty amazing. It's not helpful for everyone else when you're the one they delegate their merge-conflict bullshit to, because they've never bothered to learn anything about the tools they're using.
The issue is with the problem space - version control and reconciliation is hard. The fact we even have software to automate 99% of it is amazing.
Lawyers spend literally hundreds of hours doing just that. Well, their paralegals do.
Git is a legitimately amazing tool, but it can't magically make version control free. You still have to think because ultimately software can't decide which stuff is right and which is wrong.
How dumbed down does everything need to be? Git has warts for sure, but this whole "ideas guy, no actual understanding of anything" approach is how you get trainwrecks. There is no free lunch; you're going to pay one way or another for not understanding the tools of the craft, and for expecting that everything can be ridiculously simple.
Git doesn't just have warts, its DX is actively bad. If it was good you wouldn't have so many tools designed to make it not suck to work with 20 years after release. The graph first and diff first design decisions are both bad choices that are probably burning millions of man hours per year fixing things that should just work (to be fair, they were the right decisions at the time, times have changed).
It's pretty great if you understand how to do resets, interactive rebases, understand the differences between merges and rebases, keep your commit history fairly clean, and just work with the tool. I haven't had a problem with Git since I spent a day going through the git book something like 10 years ago.
Meanwhile this is in a discussion about tools which people spend incalculable amounts of hours tuning, for reference. The number of articles on Hacker News about how people have tuned their LLM setups is... grand to say the least.
Maybe Git is too complicated for hobby users, because it has a steep learning curve. But after two weeks of use you know enough to handle things, so it shouldn't be a problem in any professional environment.
What about any tool, language, library, or codebase that is unnecessarily complex? Should we never bother to put in the effort to learn to use them? It doesn't mean they are without value to us as programmers. For better or worse, the hallmark of many good programmers I've met is a much higher than average tolerance for sitting down and just figuring out how something computer-related works instead of giving up and routing around it.
> To solve problems. Coding is the means to an end, not the end itself.
100% this. I think a lot of the people who are angry at AI coding for them are "code calligraphers" who care more about the form of the thing they're making than the problem it solves. I can't see why someone who's primarily solution-focused would shed a tear at AI coding for them.
Why would someone who likes solving problems choose a very lucrative career path solving problems… hmmm
You can also solve problems as a local handyman but that doesn’t pad the 401K quite as well as a career in software.
I feel like there are a lot of tech-fetishists right now on the "if you don't deeply love to write code then just leave!" train, without somehow realizing that most of us have our jobs because we need to pay bills, not because it's our burning passion.
It's because there are a significant number of us for whom tinkering with and building shit is basically a compulsion. And software development is vastly more available, and quicker to iterate and thus more satisfying, than any other tinkering discipline. It's probably related to whatever drives some people to make art, the only difference being that the market has decided that the tinkerers are worth a hell of a lot more.
For evidence towards the compulsion argument, look at the existence of FOSS software. Or videogame modding. Or all the other freely available software in existence. None of that is made by people who made the rational decision of "software development is a lucrative field that will pay me a comfortable salary, thus I should study software development". It's all made by people for whom there is no alternative but to build.
> I feel like there are a lot of tech-fetishists right now on the "if you don't deeply love to write code then just leave!" train, without somehow realizing that most of us have our jobs because we need to pay bills, not because it's our burning passion.
I would claim that I love coding quite a lot. The problem is rather that my bosses and colleagues don't care about what I love about it. It is appreciated if you implement tasks fast with shitty code, instead of taking the fact that tasks are easy to implement and the code runs really fast as strong evidence that the abstractions were well-chosen.
Thus, I believe that people who just do it for the money have it easier in the "programming industry" than programmers who really love programming, who are thus a big annoyance to managers.
I really wonder why companies talk all the time about "love for programming" instead of "love for paying the bills" and "love for implementing tasks fast with shitty code", which would give them people who are a much better culture fit for their real organizational processes.
Very level-headed comment. I'm one of those who sees programming as a means to an end and nothing else.
If I order something to be delivered, I don't care what model of car the delivery company uses. Much less what kind of settings they have for the carburetor needles or what kind of oil they're using. Sure, somebody somewhere might have to care about this.
That's also how people like me see programming. If the code delivers what we need, then great. Leave it be like that. There are more interesting problems to solve, no need to mess with a solution which is working well.
The thing is, most times you are indeed buying the car that is going to make the delivery. And it's going to live in your garage. And if you're not careful, one day it will drive itself off a cliff, stall in the middle of a 10-hour drive, or you'll get robbed by individuals hiding in the trunk.
People who realize this care about their oil type and what tires they put on. People who don't realize it pay for it later, when that crash does happen and they don't know how to recover; so cue up the war room, etc...
Even if you're not dogfooding your own software, if you do not take care of it properly, the cost of changes will climb up.
> Even if you're not dogfooding your own software, if you do not take care of it properly, the cost of changes will climb up.
How do you mean? If the software works, then it's done. There is no maintenance and it will continue working like that for decades. It doesn't have corrosion and moving parts like a car. Businesses make sure not to touch it or the systems it is depending on.
That would be fine if the dependencies were permanent. Hardware fails and needs to be replaced. The OS will be upgraded (macOS is more than happy to make breaking changes). If the software is networked, that's another transient plane. Libraries will fall out of support range.
Then there's the fact that the user's needs fluctuate. Imagine having to pay for a whole other piece of software because the current code is spaghetti, full of hardcoded values and magic constants. It worked, but now you want a slight adjustment, and that can no longer be made unless you're willing to rewrite the whole thing (and pay for it). That would be like having to buy a whole new car because you moved to the house next door, as the car is hardwired to move only between your old place and where you work.
In my opinion, the OS should not be updated. Not if important software is running on the machine. That's why we see cash registers still using Windows XP.
Sure, if you test it and see that there is no issue with updating, then you can update if you want. But neither the OS or the hardware or anything else should get any priority over the business-crucial software you are running. Even with hardware failures, the better option is to get older hardware for replacement if newer hardware has compatibility issues.
> "...without somehow realizing that most of us have our jobs because we need to pay bills..."
Oh, I wouldn't say that. The hacker culture of the 1970s from which the word hacker originated often poked fun at incurious corporate programmers, and IIRC even Edsger Dijkstra wrote a fair number of acerbic comments about them and their disinterest in the craft and science of computing.
Well, most of them (the hackers from the 70s) probably did do it solely for the love of the game.
We’re 50 years past that now. We’re in the era of boot camps. I feel semi confident saying “most of us” meaning the current developer work force are here for well paying jobs.
Don’t get me wrong I like software development! I enjoy my work. And I think I’d probably like it better than most things I’d otherwise be doing.
But what I’ve been getting at is that I enjoy it for the solving problems part. The actual writing of code itself for me just happens to be the best way to enjoy problem solving while making good money that enables a comfortable life.
To put it another way, if being a SWE paid a poverty wage, I would not be living in a trailer doing this for my love of coding. I would go be a different kind of engineer.
I think bootcamp era was a decade ago and we're past it now. Not long ago I saw something on here about how a lot of them are closing down and attendance is dropping for the ones still open - likely because of LLMs.
You owe your cushy job and big paycheck entirely to those tech-fetishists that came before you.
Secondly, you are very blind if you don’t see that the AI making your job “easier” is close to replacing you entirely, if you don’t also have a deep understanding of the code produced. What’s to stop the Project Manager from vibe coding you out of the loop entirely?
The state of the industry, both short and medium term, is that you want to be the one doing the replacing rather than the one being replaced. Not great, but this is where we are. If you are, say, an SRE, there are a myriad of companies working hard to eliminate SREs, but they need experts to set shit up so that SREs are not needed. The same thing will cascade to other tech work, some areas faster than others. Career-wise I think it is wise now to position yourself as one who knows how to set shit up for the "great replacement".
Yes we are rapidly moving towards a time where bullshitting will be more valued than deep understanding and problem solving. Both LLMs and the broader culture are pushing in that direction.
We all owe every part of everything to those who’ve come before us. That goes without saying, really.
> Secondly, you are very blind if you don’t see that the AI making your job “easier” is close to replacing you entirely, if you don’t also have a deep understanding of the code produced.
Brother, don't patronize me. I'm a senior engineer; I'm not yeeting vibe code I don't understand into prod.
I also understand the possibility of all of this potentially devaluing my labor or even wholesale taking my job.
What would you like me to do about that? Is me refusing to use the tools going to change that possibility?
Have yet to hear what else we should be doing about this. The hackernews answer appears to be some combination of petulance + burying head in the sand.
It's more of a funeral, a collective expression of grief over a great, painful loss. An obituary for a glorious, short time in history when it was possible to combine a specific kind of intelligence, creativity, discipline, passion, and values and be well compensated for it. A time when the ability to solve problems, and solve them well, had value. Not just being better at taking credit than other people.
It was wonderful.
I know you don’t care. So just go to some other forum where you don’t have to endure the whining of us who have lost something that was important to us.
At 47, I am an older guy already. But in my generation, people who went on to be programmers usually started tinkering with code at ~ 11 y.o. (back then on ZX Spectrum and similar cheap beasts available in freshly post-Communist Europe) out of interest and passion, not because of "I want to build a lucrative career".
(Given how massively widespread piracy was back then, programming looked rather like a good way to do hard work for free.)
Money matters, but coders who were drawn into the field purely by money and are personally detached from the substance of the job are an unknown species to me.
"You can also solve problems as a local handyman"
That is NOT the same sort of talent. My fingers are clumsy; my mind is not.
Additionally, in many countries a developer is an office worker like everyone else; there are no SV-lottery-level salaries.
In fact, those of us who would rather stay programmers beyond 30 years old are usually seen as failures from society's point of view: where is our hunger for a career and for climbing up the ladder?
Now the whole set of SaaS products with low-code integrations, which were already a bit depressing from a programmer's point of view, are getting AI agents as well.
It feels like coding as in the old days is increasingly being left to hobby coding, or to a select few working on industry infrastructure products/frameworks.
> if handyman work was paying $600/hr your fingers would un-clums themselves reaaaaaaly fast
I don't believe that. When it comes to motor skills, including dancing etc., I am probably in the lowest quintile of the population.
Of course, I could become somewhat better by spending crazy amounts of time on training, but I would still be non-competitive even in comparison with an average person.
OTOH I am pretty good at writing prose/commentary, even though it is not a particularly lucrative activity, to the degree of being a fairly well-known author in Czechia. My tenth book is just out.
Talents are weird and seem to have a mind of their own. I never planned to become an author, but something inside just wanted out. My first book was published just a few days shy of my 40th birthday, so not a "youthful experiment" by any means.
A bit harsh off a single post. I like solving problems, not just software engineering problems, and I like writing code as a hobby, but I went into this field only due to the high salary and benefits.
In fact, I usually hate writing code at my day job because it is boring stuff 20 out of 26 sprints.
I don't think it is. Labeling passion and love for your work "tech fetishism" is spiritually bankrupt. Mind you, we're in general not talking here about people working in a mine to survive, which is a different story.
But people who do have a choice in their career, doing something they have no love for solely to add more zeros to their bank account? That is the fetish; that is someone who has himself become an automaton. It's no surprise they seem to take no issue with LLMs, because they're already living like one. Like how devoid of curiosity do you have to be to do something half your waking life that you don't appreciate if you're very likely someone who has the freedom to choose?
> Like how devoid of curiosity do you have to be to do something half your waking life that you don't appreciate if you're very likely someone who has the freedom to choose?
Do you understand work-life balance? I get paid to do the job, I satisfy my curiosities in my free-time.
> But people who do have a choice in their career, doing something they have no love for solely to add more zeros to their bank account?
Because I doubt finding a well paying job that you love is something that is achievable in our society, at least not for most people.
IMO, the real fetishization here is "work is something more than a way to get paid"; that's corporate propaganda I'm not falling for.
>Because I doubt finding a well paying job that you love is something that is achievable in our society,
Which is why I stressed twice, including in the part you chose to quote, that I am talking about people who can achieve that. If you have to take care of your sick grandmother, you don't need to feel addressed.
But if you did have the resources to choose a career, like many people who comment here, and you ended up a software developer completely devoid of passion for the craft, you're living like a Severance character. You don't get to blame the big evil corporations for a lack of dedication to a craft. You don't need to work for one to be a gainfully employed programmer, and even if you do and end up on a deadbeat project, you can still love what you do.
This complete indifference to what you produce, this complete alienation from work, voluntarily chosen, is a diseased attitude.
> The point of most jobs in the world is to "solve problems". So why did you pick software over those?
Because in a lot of jobs where you (have to) solve problems, the actual problems to solve are rather "political". So, if you are not good at office politics or you are not a good diplomat, software is often a much better choice.
The honest answer that applies to almost everyone here is that as a kid, they liked playing computer games and heard that the job pays well.
It's interesting, because to become a plumber, you pretty much need a plumber parent or friend to get you interested in the trade and show you the ropes. Meanwhile, software engineering is closer to the universal childhood dream of "I want to become an astronaut" or "I want to be a pop star", except more attainable. It's very commoditized by now, so if you're looking for that old-school hacker ethos, you're going to be disappointed.
I think you're grossly underestimating the number of people here who fell into software development because it's one of the best outlets for "the knack" in existence. Sure, this site is split between the "tech-bro entrepreneur"-types and developers, and there are plenty of developers who got into this for the cash, but in my experience about a quarter of developers (so maybe 10-15% of users on this site) got into this profession due to getting into programming because it fed an innate need to tinker, and then after they spent a ton of time on it discovered that it was the best way to pay the bills available to them.
I got stupidly lucky that one of my hobbies as an avid indoorsman was not only valued by the private sector but also happened to pay well. This career was literally the only thing that saved me from a life of poverty.
> Coding is the means to an end, not the end itself.
> That may be fun for you, but it doesn’t add value
I'm not disagreeing with you per se, but those statements are subjective, not an objective truth. Lots of people fundamentally enjoy the process of coding, and would keep doing it even in a hypothetical world with no problems left to solve, or if they had UBI.
You can spend as much time as you want on "configuration of our editor, tinkering with dot files, and dev environments" and otherwise honing your craft; the business machine will still look at you as a cog.
May seem depressing, but the bright side is that you as an individual are then free to find joy in your work wherever you can find it... whether it's in delivering high-quality code or just collecting a paycheck.
I think the author makes a decent point with regard to "problem solving", better tools, and how LLMs somehow feel different. Fortran is a better tool, but you can still reproducibly trace things back to assembly code through the compiler.
LLMs feel like a non-deterministic compiler that transforms English into code of some sort.
These are my thoughts exactly. Whenever I use agents to assist me in creating a simple program for myself, I carefully guide them through everything I want created, usually writing pages and pages of detailed plaintext instructions and specifications for the backend of things; then I modify the result and design a user interface.
I very much enjoy the end product, and I also enjoy designing (not necessarily programming) a program that fits my needs, but rarely the implementing, as I have issues focusing on things.
A chef who sharpens his knives should stop because it doesn't add value
A contractor who prefers a specific brand of tool is wrong because the tool is a means to an end
This is what you sound like. Just because you don't understand the value of a craftsman picking and maintaining their tools doesn't mean the value isn't real.
Yes, but the point of being a chef is the food, not the knives. If there's a better way to prepare food than a knife, but you refuse to change, are you really a chef? Or are you a chef knife enthusiast?
I don't think that's really the point of this post; it's all about how LLMs are destroying our craft (ie, "I really like using knives!"), not really about whether the food is better.
I think the real problem is that it's actually increasingly difficult to defend the artisanal "no-AI" approach. I say this as a prior staff-level engineer at a big tech company who has spent the last six months growing my SaaS to ~$100k in ARR, and it never could have happened without AI. I like the kind of coding the OP is talking about too, but ultimately I'm getting paid to solve a problem for my customers. Getting too attached to the knives is missing the point.
Call me crazy, but my guess is that it may not have happened without the decade of experience it took you to reach a staff-level engineering position at a big tech company, which is what lets you properly review the AI code you're producing.
Absolutely. But what if the point of using the knives is to be able to understand how to use the machines that wield the knives for us? If we're not replicating the learning part, where do we end up?
I thought it was interesting that GPT5's comments (on prompting it for feedback on the article) seem to overlap with some of the points you guys made:
My [GPT5's - poster's note] take / reflections:
I find the article a useful provocation: it asks us to reflect on what we value in being programmers. It's not anti-AI per se, but it is anti-losing-the-core-craft. For someone in your position (in *redacted* / Europe) it raises questions about what kind of programming work you want: deep, challenging, craft-oriented, or more tool/AI-mediated. It might also suggest you think about building skills that are robust to automation: e.g., architecture, critical thinking, complex problem solving, domain knowledge. The identity crisis is less about "will we have programmers" and more about "what shapes will programming roles take".
A closer analogy would be a chef who chooses to have a robot cut his tomatoes. If the robot did it perfectly every time, I'm sure he would use the robot. If the robot mushed the tomatoes some of the time, would he spend time carefully inspecting the tomatoes? Or would he just cut them himself?
Even if the robot did it perfectly, you'd still have posts like these lamenting the loss of the craft of cutting tomatoes. And they're not wrong!
I guess I don't understand posts like this IF you think you can do it better without LLMs. I mean, if using AI makes you miserable because you love the craft of programming, AND you think using AI is a net loss, then just...don't use it?
But I think the problem here that all these posts are speaking to is that it's really hard to compete without using AI. And I sympathize, genuinely. But also...are we knife enthusiasts or chefs?
I genuinely don't think it's hard to compete. I use AI sometimes; I go without it more often than I use it. I find myself at least as productive as people who primarily use AI.
I personally tire of people acting like it's some saving grace that doubles/triples/100x's your productivity rather than a tool that may give you a 10-20% uplift just like any other tool.
There are chefs but they are not us. Though it will upset many to hear it, what we are is fast food workers, assembling and reheating prepackaged stuff provided to us. Now a machine threatens to do the assembling and reheating for us, better and faster than we on average do.
The chefs coming up with recipes and food scientists doing the pre-packaging will do fine and are still needed. The people making the fast food machine will also do well for themselves. The rest of us fast food workers, well, not so much...
This is a strawman. The point is that the original poster was going on about knives, forgetting that the final product is the actual thing that matters, not whatever tool is used to create it. In your example, if the food is inferior, then the food is inferior.
Some of you have never been laid off and it shows.
Intrinsic value is great, where achievable. Companies do not care at all about intrinsic value. I take pride in my work and my craft to the extent I am allowed to, but the reality is that those of us who can't adapt to the business's desires will be made obsolete and cut loose, regardless of whatever values we hold.
The issue is that a lot of “programmers” think bike-shedding is the essence of programming. Fifty years ago, they would have been the ones saying that not using punch cards takes away from the art of programming, and then proudly showing off multiple intricate hole punchers they designed for different scenarios.
Good problem solvers... solve problems. The technological environment will never devalue their skills. It’s only those who rest on their laurels who have this issue.
> LLMs seem like a nuke-it-from-orbit solution to the complexities of software. Rather than addressing the actual problems, we reached for something far more complex and nebulous to cure the symptoms.
The author overlooks a core motivation of AI here: to centralize the high-skill high-cost “creative” workers into just the companies that design AIs, so that every other business in the world can fire their creative workers and go back to having industrial cogs that do what they’re told instead of coming up with ‘improvements’ that impact profits. It’s not that the companies are reaching for something complex and nebulous. It’s that companies are being told “AI lets you eject your complex and nebulous creative workers”, which is a vast reduction in nearly everyone’s business complexity. Put in the terms of a classic story, “The Wizard of Oz”, no one bothers to look behind the curtain because everything is easier for them — and if there’s one constant across both people and corporations, it’s the willingness to disregard long-term concerns for short-term improvements so long as someone else has to pay the tradeoff.
Yes! This happened in so many industries. Banking is my go to example, where we used to have local bankers making decisions based on local knowledge of their community, but then decision making was centralized into remote central HQs and the local bankers moved into living below the API, while the central HQ guys began to make all the bucks. See also "seeing like a state" and the concept of legibility.
When I started programming for Corporate™ back in 1995, it was a wildly different career than what it has become. Say what you want about the lunatics running the asylum, but we liked it that way. Engineering knew their audience, knew the tech stack, knew what was going on in "the industry", and ultimately called the shots.
Your code was your private sandbox. Want to rewrite it every other release? Go for it. Like to put your curly braces on a new line? Like TABs (good for you)? Go for it. It's your code, you own it. (You break it, you fix it.)
No unit tests (we called that parameter checking). No code reviews (well, nothing formal): often, time was spent in co-workers' offices talking over approaches, white-boarding APIs... Often if a bug was discovered or known, you just fixed it. A formal process may have been emerging, but to the lunatics, it was optional.
You can imagine how management felt — having to essentially just trust the devs to deliver.
In the end management won, of course.
When I am asked if I am sorry that I left Apple, I have to tell people, no. I miss working at Apple in the 90's, but that Apple was never coming back. And I hate to say it, but I suspect the industry itself will never return to those "cowboy coding" days. It was fun while it lasted.
I started around the same time. No unit tests, but we did have code reviews because of ISO 9001 requirements. That meant printing out the diffs on the laser printer and corralling 3 people into a meeting room to pore over them and then have them literally sign off on the change. This was for an RTOS that ran big industrial controls in things like steel plants and offshore oil rigs.
Project management was a 40 foot Gantt chart printed out on laser printer paper and taped to the wall. The sweet sound of waterfall.
Back when I started in the late 2000s you had much clearer lines around your career path and speciality.
There was a difference between a sysadmin and a programmer. Now I'm expected to be my own sysadmin/ops guy while also delivering features. While I worked on my systems chops for fun on the side, I purposely avoided it at work; I don't usually enjoy how bad vendor documentation, training, etc. can be in the real world of Corporate America.
Unfortunately, the problem with cowboy coding is that it takes one idiot on the team to ruin it for everyone. As a company grows, there are more and more idiots by pure chance, which means you need bigger and bigger walls to contain the blast radius. If you have a team of trustworthy engineers then cowboy coding is extremely efficient, but it simply doesn't scale, especially considering how difficult it is to evaluate the quality of a given candidate when hiring.
I believe that cowboy coding might still be practiced in small companies, or in small corporate pockets, where the number of engineers doesn't need to scale.
> And I hate to say it, but I suspect the industry itself will never return to those "cowboy coding" days. It was fun while it lasted.
I don't think the industry will return to it, but I suspect there will be isolated environments for cowboys. When I was at WhatsApp (2011-2019), we were pretty far on the cowboy side of the spectrum... although I suspect it's different now.
IMHO, what's appropriate depends on how expensive errors are to detect before production, and how expensive errors are when detected after production. I lean into reducing the cost to fix errors rather than trying to detect errors earlier. OTOH, I do try not to make embarrassing errors, so I try to test for things that are reasonable to test for.
Depends. I see the teams around me slowly being corralled like cattle, no longer doing the corralling. My own team is still chiefly cowboys but the writing is on the wall and as we grow younger we lose more and more footing in this battle.
I also agree with comments on this thread stating that problem solving should be the focus and not the code.
However my view is that our ability to solve problems which require a specific type of deep thought will diminish over time as we allow for AI to do more of this type of thinking.
Purely asking for a feature is not “problem solving”.
I think you can enjoy both aspects - both the problem solving and the craft. There will be people who agree that of course from a rational perspective solving the problem is what matters, but for them personally the "fun" is gone. Generally people that identify themselves as "programmers" as the article does would be the people who enjoy problem solving/tinkering/building.
What if you want to be a better problem solver (in the tech domain)? Where should you focus your efforts? That's what is confusing to me. There is a massive war between the LLM optimists and pessimists. Whenever I personally use LLM tools, they are disappointing albeit still useful. The optimists tell me I should be learning how to prompt better, that I should be spending time learning about agentic patterns. The pessimists tell me that I should be focusing on fundamentals.
The fact of the matter is that a lot of the development work out there is just boilerplate: build scripts, bootstrapping and configuration, defining mappings for Web APIs and ORMs (or any type of DB interaction), as well as dealing with endless build-chain errors and stuff I honestly think is bullshit.
When I see a TypeScript error that's borderline incomprehensible, sometimes I just want to turn to an LLM (or any tool; if there were enough formalized methods and automatic fixes/refactorings to make LLMs irrelevant, I'd be glad!) and tell it "Here's the intent, make it work."
It's fun to me to dig into the code when I want to reason about the problem space and the domain, but NOT very much so when I have to do menial plumbing. Or work with underdocumented code by people long gone. Or work on crappy workarounds and bandaids on top of bandaids that were pushed out the door due to looming deadlines, sometimes by myself 2 months prior. Or work with a bad pattern in the codebase, knowing that refactoring it might take changes in 30 places that I don't have time for right now. LLMs make some of those issues dissolve, or at least carry so little friction that they become solvable.
Assumption: when I use LLMs, I treat the output as any other code; it must compile, be readable, make sense, and actually work.
I think "Identity Crisis" is a bit over dramatic, but I for the most part agree with the sentiment. I have written something in the same vane, but still different enough that I would love to comment it but its just way more efficient to point to my post. I hope that is OK: https://handmadeoasis.com/ai-and-software-engineering-the-co...
It's the explicitly stated goal of several of the largest companies on the planet which put up a lot of money to try to reach that goal. And the progress over the past few years has been stunning.
I liked your emphasis on individual diversity, and an attendant need to explore, select, adapt, and integrate tooling. With associated self-awareness. Pushing that further, your "categories" seem more like exemplars/prototypes/archetypes/user-stories, helpful discussion points in a high-dimensional space of blended blobs. And as you illustrate, it branches not just on the individual, but also on what they are up to. And not just on work vs hobby, but on context and task.
It'd be neat to have a big user story catalog/map, which tracks what various services are able to help with.
I was a kid in NE43 instead of TFA's Building 26 across the street, with Lisp Machines and 1980s MIT AI's "Programmer's Apprentice" dreams. Years ago I gave up on ever having a "this... doesn't suck" dev env, on being able to "dance code". We've had such a badly crippling research and industrial policy, and profession... "not in my lifetime", I thought. Knock on wood, I'm so happy for this chance at being wrong. And also, for "let's just imagine for a moment, ignoring the utterly absurd resources it would take to create, science education content that wasn't a wretched disaster... what might that look like?" - here too it's LLMs, or no chance at all.
> I would love to read a study on why people so readily believe and trust in AI chatbots.
We associate expert authority with a) quick and b) broad answers. It's like when we're listening to a radio show and they patch in "Dr So N. So", an expert in Whatever from Academia Forever U. They seem to know their stuff because a) they don't say "I don't know, let me get back to you after I've looked into that" and b) they can share a breadth of associated validations.
LLMs simulate this experience, by giving broadish, confident, answers very quickly. We have been trained by life's many experiences to trust these types of answers.
This comes up whenever _anything_ is automated: "this is the end of programming as a career!" I heard this about Rational Rose in the late 90's, and Visual Basic in the early 90's.
I don't think I'm sticking my head in the sand - an advanced enough intelligence could absolutely take over programming tasks - but I also think that such an intelligence would be able to take over _every_ thought-related task. And that may not be a bad thing! Although the nature of our economy would have to change quite a bit to accommodate it.
I might be wrong: Doug Hofstadter, who is way, way smarter than me, once predicted that no machine would ever beat a human at chess unless it was the type of machine that said "I'm bored of chess now, I would prefer to talk about poetry". Maybe coding can be distilled to a set of heuristics the way chess programs have (I don't think so, but maybe).
Whether we're right or wrong, there's not much we can do about it except continue to learn.
I can tell that nowadays, when it comes to distributed systems built on top of SaaS enterprise products using the MACH architecture approach, my programming at work is quite minimal.
Most of my programming skills are kept up to date on side projects; thankfully I can manage the time to do them, between family and friends.
> Creative puzzle-solving is left to the machines, and we become mere operators disassociated from our craft.
For me, at least, this has not been the case. If I leave the creative puzzle-solving to the machine, it's gonna get creative alright, and create me a mess to clean up. Whether this will be true in the future, hard to say. But, for now, I am happy to let the machines write all the React code I don't feel like writing while I think about other things.
Additionally, as an aside, I already don't think coding is always a craft. I think we want it to be one because it gives us the aura of craftspeople. We want to imagine ourselves as bent over a hunk of marble, carving a masterpiece in our own way, in our own time. And for some of us, that is true. For most programmers in human history, though, they were already slinging slop before anybody had coined the term. Where is the inherent dignity and human spirit on display in the internal admin tool at a second-tier insurance company? Certainly, there is business value there, but it doesn't require a Michelangelo to make something that takes in a PDF and spits out a slightly changed PDF.
Most code is already industrial code, which is precisely the opposite of code as craft. We are dissociated from the code we write; the company owns it, not us, which is by definition the opposite of a craftsman and the craft mode of production. I think AI is putting a finer, sharper point on this, but it was already there and has been since the beginning of the field.
Great read. Unlike technologies of the past that automated away the dangerous/boring/repetitive/soul-sucking jobs, LLMs are an assault on our thinking.
Social media already reduced our attention spans to that of goldfish, open offices made any sort of deep meaningful work impossible.
This process has been affecting most of the world's workers for the past several centuries. Programming has received special treatment for the last few decades, and it's understandable that HN users would jump to protect their life investment, but that treatment need not last.
Hand-coding can continue, just like knitting co-exists with machine looms, but it need not ultimately maintain a grip on the software productive process.
It is better to come to terms with this reality sooner rather than later in my opinion.
> This process has been affecting most of the world's workers for the past several centuries.
It has also been responsible for predictions of revolutions which then failed to materialize. 3D printing would make some kind of manufacturing obsolete, computers would make about half the world's jobs obsolete, etc etc.
Hand coding can be the knitting to the loom, or it can be industrialized plastic injection molding to 3D printing. How do you know? That distinction is not a detail--it's the whole point.
It's survivorship bias to only look at horses, cars, calculators, and whatever other real job market shifting technologies occurred in the past and assume that's how it always happens. You have to include all predictions which never panned out.
As human beings we just tend not to do that.
[EDIT: this being Pedantry News let me get ahead of an inevitable reply: 3D printing is used industrially, and it does have tremendous value. It enabled new ways of working, it grew the economy, and in some cases yes it even replaced processes which used to depend on injection molding. But by and large, the original predictions of "out with the old, in with the new" did not pan out. It was not the automobile to the horse and buggy. It was mostly additive, complementary, and turned out to have different use cases. That's the distinction.]
> Hand coding can be the knitting to the loom, or it can be industrialized plastic injection molding to 3D printing. How do you know? That distinction is not a detail--it's the whole point.
One could have made a reasonable remark in the past about how injection molding is dramatically faster than 3D printing (it applies material everywhere, all at once), scales better for large parts, et cetera. This isn't really true for what I'm calling hand-coding.
Obviously nothing about the future can be known for certain... but there are obvious trends that need not stop at software engineering.
I think there is only a very narrow band where LLMs are good enough at producing software that "hand-coding" is genuinely dead but at the same time bad enough that (expensive) humans still need to be paid to be in the loop.
Did an AI write your post or did you "hand write it"?
Code needs to be simple and maintainable and do what it needs to do. Autocomplete wasn't a huge time saver because writing code wasn't the bottleneck then, and it definitely is not the bottleneck now. How much you rely on an LLM won't necessarily change the quality or speed of what you produce. Especially if you pretend you're just doing "superior prompting with no hand coding involved".
LLMs are awesome but the IDE didn't replace the console text editor, even if it's popular.
> Code needs to be simple and maintainable and do what it needs to do.
And yet after 3 decades in the industry I can tell you this fantasy exists only in snarky HN comments.
> Hand-coding is no longer "the future"?
hand-coding is 100% not the future, there are teams already that absolutely do not hand-code anything anymore (I help with one of them that used to have 19 "hand-coders" :) ). The typing for sure will get phased out. it is quite insane that it took "AI" to make people realize how silly and wasteful it is to type characters into IDEs/editors. the sooner you see this clearly the better it will be for your career
> How much you rely on an LLM won't necessarily change the quality or speed of what you produce.
if it doesn't you need to spend more time and learn and learn and learn more. 4/6/8 terminals at a time doing all various things for you etc etc :)
I think you’re doing something wrong if your throughput gets too high. LLMs will go hog wild adding hundreds of LoC for features that could take tens. You have to maintain that shit later.
I started writing code in basic on a beige box. My first code on windows was a vb6 window that looked like the AOL login screen and used open email relays to send me passwords.
I've written a ton of code in my life and while I've been a successful startup CTO, I've always stayed in IC level roles (I'm in one right now in addition to hobby coding) outside of that, data structures and pipelines, keep it simple, all that stuff that makes a thing work and maintainable.
But here is the thing: writing code isn't my identity. Being a programmer, vim vs emacs, mechanical keyboards, RTFM noob, pure functions, serverless, leetcode, cargo culting, complexity merchants, resume-driven dev, early semantic CSS lunacy: these are things outside of me.
I have explored all of these things, had them be part of my life for better or worse, but they aren't who I am.
I am a guy born with a bunch of heart defects who is happy to be here and trying new stuff, I want to explore in space and abstraction through the short slice of time I've got.
I want to figure stuff out and make things and sometimes that's with a keyboard and sometimes that's with a hammer.
I think there are a lot of societal status issues (devs were mostly low social status until The Social Network came out) and personal identity issues.
I've seen that for 40 years, anything tied to a persons identity is basically a thing they can't be honest about, can't update their priors on, can't reason about.
And people who feel secure and appreciated don't give much grace to those who don't, a lot of callous people out there, in the dev community too.
I don't know why people are so fast to narrow the scope of who they are.
Humans emit meaning like stars emit photons.
The natural world would go on without us, but as far as we have empirically observed we make the maximally complex, multi modally coherent meaning of the universe.
We are each like a unique write head in the random walk of giving the universe meaning.
There are a ton of issues, from network resilience to maximizing the random meaning-generation walk, where AI and consolidation are extremely dangerous. As far as new stuff in the pipeline goes, I think it's AI and artificial wombs that carry the greatest risk of narrowing the scope of human discovery and unique meaning expansion to a catastrophic point.
But so many of these arguments are just post-hoc rationalizations to poorly justify what at root is this loss of self identity, we were always in the business of automating jobs out from under people, this is very weak tea and crocodile tears.
The simple fact is, all our tools should allow us to have materially more comfortable and free lives. The AI isn't the problem; it's the fact that devs didn't understand that tech is best when empowering people to think and connect better and have more freedom and self-determination with their time.
If that isn't happening, it's not the code's fault; it's the fault of the network architecture of our current human power structures.
Agree, and well said. There are no points for hard work, only results -- this is an extremely liberating principle when taken to the limit and we should be happy to say goodbye to an era of manual software-writing being the norm, even if it costs the ego of some guy who spent the last 20 years being told SWE made him a demi-god.
It really is a higher-level language for coding though. Not as precise as Fortran, but with far more upside. I imagine monks bemoaning the printing press that took away the joy of the perfectly handwritten bibles they made in solitude.
Welcome to HN! I don't think you understood TFA. A) LLM prompting is categorically different from Fortran, not another version of it that's 'less precise'. B) 'upside' again is entirely different from craftsmanship.
To be honest I already reached that identity crisis even before LLMs.
Nowadays many enterprise projects have become placing SaaS products together, via low code/no code integrations.
A SaaS product for the CMS, another one for assets, another for ecommerce and payments, another for sending emails, another for marketing, some edge product for hosting the frontend, finally some no code tools to integrate everything, or some serverless code hosted somewhere.
Welcome to MACH architecture.
Agents now made this even less about programming, as the integrations can be orchestrated via agents, instead of low code/no code/serverless.
I'm in the opposite camp. Programming has never been fun to me, and LLMs are a godsend to deal with all the parts I don't care for. LLMs have accelerated my learning speed and productivity, and believe it or not, programming even started to become fun and engaging!
As an aside, I've been using copilot code review before handing off any of my code to colleagues. It's a bit pedantic, but it generally catches all the most stupid things I've done so that the final code review tends to be pretty smooth.
I hate to suggest that the fix to LLM slop is more LLMs, but in this case it's working for me. My coworkers also seem to appreciate the gesture.
I agree that LLMs are great for a cursory review, but crucially, when you ask copilot to review your code, you actually read and think about everything copilot tells you in the response. The biggest issues arise because people will blindly submit AI-generated code without reading or thinking about it.
Some people code to talk and don't want anything said for them. That's okay. Photography and paintings landed in different places with different purposes.
But all of Programming isn't the same thing. We just need new names for different types of programmers. I'm sure there were farmers that lamented the advent of machines because of how it threatened their identity, their connection to the land, etc....
but I want to personally thank the farmers who just got after growing food for the rest of us.
I think in a few years, we will realize that LLMs have impacted our lives in a deeply negative way. The relatively small improvements LLMs bring to my life will be vastly outweighed by the negatives.
If LLM abilities stagnate around the current level it's not even out of the question that LLMs will negatively impact productivity simply because of all of the AI slop we'll have to deal with.
Hmmm. Interesting prediction. I think even on social media, the consensus is still shaky, and social media is an unalloyed bad IMHO. I think personal cars impacted our lives in a deeply negative way but most people disagree. There is really no consensus on LLMs right now, I think if they stagnate this is where the discourse will stagnate also.
More likely, like other tools, it will be possible to point to clear harms and clear benefits, and people will disagree about the overall impact.
It's honestly not that deep. If AI increases productivity, we should accept it. If it doesn't, then the hype will eventually fade out. In any case, having attachment to the craft is a bit cringe. Technological progress trumps any emotional attachment.
The IT world is waiting for a revolution. Only in order to blame that revolution for the mistakes of a few powerful people.
I would not be surprised if all this revolutionary sentiment is manufactured. That thing about "Luddites" (not a thing that will stick by the way), this nostalgic stuff, all of it.
We need to be much smarter than that and not fall for such obvious traps.
An identity is a target on your back. We don't need one. We don't need to unite to a cause, we're already amongst one of the most united kinds of workers there is, and we don't need a galvanizing identity to do it.
When I became a software engineer about two decades ago, I held a similar world view as the OP: programming is a craft, I'm an artist, a creator, a hacker.
Over the years, as I matured and the industry matured, I came to realize that corporate programming is assembly-line work but with a much bigger paycheck. You can dance around it as much as you want, but in the end, if you truly zoom out, you will realize that it's no different from an assembly line. There are managers to oversee your work and your time allocations; there is a belief that more people = more output; and everyone past your manager's manager seems to think that what you do is trivial and can be easily replaced by robots. So a software engineer who calls himself an artist is basically the same as a woodworker who works for a big furniture company, and yet insists on calling himself an artist, and referring to their work as craft, while in reality they assemble someone else's vision and product, using industry-standard tools.
And at first, I tried to resist. I held strong opinions like the OP. How come they came for MY CRAFT?! But then I realized that there is no craft. Sure, a handful of people work on really cool things. But if you look around, most companies are just plain dumb REST services with a new outfit slapped on them. There is no craft. The craft has been distilled and filtered into 3-4 popular frameworks that dictate how things should be written, and chances are if I take an engineer and drop them in another company using the same framework, they won't even notice. Craft is when you build something new and unique, not when you deploy NextJS to Vercel with shadcn/ui and look like the other 99% of new-age SaaS offerings.
So I gave up. And I mainly use AI at my $DAY_JOB. Because why not? It was mundane work before (same REST endpoints, but with different names; copying and pasting around common code blocks), and now I don't suffer that much anymore. Instead of navigating the slop that my coworkers wrote before AI, I just review what AI wrote, in small pieces, and make sure it works as expected. Clean code? Hexagonal architecture? Separation of concerns? Give me a break. These are tools for "architects" and "tech leads" to earn a pat on their shoulder and stroke their ego, so they can move to a different company, collecting a bigger paycheck, while I get stuck with their overengineered solutions.
If I want to craft, I write code in my free time when I'm not limited by corner-cutting philosophy, abusive deadlines, and (some) incompetent coworkers each with their ego spanning to the moon as if instead of building a REST service for 7 users, they are building a world transforming and life-saving device for billions (powered by web3/blockchain/AI of course).
The problem I have with this argument is that it actually is English this time.
COBOL and SQL aren't English, they're formal languages with keywords that look like English. LLMs work with informal language in a way that computers have never been able to before.
Say that to the prompt guys and their AGENT.md rules.
Formalism is way easier than whatever these guys are concocting. And true programmer bliss is live programming. Common programming is like writing sheet music and having someone else play it. Live programming is you at the instrument, tweaking each part.
Yes, natural languages are by nature ambiguous. Sometimes it's better to write a specification in code rather than in a natural language (JetBrains MPS, for example).
But in faithful adherence to some kind of uncertainty principle, LLM prompts are also not a programming language, no matter if you turn down the temperature to zero and use a specialized coding model.
They can just use programming languages as their output.
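On the temperature-zero point, here's a toy sketch (Python, with made-up logits, not any real model's API) of where the non-determinism comes from, and why zero temperature just means greedy argmax rather than anything like a formal semantics:

    import math, random

    def sample_token(logits, temperature=1.0):
        # Temperature 0: always pick the highest-scoring token (greedy).
        if temperature == 0:
            return max(range(len(logits)), key=lambda i: logits[i])
        # Otherwise: softmax with temperature, then a weighted random draw.
        scaled = [l / temperature for l in logits]
        m = max(scaled)
        weights = [math.exp(l - m) for l in scaled]
        total = sum(weights)
        probs = [w / total for w in weights]
        return random.choices(range(len(logits)), weights=probs)[0]

    logits = [2.0, 1.8, 0.5]                          # invented scores
    print([sample_token(logits) for _ in range(10)])  # varies run to run
    print([sample_token(logits, 0) for _ in range(3)])# always token 0

Even the deterministic case is just "most likely continuation", not a grammar you can reason about.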
This is also a strength. Formal languages struggle to work with concepts that cannot be precisely defined, which are especially common in the physical world.
e.g. it is difficult to write a traditional program to wash dishes, because how do you formally define a dish? You can only show examples of dishes and not-dishes. This is where informal language and neural networks shine.
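A toy sketch of that "show examples" idea, with invented features and labels; a real system would learn from pixels, but the principle of defining "dish" extensionally rather than formally is the same:

    # Nearest-neighbour over hand-labelled examples. The (roundness,
    # shininess) features and all the numbers are made up for illustration.
    EXAMPLES = [
        ((0.9, 0.8), "dish"),
        ((0.8, 0.6), "dish"),
        ((0.2, 0.1), "not-dish"),
        ((0.1, 0.7), "not-dish"),
    ]

    def classify(features):
        dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
        return min(EXAMPLES, key=lambda ex: dist(ex[0], features))[1]

    print(classify((0.85, 0.7)))  # -> "dish"

No rule for "dish" ever gets written down; the definition lives entirely in the labelled examples.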
The thing is... All those people were right. We no longer need the kinds of people we used to call programmers. There exists a new job, only semi-related, that now goes by the name programmer. I don't know how many of the original programming professionals managed to make the transition to this new profession.
John von Neumann famously questioned the value of compilers. Eventually we get the keyboard kids that have dominated computing since the early 70's in some form or another, whether in a forward-thinking way like Dan Ingalls or in an idealistic way like the gcc/Free Software crowd. In parallel to this you have people like Laurel, Sutherland, and Nelson, who live in lateral-thinking land.
The real issue is that we've been in store for a big paradigm shift in how we interact with computers for decades at this point. SketchPad let us do competent, constraint-based mathematics with images. Video games and the Logo language demonstrate the potential for programming using "kinetics." In the future we won't code with symbols; we'll dance our intent into and through the machine.
OK, but if you can't find out how to use new tools well, how good are you really as a craftsperson?
"We've always done it this way" is the path of calcification, not of a vibrant craft. And there are certainly many ways you can use LLMs to craft better things, without slop and vibecoding.
Programmer isn't a real thing, all these classes of people are made up. The biggest difference between an iPad Toddler and Dijkstra is that the toddler is much more efficient at programming.
Sure you can discover things that aren't intuitively obvious and these things may be useful, but that's more scientist than anything to do with programming.
programming + science = computer science
programming + engineering = software engineering
programming + iPad = interactive computing
programming + AI = vibe coding
Don't equate programming with software engineering when they are clearly two distinct things. This article would more accurately be called the software engineers' identity crisis.
Maybe some hobby engineers (programming + craft) might also be feeling this depending on how many external tools they already rely on.
What's really shocking is how many software engineers claim to put in Herculean effort in their code, but ship it on top (or adjacent if you have an API) of "platforms" that could scarcely be less predictable. These platforms have to work very hard to build trust, but it's all meaningless cause users are locked in anyway. When user abuse is rampant people are going to look for deus ex machina and some slimy guy will be there to sell it to them.
I'm seeing this reaction a lot from younger people (say, roughly under 25). And it's a shame this new suspicion has now translated into a prohibition on the use of dashes.
It's utterly uncommon in the kind of casual writing for which people are using AI, that's why it got noticed. Social media posts, blogs, ...
AI almost certainly picked it up mainly from typeset documents, like PDF papers.
It's also possible that some models have a tokenizing rule for recognizing faked-out em-dashes made of hyphens and turning them into real em-dash tokens.
On my own (long abandoned) blog, about 20% of (public) posts seem to contain an em dash: https://shreevatsa.wordpress.com/?s=%E2%80%94 (going by 4 pages of search results for the em dash vs 21 pages in total).
Ironically, I love using em dashes in my writing, but if I ever have to AI generate an email or summary or something, I will remove it for this exact reason.
That's simply not true, and pointlessly derogatory.
This article does not appear to be AI-written, but use of the emdash is undeniably correlated with AI writing. Your reasoning would only make sense if the emdash existed on keyboards. It's reasonable for even good writers to not know how or not care to do the extra keystrokes to type an emdash when they're just writing a blog post - that doesn't mean they have bad writing skills or don't understand grammar, as you have implied.
> That's simply not true, and pointlessly derogatory.
That same critique should first be aimed at the topmost comment, which has the same problem plus the added guilt of originating (A) a false dichotomy and (B) the derogatory tone that naturally colors later replies.
> It's reasonable for even good writers to not know how or not care
The text is true, but in context there's an implied fallacy: If X is "reasonable", it does not follow that Not-X is unreasonable.
More than enough (reasonable) real humans do add em-dashes when they write. When it comes to a long-form blog post—like this one submitted to HN—it's even more likely than usual!
> the extra keystrokes
Such as Alt + numpad 0151 on Windows, which has served me well when on that platform for... gosh, decades now.
I don't think the character is that uncommon in the output of slightly-sophisticated writers and is not hard to generate (e.g., on macOS pressing option-shift-minus generates an em-dash).
In fact, on macOS and iOS simply typing two dashes (--) gets autocorrected to an em dash. I used it heavily, which was a bit sloppy since it doesn't also insert the customary hair spaces around the em dash.
Incidentally, I turned this autocorrection off when people started associating em dashes with AI writing. I now leave them as manual double dashes--even less correct than before, but at least people are more likely to read my writing.
That's a silly take, just because they existed and were proper grammar before AI slop popularized them doesn't mean they're not statistically likely to indicate slop today, depending on the context.
What's sillier is people associating em-dashes with AI slop specifically because they are unsophisticated enough never to have learned how to use them as part of their writing, and assuming everyone else must be as poor of a writer as they are.
It's the literary equivalent of thinking someone must be a "hacker" because they have a Bash terminal open.
It doesn't really matter. Before LLMs, they were seen relatively rarely; after LLMs, they are commonly seen in AI-written text. It's not unreasonable for people to associate them with AI writing.
You're overthinking it. LLMs exploded the prevalence of em-dashes. That doesn't mean you should assume any instance of an em-dash means LLM content, but it's a reasonable heuristic at the moment.
I dunno, I feel like the base rate fallacy [0] could easily become a factor... Especially if we don't even have an idea what the false-positive or false-negative rates are yet, let alone true prevalence.
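To see why the base rate matters here, a quick Bayes sketch with numbers invented purely for illustration:

    # P(AI | em dash) from an assumed base rate P(AI) and assumed
    # em-dash rates for AI and human text. All three inputs are made up.
    def p_ai_given_dash(p_ai, p_dash_ai, p_dash_human):
        evidence = p_ai * p_dash_ai + (1 - p_ai) * p_dash_human
        return p_ai * p_dash_ai / evidence

    # If 30% of posts were AI-written, the em dash would be fairly telling:
    print(p_ai_given_dash(0.30, 0.6, 0.1))  # ~0.72
    # At a 5% base rate, most em-dash posts would still be human:
    print(p_ai_given_dash(0.05, 0.6, 0.1))  # ~0.24

Same "evidence", wildly different conclusions, depending entirely on the base rate nobody actually knows.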
> That doesn't mean you should assume any instance of an em-dash means LLM content
No, it doesn't. But people are putting that out there, people are getting accused of using AI because they know how to use em dashes properly, and this is dumb.
People have long talked about how reading code is far more important than writing code when working as a professional SWE. LLMs have only increased the relative importance of code review. If you're not doing a detailed code review of every line your LLM generates (just like you should have always been doing while reviewing human-generated code), you're doing a bad job. Sure, it's less fun, but that's not the point. You're a professional.
If you are wasting time you may be value negative to a business. If you are value negative over the long run you should be let go.
We’re ultimately here to make money, not just pump out characters into text files.
We need to focus on architectural/system patterns and let go of code ownership in the traditional sense.
Regarding your Luddite reference, I think the cost-vs-quality debate was actually the centerpiece of that incident. Would you rather pay $100 for a T-shirt that's only marginally better than one that costs $10? I certainly would not. People are constantly evaluating cost-quality tradeoffs when making purchasing decisions. The exact ratio of the tradeoff matters. There's always a price point at which something starts (or stops) making sense.
A major problem is that we have built our society in a way such that the wrong people end up with the most power and authority.
the majority of engineers across the industry feel the same way we do and yet there's little most of us can do unless we all decide to do something together :/
It's bizarre to me that people want to blame LLMs instead of the employees themselves.
(With open source projects and slop pull requests, it's another story of course.)
Now I don't do code reviews in large teams anymore, but if I did and something like that happened, I'd allow it exactly once, otherwise I'd try to get the person fired. Barring that, I'd probably leave, as that sounds like a horrible experience.
https://www.folklore.org/Negative_2000_Lines_Of_Code.html
Recently I was looking at building a gnarly form that had some really complex interactions and data behind it. It just kept being subtly buggy in all different ways. I threw Claude at it and went down so many rabbit holes; it was convinced there were bugs in all the different frameworks and libraries I was using, because it couldn't find the issue in the code (that it had written most of).
After a couple of days of tearing my hair out, I eventually dug in and rewrote it from first principles myself. The code afterwards was so much shorter, so much clearer, and worked a hell of a lot better (not going to say perfectly, but, well, haven't had a single issue with it since).
Now that I'm no longer at that company since a few years ago, I'm invincible. No LLM can scare me!
I have noticed Claude's extreme and obtuse reluctance to delete code, even code that it just wrote that I told it is wrong. For example, it might produce a fn:
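(The snippet itself didn't survive in the thread, so here is a hypothetical stand-in; the name foo is taken from the follow-up below:)

    def foo():
        ...  # hypothetical first attempt; not what was asked for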
And then I say, no, I actually wanted you to "foo with a frobnitz", so now we get:
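(Again hypothetical, but true to the pattern: the old function survives as dead code rather than being deleted.)

    def foo():
        ...  # no longer called by anything, but it won't delete this

    def foo_with_frobnitz():
        ...  # the version that was actually asked for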
If such authors were held responsible, as they really deserve, the problem would disappear really fast.
So the problem that you outlined is rather social, and not the LLMs per se (even though they very often do produce shitty code).
If it happens a second time? A stern talk from their manager.
A third time? PIP or fired.
Let your manager be the bad guy. That's part of what they're for.
Your manager won't do that? Then your team is broken in a way you can't fix. Appeal to their manager, first, and if that fails put your resume on the street.
In my experience, the stern talk would probably go to you, for making the problem visible. The manager wouldn't want their manager to hear of any problems in the team. Makes them look bad, and probably lose on bonuses.
Happened to me often enough. What you described I would call a lucky exception.
> Your manager won't do that? Then your team is broken in a way you can't fix.
If you apply this standard, then most teams are broken.
After you've made your colleagues upset by submitting crappy code for review, you start to pay attention.
> LLM-written ones are almost entirely additive,
Unless you noticed that code has to be removed, and you instruct the LLM to do so.
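If you want to apply that litmus test yourself, the delta is one command away (branch names here are placeholders):

    git diff --shortstat main...HEAD
    # e.g. "38 files changed, 5012 insertions(+), 37 deletions(-)"

A lopsided insertions-to-deletions ratio isn't proof of anything, but it's a cheap first signal before you open the diff.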
I don't think LLMs really change the dynamics here. "Good programmers" will still submit good code, easy for their colleagues to review, whether it was written with the help of an LLM or not.
If the only thing keeping you from submitting crappy code is an emotional response from coworkers, you are not a "good programmer", no matter what you instruct your LLM.
AI-generated READMEs are usually verbose and include things like "how to run a virtual env for python".
Can't be any compilation errors in a README, no need to worry about bugs. And if they're long and boring enough, no one will ever read them.
AI generated READMEs = free metrics bonus points, for the performance reviews :-)
The sane option is to join the cult. Just accept every pull request. Git blame won't show your name anyways. If CEOs want you to use AI, then tell AIs to do your review, even better.
This first became salient to me when I saw posts about open-source developers who maintain critical infrastructure living hand to mouth. Then the day-in-the-life of a software engineer working in a coffee shop. Then the bootcamps and the just-learn-to-code movement. Then the leetcode grinders. Then developers living in cars in SF due to lack of affordable housing. Now it is developers vibe coding themselves out of a job.
The issue is and will always be that developers are not true professionals. The standards are loosely enforced and we do a poor job of controlling who comes in and out of the industry. There are no ethics codes, skillsets are arbitrary, and we don't have any representation. Worse yet, we bought into this egocentric mindset where abuses to workers and customers are overlooked.
This makes no sense to me. Lawyers have bar associations, doctors have medical associations, coders have existential angst.
Now the bosses are like automate your way out of a job or you will lose your job.
I always ask myself, in what other "profession" would its members be so hostile to their own interests?
Someone who finished a bootcamp might be able to write a simple program in Python, but that doesn't make them a software engineer.
I've said this out loud before and have gotten told I'm an elitist, that my degree doesn't make me better at software than those without one. That majoring in computer science teaches you only esoteric knowledge that can't be applied in a "real job".
On the other hand, the industry being less strict about degrees can be considered a positive. There definitely do exist extremely talented self-taught software engineers that have made a great career for themselves.
But I definitely agree with the need for some sort of standard. I don't care if some bootcamper gets a job at the latest "AI on the blockchain as a service" unicorn startup, good for them. I'd rather have people with formal degrees work on something like the Therac-25, though.
Maybe coders can see themselves as teachers to the machine. Either they teach character by character, or vibe idea by vibe idea, or anything in between.
Most code is not like that. Most code I want to get something done, and so I achieve something quite a bit below that bar. But some things I get to write in that way, and it is very rewarding to do so. It's my favorite code to write by a mile.
Back to LLMs - I find it is both easier than ever and harder than ever to write code in that mode. Easier than ever because, if I can actually get and stay in that mode psychologically, I can get the result I want faster, and the bar is higher. Even though I am able to write MUCH better code than an LLM is, I can write even better code with LLM assistance.
But it is harder than ever to get into that mode and stay in that mode. It is so easy to just skim LLM-generated code, and it looks good and it works. But it's bad code, maybe just a little bit at first, but it gets worse and worse the more you let through. Heck, sometimes it just starts out as not-excellent code, but every time you accept it without enough diligence the next output is worse. And by the time you notice it's often too late: you've slopped yourself, while also failing to produce a human expert in the code that's been written.
Big-picture coding is in my hands, but AI is good at filling in the logic for functions and helping out with other small things.
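Concretely, that division of labour looks something like this for me; the function below is an invented example of the kind of small, fully specified body I'll let a model draft and then read line by line:

    def dedupe_keep_order(items):
        """Remove duplicates from items, keeping the first occurrence of each."""
        # Small, well-specified logic like this is model territory;
        # deciding where it fits in the system is not.
        seen = set()
        result = []
        for item in items:
            if item not in seen:
                seen.add(item)
                result.append(item)
        return result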
So funny to read people attacking the author with criticism unrelated to the essay's message.
My coworkers that are in love with this new world are producing complete AI slop and still take ages to complete tasks. Meanwhile I can finally play to my strengths, as I actually know software architecture, can ask the LLM to consider important corner cases, and so on.
Plus, I am naturally good at context management. Being neurodivergent has given me decades of practice in working with entities that have a different way of thinking than my own. I have more mechanical empathy for the LLM because I don't confuse it for a human. My coworkers meanwhile get super frustrated that the LLM cannot read their minds.
That said, LLMs are getting better. My advantage will not last. And the more AI slop gets produced the more we need LLMs to cope with all the AI slop in our code bases. A vicious cycle. No one will actually know what the code does. Soon my job will mostly consist of praying to the machine gods.
I don't think anyone is complaining about that too much. I wonder how many people there are like you, where we don't get much data. If people don't complain about it, we generally don't hear about it, because they're just quietly moving on with their work.
Not to be confused with the AI hypesters who are loudly touting the benefits with dubious claims, of course (:
Though I don't think this is at play here. Maybe a bit, but seeing how my coworkers prompt, there is an objective difference. I will spend half an hour writing a good prompt and revise the implementation plan with the LLM multiple times before I allow it to even start doing anything, while my coworkers just write "fix this" and wonder why the stupid AI can't read their minds.
I am producing AI slop as well, just hopefully a bit less. Obviously hand crafted code is still much better but my boss wants me to use "AI" so I do as I am told.
To solve problems. Coding is the means to an end, not the end itself.
> careful configuration of our editor, tinkering with dot files, and dev environments
That may be fun for you, but it doesn’t add value. It’s accidental complexity that I am happy to delegate.
Who else becomes the go-to person for modifying build scripts?
The amount of people I know who have no idea how to work with Git after decades in the field using it is pretty amazing. It's not helpful for everyone else when you're the one they're delegating their merge conflict bullshit to because they've never bothered to learn anything about the tools they're using.
Lawyers spend literally hundreds of hours doing just that. Well, their paralegals do.
Git is a legitimately amazing tool, but it can't magically make version control free. You still have to think because ultimately software can't decide which stuff is right and which is wrong.
Meanwhile this is in a discussion about tools which people spend incalculable amounts of hours tuning, for reference. The number of articles on Hacker News about how people have tuned their LLM setups is... grand to say the least.
Maybe Git is too complicated for hobby users, because it has a steep learning curve. But after two weeks of using it you know enough to handle things, so it shouldn't be a problem in any professional environment.
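For reference, the merge-conflict workflow people keep delegating really is just a handful of commands (file and branch names here are placeholders):

    git merge feature-branch   # stops with: CONFLICT (content): Merge conflict in app.py
    git status                 # lists the conflicted files
    # open app.py, pick the right side of the <<<<<<< / ======= / >>>>>>> markers
    git add app.py             # mark it resolved
    git merge --continue       # finish the merge

The hard part is deciding which lines are right, and no tool can do that thinking for you.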
100% this. I think a lot of the people who are angry at AI coding for them are "code calligraphers" who care more about the form of the thing they're making than the problem that it solves. I can't see how someone who's primarily solution-focused would shed a tear at AI coding for them.
You can also solve problems as a local handyman but that doesn’t pad the 401K quite as well as a career in software.
I feel like there’s a lot of tech-fetishist right now on the “if you don’t deeply love to write code then just leave!” train without somehow realizing that most of us have our jobs because we need to pay bills, not because it’s our burning passion.
For evidence towards the compulsion argument, look at the existence of FOSS software. Or videogame modding. Or all the other freely available software in existence. None of that is made by people who made the rational decision of "software development is a lucrative field that will pay me a comfortable salary, thus I should study software development". It's all made by people for whom there is no alternative but to build.
I would claim that I love coding quite a lot. The problem is rather that my bosses and colleagues don't care about what I love about it. What's appreciated is implementing tasks fast with shitty code, rather than treating the fact that tasks are easy to implement and the code is really fast as strong evidence that the abstractions were well-chosen.
Thus, I believe that people who just do it for the money have it easier in the "programming industry" than programmers who really love programming, and are thus a big annoyance to managers.
I thus really wonder why companies talk all the time about "love for programming" instead of "love for paying the bills" and "love for implementing tasks fast with shitty code", which would give them people who are a much better culture fit for their real organizational processes.
If I order something to be delivered, I don't care what model of car the delivery company uses. Much less what kind of settings they have for the carburetor needles or what kind of oil they're using. Sure, somebody somewhere might have to care about this.
That's also how people like me see programming. If the code delivers what we need, then great. Leave it be like that. There are more interesting problems to solve, no need to mess with a solution which is working well.
People who realize this care about their oil type and what tires they put on. People who don't pay for it later, when that crash does happen and they don't know how to recover; cue the war room, etc...
Even if you're not dogfooding your own software, if you do not take care of it properly, the cost of changes will climb up.
How do you mean? If the software works, then it's done. There is no maintenance and it will continue working like that for decades. It doesn't have corrosion and moving parts like a car. Businesses make sure not to touch it or the systems it is depending on.
Then there’s the fact that the user’s needs fluctuate. Imagine having to pay for a whole other piece of software because the current code is spaghetti, full of hardcoded values and magic constants. It worked, but now you want a slight adjustment, and that can no longer be made unless you’re willing to rewrite the whole thing (and pay for it). That would be like having to buy a whole new car because you moved to the house next door, as the car is hardwired to move only between your old place and where you work.
Sure, if you test it and see that there is no issue with updating, then you can update if you want. But neither the OS or the hardware or anything else should get any priority over the business-crucial software you are running. Even with hardware failures, the better option is to get older hardware for replacement if newer hardware has compatibility issues.
Oh, I wouldn't say that. The hacker culture of the 1970s from which the word hacker originated often poked fun at incurious corporate programmers and IIRC even Edsger Dijkstra wrote a fair bit of acerbic comments about them and their disinterest in the craft and science of computing.
We’re 50 years past that now. We’re in the era of boot camps. I feel semi confident saying “most of us” meaning the current developer work force are here for well paying jobs.
Don’t get me wrong I like software development! I enjoy my work. And I think I’d probably like it better than most things I’d otherwise be doing.
But what I’ve been getting at is that I enjoy it for the solving problems part. The actual writing of code itself for me just happens to be the best way to enjoy problem solving while making good money that enables a comfortable life.
To put it another way: if being a SWE paid a poverty wage, I would not be living in a trailer doing this for my love of coding. I would go be a different kind of engineer.
I think bootcamp era was a decade ago and we're past it now. Not long ago I saw something on here about how a lot of them are closing down and attendance is dropping for the ones still open - likely because of LLMs.
Secondly, you are very blind if you don’t see that the AI making your job “easier” is close to replacing you entirely, if you don’t also have a deep understanding of the code produced. What’s to stop the Project Manager from vibe coding you out of the loop entirely?
> Secondly, you are very blind if you don’t see that the AI making your job “easier” is close to replacing you entirely, if you don’t also have a deep understanding of the code produced.
Brother don’t patronize me. I’m a senior engineer I’m not yeeting vibe code I don’t understand into prod.
I also understand the possibility of all of this potentially devaluing my labor or even wholesale taking my job.
What would you like me to do about that? Is me refusing to use the tools going to change that possibility?
Have yet to hear what else we should be doing about this. The hackernews answer appears to be some combination of petulance + burying head in the sand.
It’s more of a funeral, a collective expression of grief over a great, painful loss. An obituary for a glorious, short time in history where it was possible to combine a specific kind of intelligence, creativity, discipline, passion and values and be well compensated for it. A time when the ability to solve problems and solve them well had value. Not just being better at taking credit than other people.
It was wonderful.
I know you don’t care. So just go to some other forum where you don’t have to endure the whining of us who have lost something that was important to us.
I come here to learn, discuss, and frankly, to hang onto a good life as long as I can have it.
The collective whinging in every AI topic is both annoying and self-defeating.
No brother. You are the one being annoyed by it, because you are the one doing nothing about it.
>What would you like me to do about that? Is me refusing to use the tools going to change that possibility?
What I know is that I refused 'em out of principle, and it turns out I'm doing fine. I also know for certain that had I not refused them, I would not be doing fine.
>to hang onto a good life as long as I can have it.
Trick question: do you think you deserve a good life?
What if there isn't enough good life for everyone, do you deserve it more than others?
Than which ones then?
>collective whinging
And this is why I think you don't.
The moment you began to perceive mass dissent as "collective whinging" was the moment the totalitarian singularity won you over.
And then it's an entirely different conversation, conducted by entirely different means of expression.
(Given how massively widespread piracy was back then, programming looked rather like a good way to do hard work for free.)
Money matters, but coders who were drawn into the field purely by money and are personally detached from the substance of the job is an unknown species for me.
"You can also solve problems as a local handyman"
That is NOT the same sort of talent. My fingers are clumsy; my mind is not.
Additionally, in many countries being a developer means being an office worker like everyone else; there are no SV-lottery-level salaries.
In fact, those of us who would rather stay programmers beyond 30 years old are usually seen as failures from society's point of view: where is our hunger for a career and climbing up the ladder?
Now the whole set of SaaS products, with low code integrations, that were already a bit depressing from programmer point of view, are getting AI agents as well.
It feels like coding as in the old days is increasingly being left for hobby coding, or a few selected ones working on industry infrastructure products/frameworks.
> That is NOT the same sort of talent. My fingers are clumsy; my mind is not.
if handyman work was paying $600/hr your fingers would un-clums themselves reaaaaaaly fast :)
I don't believe that. When it comes to motoric skills, including dancing etc., I am probably in the lowest quintile of the population.
Of course, I could become somewhat better by spending crazy amounts of time on training, but I would still be non-competitive even in comparison with an average person.
OTOH I am pretty good at writing prose/commentary, even though it is not a particularly lucrative activity, to the degree of being a fairly known author in Czechia. My tenth book is just out.
Talents are weird and seem to have a mind of their own. I never planned to become an author, but something inside just wanted out. My first book was published just a few days shy of my 40th birthday, so not a "youthful experiment" by any means.
It feels like we’re all going to have to have a reinvention or two ahead of us.
In fact, I usually hate writing code at my day job, because it is boring stuff 20 out of 26 sprints.
I don't think it is. Labeling passion and love for your work "tech fetishism", is spiritually bankrupt. Mind you we're in general here not talking about people working in a mine to survive, which is a different story.
But people who do have a choice in their career, doing something they have no love for solely to add more zeros to their bank account? That is the fetish, that is someone who has himself become an automaton. It's no surprise they seem to take no issues with LLMs because they're already living like one. Like how devoid of curiosity do you have to be to do something half your waking life that you don't appreciate if you're very likely someone who has the freedom to choose?
Do you understand work-life balance? I get paid to do the job, I satisfy my curiosities in my free-time.
> But people who do have a choice in their career, doing something they have no love for solely to add more zeros to their bank account?
Because I doubt finding a well paying job that you love is something that is achievable in our society, at least not for most people.
IMO, the real fetishization here is "work is something more than a way to get paid" that's a corporate propaganda I'm not falling for.
Which is why I stressed twice, including in the part you chose to quote, that I am talking about people who can achieve that. If you have to take care of your sick grandmother, you don't need to feel addressed.
But if you did have the resources to choose a career, like many people who comment here, and you ended up a software developer completely devoid of passion for the craft you're living like a Severance character. You don't get to blame the big evil corporations for a lack of dedication to a craft. You don't need to work for one to be a gainfully employed programmer, and even if you do and end up on a deadbeat project, you can still love what you do.
This complete indifference to what you produce, complete alienation from work, voluntarily chosen is a diseased attitude.
Because in a lot of jobs where you (have to) solve problems, the actual problems to solve are rather "political". So, if you are not good at office politics or you are not a good diplomat, software is often a much better choice.
It's interesting, because to become a plumber, you pretty much need a plumber parent or friend to get you interested in the trade and show you the ropes. Meanwhile, software engineering is closer to the universal childhood dream of "I want to become an astronaut" or "I want to be a pop star", except more attainable. It's very commoditized by now, so if you're looking for that old-school hacker ethos, you're gonna be disappointed.
I'm not disagreeing with you per se, but those statements are subjective, not an objective truth. Lots of people fundamentally enjoy the process of coding, and would keep doing it even in a hypothetical world with no problems left to solve, or if they had UBI.
Sad to see people reduce themselves willingly to cogs inside the business machine.
May seem depressing, but the bright side is that you as an individual are then free to find joy in your work wherever you can find it... whether its in delivering high-quality code, or just collecting a paycheck.
Of course, but I said that people see themselves this way.
To be clear, personally I do not find fiddling with configs particularly exciting, but some people do.
LLMs feel like a non-deterministic compiler that transforms English into code of some sort.
I very much enjoy the end product and I also enjoy designing (not necessarily programming) a program that fits my needs, but rarely implementing, as I have issues focusing on things.
solving problems is an outcome of programming, not the purpose of programming
A contractor who prefers a specific brand of tool is wrong because the tool is a means to an end
This is what you sound like. Just because you don't understand the value of a craftsman picking and maintaining their tools doesn't mean the value isn't real.
Outcome is really the same, right? Why waste all that effort on a deep understanding of how to prepare food?
And it also seems exceedingly wasteful to boot.
I think the real problem is that it's actually increasingly difficult to defend the artisanal "no-AI" approach. I say this as a prior staff-level engineer at a big tech company who has spent the last six months growing my SaaS to ~$100k in ARR, and it never could have happened without AI. I like the kind of coding the OP is talking about too, but ultimately I'm getting paid to solve a problem for my customers. Getting too attached to the knives is missing the point.
I guess I don't understand posts like this IF you think you can do it better without LLMs. I mean, if using AI makes you miserable because you love the craft of programming, AND you think using AI is a net loss, then just...don't use it?
But I think the problem here that all these posts are speaking to is that it's really hard to compete without using AI. And I sympathize, genuinely. But also...are we knife enthusiasts or chefs?
I personally tire of people acting like it's some saving grace that doubles/triples/100x your productivity and not a tool that may give you 10-20% uplift just like any other tool
The chefs coming up with recipes and food scientists doing the pre-packaging will do fine and are still needed. The people making the fast food machine will also do well for themselves. The rest of us fast food workers, well, not so much...
And you can see it coming so there is plenty of time to prepare.
They will never be able to understand this, unfortunately.
Sure the customer still gets fed but it's a far inferior product... And is that chef really cheffing?
Nevertheless there's still a luxury market for hand prepared food.
Perhaps software will evolve the same way
...
> doesn't add value
What about intrinsic value? So many programmers on HN seem to just want to be MBAs in their heart of hearts
Intrinsic value is great, where achievable. Companies do not care at all about intrinsic value. I take pride in my work and my craft to the extent I am allowed to, but the reality is that those of us who can't adapt to the business's desires will be made obsolete and cut loose, regardless of whatever values we hold.
I consider myself an engineer — a problem solver. Like you said, code is just the means to solve the problems put before me.
I’m just as content if solving the problem turns out to be a process change or user education instead of a code commit.
I have no fetish for my terminal window or IDE.
Good problem solvers... solve problems. The technological environment will never devalue their skills. It’s only those who rest on their laurels who have this issue.
I really enjoy programming and like the author said, it's my hobby.
On some level I kind of resent the fact that I don't really get to do my hobby for work any more. It's something fundamentally different now.
The author overlooks a core motivation of AI here: to centralize the high-skill high-cost “creative” workers into just the companies that design AIs, so that every other business in the world can fire their creative workers and go back to having industrial cogs that do what they’re told instead of coming up with ‘improvements’ that impact profits. It’s not that the companies are reaching for something complex and nebulous. It’s that companies are being told “AI lets you eject your complex and nebulous creative workers”, which is a vast reduction in nearly everyone’s business complexity. Put in the terms of a classic story, “The Wizard of Oz”, no one bothers to look behind the curtain because everything is easier for them — and if there’s one constant across both people and corporations, it’s the willingness to disregard long-term concerns for short-term improvements so long as someone else has to pay the tradeoff.
When I started programming for Corporate™ back in 1995, it was a wildly different career than what it has become. Say what you want about the lunatics running the asylum, but we liked it that way. Engineering knew their audience, knew the tech stack, knew what was going on in "the industry", and ultimately called the shots.
Your code was your private sandbox. Want to rewrite it every other release? Go for it. Like to put your curly braces on a new line? Like TABs (good for you)? Go for it. It's your code, you own it. (You break it, you fix it.)
No unit tests (we called that parameter checking). No code reviews (well, nothing formal — often, time was spent in co-workers' offices talking over approaches, white-boarding APIs…). Often, if a bug was discovered or known, you just fixed it. There may have been a formal process beginning to take shape, but to the lunatics, it was optional.
You can imagine how management felt — having to essentially just trust the devs to deliver.
In the end management won, of course.
When I am asked if I am sorry that I left Apple, I have to tell people, no. I miss working at Apple in the 90's, but that Apple was never coming back. And I hate to say it, but I suspect the industry itself will never return to those "cowboy coding" days. It was fun while it lasted.
Project management was a 40 foot Gantt chart printed out on laser printer paper and taped to the wall. The sweet sound of waterfall.
There was a difference between a sysadmin and a programmer. Now I'm expected to be my own sysadmin-ops guy while also delivering features. While I worked on my systems chops for fun on the side, I purposely avoided it on the work side, because I don't enjoy how bad vendor documentation, training, and the like can be in the real world of Corporate America.
I believe that cowboy coding might still be practiced in small companies, or in small corporate pockets, where the number of engineers doesn't need to scale.
I don't think the industry will return to it, but I suspect there will be isolated environments for cowboys. When I was at WhatsApp (2011-2019), we were pretty far on the cowboy side of the spectrum... although I suspect it's different now.
IMHO, what's appropriate depends on how expensive errors are to detect before production, and how expensive errors are when detected after production. I lean into reducing the cost to fix errors rather than trying to detect errors earlier. OTOH, I do try not to make embarrassing errors, so I try to test for things that are reasonable to test for.
I also agree with comments on this thread stating that problem solving should be the focus and not the code.
However, my view is that our ability to solve problems which require a specific type of deep thought will diminish over time as we allow AI to do more of this type of thinking.
Purely asking for a feature is not “problem solving”.
The fact of the matter is that a lot of the development work out there is just boilerplate: build scripts, bootstrapping and configuration, defining mappings for Web APIs and ORMs (or any type of DB interaction), as well as dealing with endless build-chain errors and stuff I honestly think is bullshit.
When I see a TypeScript error that's borderline incomprehensible, sometimes I just want to turn to an LLM (or any tool; if there were enough formalized methods and automatic fixes/refactorings to make LLMs irrelevant, I'd be glad!) and tell it: "Here's the intent, make it work."
It's fun to me to dig into the code when I want to reason about the problem space and the domain, but NOT very much so when I have to do menial plumbing. Or work with underdocumented code by people long gone. Or work on crappy workarounds and bandaids on top of bandaids that were pushed out the door due to looming deadlines, sometimes by myself 2 months prior. Or work with a bad pattern in the codebase, knowing that refactoring it might take changes in 30 places that I don't have enough time for right now. LLMs make some of those issues dissolve, or at least give them so little friction that they become solvable.
Assumption: when I use LLMs, I treat the output like any other code, i.e. it must compile, it must be readable, make sense, and actually work.
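To make that concrete, here's a made-up snippet (hypothetical, not from any real codebase) showing the kind of borderline-incomprehensible error I mean, plus the "here's the intent, make it work" fix:

    // TS2322: Type 'string | undefined' is not assignable to type 'string'.
    // The map lookup may miss, and 'tags' is optional on top of that.
    const byId = new Map<string, { id: string; tags?: string[] }>();

    function firstTag(id: string): string {
      return byId.get(id)?.tags?.[0]; // <- the error points here
    }

    // The intent-preserving fix: make the fallback explicit.
    function firstTagFixed(id: string): string {
      return byId.get(id)?.tags?.[0] ?? "untagged";
    }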
It'd be neat to have a big user story catalog/map, which tracks what various services are able to help with.
I was a kid in NE43 instead of TFA's Building 26 across the street - with Lisp Machines and 1980s MIT AI's "Programmer's Apprentice" dreams. Years ago I gave up on ever having a "this... doesn't suck" dev env, on being able to "dance code". We've had such a badly crippling research and industrial policy, and profession... "not in my lifetime" I thought. Knock on wood, I'm so happy for this chance at being wrong. And also, for "let's just imagine for a moment, ignoring the utterly absurd resources it would take to create, science education content that wasn't a wretched disaster... what might that look like?" - here too it's LLMs, or no chance at all.
I wonder, though, if the space is mature enough for such a map, or if it would become too generic to say anything meaningful.
We associate authoritative experts with a) quick and b) broad answers. It's like when we're listening to a radio show and they patch in "Dr. So N. So," an expert in Whatever from Academia Forever U. They seem to know their stuff because a) they don't say "I don't know, let me get back to you after I've looked into that" and b) they can share a breadth of associated validations.
LLMs simulate this experience by giving broadish, confident answers very quickly. We have been trained by life's many experiences to trust these types of answers.
I forwarded your article to my son the dev, since your post captured the magic of being a programmer so well.
And yes Levy’s book Hackers is most excellent.
Maybe it is
I don't think I'm sticking my head in the sand - an advanced enough intelligence could absolutely take over programming tasks - but I also think that such an intelligence would be able to take over _every_ thought-related task. And that may not be a bad thing! Although the nature of our economy would have to change quite a bit to accommodate it.
I might be wrong: Doug Hofstadter, who is way, way smarter than me, once predicted that no machine would ever beat a human at chess unless it was the type of machine that said "I'm bored of chess now, I would prefer to talk about poetry". Maybe coding can be distilled to a set of heuristics the way chess programs have (I don't think so, but maybe).
Whether we're right or wrong, there's not much we can do about it except continue to learn.
Most of my programming skills are kept up to date on side projects; thankfully I can manage the time to do them between family and friends.
Thanks for reminding me about Rational Rose though! That was a nostalgia trip
For me, at least, this has not been the case. If I leave the creative puzzle-solving to the machine, it's gonna get creative alright, and create me a mess to clean up. Whether this will be true in the future, hard to say. But, for now, I am happy to let the machines write all the React code I don't feel like writing while I think about other things.
Additionally, as an aside, I already don't think coding is always a craft. I think we want it to be one because it gives us the aura of craftspeople. We want to imagine ourselves bent over a hunk of marble, carving a masterpiece in our own way, in our own time. And for some of us, that is true. For most programmers in human history, though, they were already slinging slop before anybody had coined the term. Where is the inherent dignity and human spirit on display in the internal admin tool at a second-tier insurance company? Certainly, there is business value there, but it doesn't require a Michelangelo to make something that takes in a PDF and spits out a slightly changed PDF.
Most code is already industrial code, which is precisely the opposite of code as craft. We are dissociated from the code we write; the company owns it, not us, which is by definition the opposite of a craftsman and the craft mode of production. I think AI is putting a finer, sharper point on this, but it was already there and has been since the beginning of the field.
Social media already reduced our attention spans to that of goldfish, open offices made any sort of deep meaningful work impossible.
I hope this madness dies before it devours us.
Hand-coding can continue, just like knitting co-exists with machine looms, but it need not ultimately maintain a grip on the software production process.
It is better to come to terms with this reality sooner rather than later in my opinion.
It has also been responsible for predicting revolutions which never materialized. 3D printing would make some kinds of manufacturing obsolete, computers would make about half the world's jobs obsolete, etc. etc.
Hand coding can be the knitting to the loom, or it can be industrialized plastic injection molding to 3D printing. How do you know? That distinction is not a detail--it's the whole point.
It's survivorship bias to only look at horses, cars, calculators, and whatever other real job market shifting technologies occurred in the past and assume that's how it always happens. You have to include all predictions which never panned out.
As human beings we just tend not to do that.
[EDIT: this being Pedantry News let me get ahead of an inevitable reply: 3D printing is used industrially, and it does have tremendous value. It enabled new ways of working, it grew the economy, and in some cases yes it even replaced processes which used to depend on injection molding. But by and large, the original predictions of "out with the old, in with the new" did not pan out. It was not the automobile to the horse and buggy. It was mostly additive, complementary, and turned out to have different use cases. That's the distinction.]
One could have made a reasonable remark in the past about how injection molding is dramatically faster than 3D printing (it applies material everywhere, all at once), scales better for large parts, et cetera. This isn't really true for what I'm calling hand-coding.
Obviously nothing about the future can be known for certain... but there are obvious trends that need not stop at software engineering.
How would you formulate this verifiably? Wanna take it to longbets.org?
Hand-coding is no longer "the future"?
Did an AI write your post or did you "hand write it"?
Code needs to be simple and maintainable and do what it needs to do. Autocomplete wasn't a huge time saver because writing code wasn't the bottleneck then, and it definitely is not the bottleneck now. How much you rely on an LLM won't necessarily change the quality or speed of what you produce. Especially if you pretend you're just doing "superior prompting with no hand coding involved".
LLMs are awesome but the IDE didn't replace the console text editor, even if it's popular.
And yet after 3 decades in the industry I can tell you this fantasy exists only in snarky HN comments.
> Hand-coding is no longer "the future"?
Hand-coding is 100% not the future; there are already teams that absolutely do not hand-code anything anymore (I help with one of them, which used to have 19 "hand-coders" :) ). The typing will for sure get phased out. It is quite insane that it took "AI" to make people realize how silly and wasteful it is to type characters into IDEs/editors. The sooner you see this clearly, the better it will be for your career.
> How much you rely on an LLM won't necessarily change the quality or speed of what you produce.
If it doesn't, you need to spend more time and learn, and learn, and learn more. 4/6/8 terminals at a time doing all sorts of things for you, etc. etc. :)
I've written a ton of code in my life, and while I've been a successful startup CTO, outside of that I've always stayed in IC-level roles (I'm in one right now, in addition to hobby coding): data structures and pipelines, keep it simple, all that stuff that makes a thing work and stay maintainable.
But here is the thing: writing code isn't my identity. Being a programmer, vim vs emacs, mechanical keyboards, RTFM noob, pure functions, serverless, leetcode, cargo culting, complexity merchants, resume-driven dev, early semantic CSS lunacy: these are things outside of me.
I have explored all of these things, had them be part of my life for better or worse, but they aren't who I am.
I am a guy born with a bunch of heart defects who is happy to be here and trying new stuff, I want to explore in space and abstraction through the short slice of time I've got.
I want to figure stuff out and make things and sometimes that's with a keyboard and sometimes that's with a hammer.
I think there are a lot of societal status issues (devs were mostly low social status until The Social Network came out) and personal identity issues.
I've seen it for 40 years: anything tied to a person's identity is basically a thing they can't be honest about, can't update their priors on, can't reason about.
And people who feel secure and appreciated don't give much grace to those who don't, a lot of callous people out there, in the dev community too.
I don't know why people are so fast to narrow the scope of who they are.
Humans emit meaning like stars emit photons.
The natural world would go on without us, but as far as we have empirically observed, we make the maximally complex, multimodally coherent meaning of the universe.
We are each like a unique write head in the random walk of giving the universe meaning.
There are a ton of issues, from network resilience to maximizing the random meaning-generation walk, where AI and consolidation are extremely dangerous. As far as new stuff in the pipeline goes, I think it's AI and artificial wombs that carry the greatest risk of narrowing the scope of human discovery and unique meaning expansion to a catastrophic point.
But so many of these arguments are just post-hoc rationalizations, poorly justifying what at root is a loss of self-identity. We were always in the business of automating jobs out from under people; this is very weak tea and crocodile tears.
The simple fact is, all our tools should allow us to have materially more comfortable and free lives. The AI isn't the problem; it's the fact that devs didn't understand that tech is best when empowering people to think and connect better and have more freedom and self-determination with their time.
If that isn't happening, it's not the code's fault; it's the fault of the network architecture of our current human power structures.
Nowadays many enterprise projects have become exercises in plugging SaaS products together via low-code/no-code integrations.
A SaaS product for the CMS, another one for assets, another for ecommerce and payments, another for sending emails, another for marketing, some edge product for hosting the frontend, finally some no code tools to integrate everything, or some serverless code hosted somewhere.
Welcome to MACH architecture (Microservices, API-first, Cloud-native, Headless).
Agents now made this even less about programming, as the integrations can be orchestrated via agents, instead of low code/no code/serverless.
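For the curious, the glue in question is usually little more than the following; a minimal sketch with hypothetical service names and endpoints, not any real vendor's API:

    // Hypothetical serverless glue: CMS publish webhook -> email SaaS.
    export async function onContentPublished(event: { title: string; url: string }) {
      // Pull the audience from the marketing SaaS...
      const res = await fetch("https://api.example-marketing.com/v1/subscribers", {
        headers: { Authorization: `Bearer ${process.env.MARKETING_API_KEY}` },
      });
      const subscribers: { email: string }[] = await res.json();

      // ...and hand delivery off to the email SaaS.
      await fetch("https://api.example-mailer.com/v1/send", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          to: subscribers.map(s => s.email),
          subject: `New: ${event.title}`,
          html: `<a href="${event.url}">${event.title}</a>`,
        }),
      });
    }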
I will never, ever go back to the time before.
I hate to suggest that the fix to LLM slop is more LLMs, but in this case it's working for me. My coworkers also seem to appreciate the gesture.
But not all programming is the same thing. We just need new names for different types of programmers. I'm sure there were farmers who lamented the advent of machines because of how it threatened their identity, their connection to the land, etc....
But I want to personally thank the farmers who just got after growing food for the rest of us.
It's like that trope of the little angel and demon sitting on the protagonist's shoulders.
"I can get more work done"
"But it's not proper work"
"Sometimes it doesn't matter if it's proper work, not everything is important"
"But you won't learn the tools"
"Tools are incidental"
"I feel like I'm not close to the craft"
"Your colleagues weren't really reading your PRs anyway"
"This isn't just another tool"
"This is just another tool"
And so on forever.
I'm starting to think that if you don't have both these opposing views swirling around in your mind, you haven't thought enough about it.
If LLM abilities stagnate around the current level it's not even out of the question that LLMs will negatively impact productivity simply because of all of the AI slop we'll have to deal with.
More likely, like other tools, it will be possible to point to clear harms and clear benefits, and people will disagree about the overall impact.
You could say that about programming languages in general. "Why are we leaving all the direct binary programming for the compilers?"
The IT world is waiting for a revolution, if only to blame that revolution for the mistakes of a few powerful people.
I would not be surprised if all this revolutionary sentiment is manufactured. That thing about "Luddites" (not a thing that will stick by the way), this nostalgic stuff, all of it.
We need to be much smarter than that and not fall for such obvious traps.
An identity is a target on your back. We don't need one. We don't need to unite to a cause, we're already amongst one of the most united kinds of workers there is, and we don't need a galvanizing identity to do it.
Over the years, as I matured and the industry matured, I came to realize that corporate programming is assembly line work, but with a much bigger paycheck. You can dance around it as much as you want, but in the end, if you truly zoom out, you will realize that it's no different from an assembly line. There are managers to oversee your work and your time allocations; there is a belief that more people = more output; and everyone past your manager's manager seems to think that what you do is trivial and can be easily replaced by robots. So a software engineer who calls himself an artist is basically the same as a woodworker who works for a big furniture company, and yet insists on calling himself an artist and referring to his work as craft, while in reality he assembles someone else's vision and product, using industry-standard tools.
And at first, I tried to resist. I held strong opinions, like the OP. How come they came for MY CRAFT?! But then I realized that there is no craft. Sure, a handful of people work on really cool things. But if you look around, most companies are just plain dumb REST services with a new outfit slapped on them. There is no craft. The craft has been distilled and filtered into 3-4 popular frameworks that dictate how things should be written, and chances are that if I take an engineer and drop them in another company using the same framework, they won't even notice. Craft is when you build something new and unique, not when you deploy NextJS to Vercel with shadcn/ui and look like the other 99% of new-age SaaS offerings.
So I gave up. And I mainly use AI at my $DAY_JOB. Because why not? It was mundane work before (same REST endpoints, but with different names; copying and pasting around common code blocks), and now I don't suffer that much anymore. Instead of navigating the slop that my coworkers wrote before AI, I just review what AI wrote, in small pieces, and make sure it works as expected. Clean code? Hexagonal architecture? Separation of concerns? Give me a break. These are tools for "architects" and "tech leads" to earn a pat on their shoulder and stroke their ego, so they can move to a different company, collecting a bigger paycheck, while I get stuck with their overengineered solutions.
If I want to craft, I write code in my free time when I'm not limited by corner-cutting philosophy, abusive deadlines, and (some) incompetent coworkers each with their ego spanning to the moon as if instead of building a REST service for 7 users, they are building a world transforming and life-saving device for billions (powered by web3/blockchain/AI of course).
</rant>
When SQL was born, some people said, "It's English! We won't need programmers anymore!"
Now we have AI prompting, and some people are saying, "It's English! We won't need programmers anymore!"
Really?
COBOL and SQL aren't English, they're formal languages with keywords that look like English. LLMs work with informal language in a way that computers have never been able to before.
Formalism is way easier than whatever these guys are concocting. And true programmer bliss is live programming. Common programming is like writing sheet music and having someone else play it. Live programming is you at the instrument, tweaking each part.
But in faithful adherence to some kind of uncertainty principle, LLM prompts are also not a programming language, even if you turn the temperature down to zero and use a specialized coding model.
They can just use programming languages as their output.
e.g. it is difficult to write a traditional program to wash dishes, because how do you formally define a dish? You can only show examples of dishes and not-dishes. This is where informal language and neural networks shine.
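A conceptual sketch of that contrast (all names here are hypothetical; the point is the shape of the two approaches):

    // The formal route: nobody can actually finish this function.
    function isDish(object: unknown): boolean {
      // ...what predicate formally separates a bowl from an ashtray?
      throw new Error("no formal definition of 'dish' exists");
    }

    // The informal route: supply labeled examples instead of a definition.
    interface LabeledImage { pixels: Uint8Array; isDish: boolean }

    interface DishClassifier {
      // Returns an estimated probability that the image shows a dish;
      // its "definition" is whatever boundary best fits the examples.
      predict(pixels: Uint8Array): number;
    }

    declare function train(examples: LabeledImage[]): DishClassifier;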
The real issue is that we've been in store for a big paradigm shift in how we interact with computers for decades at this point. SketchPad let us do competent, constraints-based mathematics with images. Video games and the Logo language demonstrate the potential for programming using "kinetics." In the future we won't code with symbols; we'll dance our intent into and through the machine.
https://www.youtube.com/watch?v=6orsmFndx_o http://www.squeakland.org/tutorials/ https://vimeo.com/27344103
"We've always done it this way" is the path of calcification, not of a vibrant craft. And there are certainly many ways you can use LLMs to craft better things, without slop and vibecoding.
This website
1. reacts well to my system preference of a dark theme in my news-reader
2. has a toggle at the top for dark theme
3. works flawlessly with DarkReader in my browser
Until I saw your comment, I didn't even know the website had a light version.
Again: What?
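For what it's worth, honoring the reader's system preference takes only a few lines; a sketch of the usual pattern (not necessarily how this particular site does it):

    // Follow the OS-level dark/light preference, and react if it changes.
    const query = window.matchMedia("(prefers-color-scheme: dark)");

    function applyTheme(dark: boolean): void {
      document.documentElement.dataset.theme = dark ? "dark" : "light";
    }

    applyTheme(query.matches);
    query.addEventListener("change", e => applyTheme(e.matches));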
Sure, you can discover things that aren't intuitively obvious, and those things may be useful, but that's more science than anything to do with programming.

programming + science = computer science
programming + engineering = software engineering
programming + iPad = interactive computing
programming + AI = vibe coding

Don't equate programming with software engineering when they are clearly two distinct things. This article would more accurately be called the software engineer's identity crisis. Maybe some hobby engineers (programming + craft) might also be feeling this, depending on how many external tools they already rely on.

What's really shocking is how many software engineers claim to put Herculean effort into their code, but ship it on top of (or adjacent to, if you have an API) "platforms" that could scarcely be less predictable. These platforms have to work very hard to build trust, but it's all meaningless because users are locked in anyway. When user abuse is rampant, people are going to look for a deus ex machina, and some slimy guy will be there to sell it to them.
The author sounds like a scribe meditating on the arrival of the printing press.
Also in the footer: "Everything on this website—emdash and all—is created by a human."
Three hyphens---it looks good! When I use three hyphens, it's like I dropped three fast rounds out of a magazine. It demands attention.
AI almost certainly picked it up mainly from typeset documents, like PDF papers.
It's also possible that some models have a tokenizing rule for recognizing faked-out em-dashes made of hyphens and turning them into real em-dash tokens.
On my own (long abandoned) blog, about 20% of (public) posts seem to contain an em dash: https://shreevatsa.wordpress.com/?s=%E2%80%94 (going by 4 pages of search results for the em dash vs 21 pages in total).
This article does not appear to be AI-written, but use of the em dash is undeniably correlated with AI writing. Your reasoning would only make sense if the em dash existed on keyboards. It's reasonable for even good writers not to know how, or not to care, to do the extra keystrokes to type an em dash when they're just writing a blog post - that doesn't mean they have bad writing skills or don't understand grammar, as you have implied.
That same critique should first be aimed at the topmost comment, which has the same problem plus the added guilt of originating (A) a false dichotomy and (B) the derogatory tone that naturally colors later replies.
> It's reasonable for even good writers to not know how or not care
The text is true, but in context there's an implied fallacy: If X is "reasonable", it does not follow that Not-X is unreasonable.
More than enough (reasonable) real humans do add em-dashes when they write. When it comes to a long-form blog post—like this one submitted to HN—it's even more likely than usual!
> the extra keystrokes
Such as alt + numpad 0151 on Windows, which has served me well when on that platform for... gosh, decades now.
Where do you think the training data came from?
Incidentally, I turned this autocorrection off when people started associating em dashes with AI writing. I now leave them as manual double dashes--even less correct than before, but at least people are more likely to read my writing.
It's the literary equivalent of thinking someone must be a "hacker" because they have a Bash terminal open.
I dunno, I feel like the base rate fallacy [0] could easily become a factor... Especially if we don't even have an idea what the false-positive or false-negative rates are yet, let alone true prevalence.
[0] https://en.wikipedia.org/wiki/Base_rate_fallacy
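To see how the base rate bites, here's a toy calculation with invented numbers (nobody has measured any of these rates; they exist only to show the shape of the arithmetic):

    // All three rates below are made up for illustration.
    const pAI = 0.10;             // prior: share of posts that are AI-written
    const pDashGivenAI = 0.80;    // em dashes appear in AI text this often
    const pDashGivenHuman = 0.20; // ...and in human text this often

    // Bayes' rule: P(AI | em dash)
    const pDash = pAI * pDashGivenAI + (1 - pAI) * pDashGivenHuman;
    const pAIGivenDash = (pAI * pDashGivenAI) / pDash;

    console.log(pAIGivenDash.toFixed(2)); // ~0.31: most flagged posts are human

So even with a detector that fires four times as often on AI text, most em-dash posts would still be human-written under these assumptions.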
No, it doesn't. But people are putting that out there, people are getting accused of using AI because they know how to use em dashes properly, and this is dumb.