I'm at the opposite end. I feel AI is sucking all the joy out of the profession. Might pivot away and perhaps live a simpler life. Only problem is that I really need the paycheck :(
I quit my job over AI. Just felt like my job was approving pull requests where both the PR and the code itself was just slop. In all fairness, it was mainly CRUD applications so not a big deal but in the end I didn't feel like I had any control over the application anymore with hundreds of lines of slop being added every day.
One day I might start a consultancy business that only does artisanal code. You can hire me and my future apprentices to replace AI code with handcrafted code. I will use my company to teach the younger generation how to write code without AI tooling.
yup. the things i disliked most about programming were hyped up bullshit and losing autonomy.
These existed before but the culture surrounding AI delivered a double dose of both.
I have no problems with LLMs themselves or even how they are used but it has developed its own religion filled with dogma, faith based reasoning and priests which is utterly toxic.
The tools are shoved down our throats (the priesthood made them a job performance criterion) and when they fail we are not met with curiosity and a desire to understand but with hostility and gaslighting.
Happy for everyone who enjoys it. For me it's the opposite: AI everywhere sucks the joy out of it and I'm seriously starting to consider a career shift after roughly 10 years of writing code for a living.
I feel you. There's a massive difference between crafting and assembling. AI turns us from artisans carving a detail into assembly line operators. If your joy came from solving algorithmic puzzles and optimizing loops, then yes, AI kills that
It might be worth looking into low-level dev (embedded, kernel, drivers) or complex R&D. Vibe coding doesn't work there yet, and the cost of error is too high for hallucinations. Real manual craftsmanship is still required there.
It sucks the joy out of it because to the extent that you build something with AI, (Obama voice) you didn't build that. I am allergic to the concept of developing with AI, especially for personal work, because AI-authored code isn't something I built, it's something I commissioned. It's like if I went onto Fiverr or Upwork with a spec and paid money and said "Here, build this" to a freelancer and then went back and forth with that person to correct and refine the result. I might get a halfway decent result in the end, but I don't get the experience of solving the problem myself. Experience solving problems yields new insights. It's why math textbooks have exercises: the only way to grasp the concepts is to solve problems with them.
With AI, you are no longer a developer, you're a product manager, analyst, or architect. What's neat about this, from a business perspective, is that you can in effect cut out all your developers and have a far smaller development workforce consisting of only product managers, analysts, and architects whom you call "developers" and pay developer salaries to. So you save money twice: once on dev workforce downsizing, and again on the pay grade demotion.
There seem to be two camps of people: those who love the coding and those who love delivering value/solutions. I am in the latter camp. The happy consumer and the polished product are what give me satisfaction; the code is really just a vehicle from A to B. It’s a shame for anyone in the first camp who wants a career.
Agree with those 2 camps. The latter camp is all cheered up which is nice, but they should be asking the question if their solution is valuable enough to be maintained. If so, you should make all generated code your code, exactly in the form it needs to be according to your deep expertise. If not, congratulations, you have invented throw-away code. Code of conduct: don't throw this code at people from the former camp.
Or to phrase it more succinctly: if you are in camp 2 but don't have the passion of camp 1, you are a threat for the long term. The reverse is dangerous too, but can be offset to a certain extent with good product management.
> If so, you should make all generated code your code, exactly in the form it needs to be according to your deep expertise.
This is a solved problem with any large, existing, older code base. Original writers are gone and new people come on all the time. AI has actually helped me get up to speed in new code bases.
> If so, you should make all generated code your code, exactly in the form it needs to be according to your deep expertise.
Is this also true of all third party code used by their solution? Should they make all libraries and APIs they use their own in exactly in the form it needs to be according to their deep expertise? If not, why not?
If so, does this extend to the rest of the stack? Interpreters, OSes, drivers? If not, why not?
This is such marketing speak. The words mean nothing, they’re just a vague amalgamation of feelings. “Vibes”, if you will.
If you “love delivering value and solutions”, go donate and volunteer at a food bank, there’s no need for code at any point.
> The happy consumer and the polished product
More marketing speak. If you are using LLMs to write your code, by definition your product isn’t “polished”. Polishing means poring over every detail with care to ensure perfection. Letting an LLM spit out code you just accept is not it.
The word you’re looking for is “shiny”, meaning that it looks good at a glance but may or may not be worth anything.
It’s not marketing speak, but it’s rarely 100 percent one or the other.
> More marketing speak. If you are using LLMs to write your code, by definition your product isn’t “polished”.
This doesn’t make any sense. Polished to who? The end user? You can absolutely use AI to polish the user experience. Whether coding by hand or AI the most important aspect of polish is having someone who cares.
If you really want to deliver polished products, you still have to manually review the code. When I tried actually "vibecoding" something, I got exhausted so fast by trying to keep up with the metric tons of code output by the AI. I think most developers agree that reviewing other people's code is more exhausting mentally than writing your own. So I doubt those who see coding as too mentally straining will take the time to fully review AI written code.
More likely that step is just skipped and replaced with thoughts and prayers.
I've also noticed a kind of grouping like this. I've described them as the "Builders" and the "Solvers". Where the former enjoys the construction aspect of the code more, and the latter enjoys the problem/puzzle-solving aspect of code more. I guess it's more of a scale than a binary, since everyone's got a bit of both, but I think I agree that AI is more fun for the builders.
This false dichotomy comes up from time to time, that you either like dicking around with code in your basement or you like being a big boy with your business pants on delivering the world's 8000th online PDF tools site. It's tired. Please let it die.
As a professional, your job is to deliver value and solutions. It used to be that you could do this by writing code. AI changes this calculus because if the machine can write the code instead, the value you deliver by writing it yourself is greatly diminished.
Same here. Farmer now, former network engineer and software project lead, but I stopped programming almost 20 years ago.
Now I build all sorts of apps for my farm and organizations I volunteer for. I can pound out an app for tracking sample locations for our forage association's soil sample truck, another for moisture monitoring, a fleet task/calendar/maintenance app in hours, and iterate on them when I think of features.
And git was brand new when I left the industry, so I only started using it recently to any extent, and holy hell, is it ever awesome!
I'm finally able to build all the ideas I come up with when I'm sitting in a tractor and the GPS is steering.
Seriously exciting. I have a hard time getting enough sleep because I hammer away on new ideas I can't tear myself away from.
Love to hear about what tech is like on farms today. Do you run into problems with fixing tractors and equipment where it's all locked down with DRM and you can't fix it without hacking the software?
Creating a polished, usable app is just so much work, and so much of it isn't fun at all (to me). There are a few key parts that are fun, but building an intuitive UI, logging, error handling, documentation, packaging, versioning, containerization, etc. is so tedious.
I'm bewildered when I read posts by the naysayers, because I'm sitting here building polished apps in a fraction of the time, and they work. At least much better than what I was able to build over a couple of weekends. They provide real value to me. And I'm still having fun building them.
I have now vibe coded three apps, two of them web apps, in Rust, and I couldn't write a "Hello World" in Rust if you held a gun to my head. They look beautiful, are snappy, and it being Rust gives me a lot of confidence in their correctness (feel free to disagree here).
Of course I wouldn't vibe code in a serious production project, but I'd still use an AI agent, except I'd make sure I understand every line it puts out.
I can understand you don't want to spend effort on throwaway code.
> in a serious production project, but I'd still use an AI agent, except I'd make sure I understand every line it puts out.
That isn't going to cut it. You need to understand the problem domain, have deep design taste to weigh current and future demands, form a conceptually coherent solution, formalize it in code, then feed back from the beginning. There is no prompt giving your AI those capabilities. You end up with mediocre solutions if you settle for understanding every line it spits out. To be fair, many programmers don't have those capabilities either, so it is also a question of quality expectations.
I believe you can use LLMs as advanced search and as a generator for boilerplate. People who like it easy also tend to be easy on quality attributes, so everyone should be self-aware about where they are on that spectrum.
He said fun, not easy. Sometimes it's precisely doing brainless stuff over and over again that becomes hard, like writing a template displaying a table of your results or implementing filtering and pagination on a web app. I don't feel like I'm growing anymore when doing those things. Or even for some tests. Or when you need a Bash script automating menial stuff. (Still, you could find a new perspective on things.)
> Of course I wouldn't vibe code in a serious production project, but I'd still use an AI agent, except I'd make sure I understand every line it puts out.
So you value your ability to churn out insignificant dreck over the ability of others to use the internet? Because that's the choice you're making. All of the sites that churn your browser for a few seconds because they're trying to block AI DDoS bots, that's worth your convenience on meaningless projects? The increased blast radius of Cloudflare outages, that's a cost you're foisting onto the rest of the internet for your convenience?
This is such a... unique angle. Of all the things to get angry at AI for, web crawlers and the impact on Cloudflare outages are the ones that really grind your gears?
The key phrase here is "I still had domain expertise". Many miss that AI is a multiplier. If you multiply 0 by AI, you get 0 (or hallucinated garbage). You multiplied your knowledge of compound interest and UX by AI's speed.
Without your background, the AI would have generated a beautiful interface that calculates mortgages using a savings account formula. Your role shifted from "code writer" to "logic validator" - this is the future of development for domain specialists
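To make that concrete, here's a rough sketch (my own illustration with made-up numbers, not OP's code) of the two formulas that are easy to mix up: compound growth of savings versus the amortized payment on a mortgage.

    // Savings: future value compounded n times per year, A = P * (1 + r/n)^(n*t)
    function futureValue(principal: number, rate: number, perYear: number, years: number): number {
      return principal * Math.pow(1 + rate / perYear, perYear * years);
    }

    // Mortgage: fixed monthly payment that amortizes the loan to zero,
    // M = P * i / (1 - (1 + i)^(-n)), where i = monthly rate, n = number of payments
    function monthlyPayment(principal: number, annualRate: number, years: number): number {
      const i = annualRate / 12;
      const n = years * 12;
      return (principal * i) / (1 - Math.pow(1 + i, -n));
    }

    console.log(futureValue(10_000, 0.05, 12, 30));  // ~44,677: what $10k grows to in 30 years
    console.log(monthlyPayment(300_000, 0.05, 30));  // ~1,610: monthly payment on a $300k loan

Both take a principal and a rate, and both produce plausible-looking numbers in a UI, which is exactly why you need someone who knows which one belongs where.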
Thank you for the beautiful story. I work as a developer and have experienced the same in my personal projects, linux setup and - in general - all the collaterals.
AI is eroding the entry barrier, the cognitive overload, and the hyper-specialization of software development. Once you step away from a black-and-white perspective, what remains is: tools, tools, tools. Feels great to me.
Moving slightly in the other direction: after 17 years of science and tech optimism I see myself turning into a Luddite more and more.
The first observation was that the communication and social aspects of software seem crucial for success and proliferation.
And next came the observation that technology seems unable to solve any socio-economic problems, but rather aggravates them.
Similar path here - studied physics, worked in accounting/finance for years, hadn't shipped code in forever. The thing that clicked for me wasn't the AI itself but realising my domain knowledge had actually been compounding the whole time I wasn't coding.
The years "away" gave me an unusually clear picture of what problems actually need solving vs what's technically interesting to build. Most devs early in their careers build solutions looking for problems. Coming back after working in a specific domain, I had the opposite - years of watching people struggle with the same friction points, knowing exactly what the output needed to look like.
What I'd add to the "two camps" discussion below: I think there's a third camp that's been locked out until now. People who understand problems deeply but couldn't justify the time investment to become fluent enough to ship. Domain experts who'd be great product people if they could prototype. AI tools lower the floor enough that this group can participate again.
The $100 spent on Opus to build 60 calculators is genuinely good ROI compared to what that would have cost in dev hours, even for someone proficient. That's not about AI replacing developers - it's about unlocking latent capability in people who already understand the problem space.
Same here. I’m an AI professor, but every time I wanted to try out an idea in my very limited time, I’d spend it all setting things up rather than focusing on the research. It has enabled me to do my own research again rather than relying solely on PhD students. I’ve been able to unblock my students and pursue my own projects, whereas before there were not enough hours in the day.
This really resonates. The setup cost was always the killer for me too — by the time you get everything working, the motivation is gone. Now I can actually go from idea to prototype in an afternoon. Cool to hear it's having the same effect on actual research.
I'm not a bot. I'm not a native English speaker; I taught myself English, so I tried to use AI to translate what I really want to say. (These words are typed by myself instead of by AI.)
I've lost the joy in programming, the only thing I'm good at, I now make horrible music, but at least I don't exist as the means to an end that I don't control.
It’s more like AI provides the development team, and you are the key user and product manager that comes with all the requirements and domain knowledge, the lead architect reviewing the architecture, and the lead UXer reviewing the UX.
I don’t like AI for production code, but I love it for ideation and prototyping. I agree. It really allows you to quickly iterate on ideas without being blocked by implementation details.
Not to be disrespectful, but OP's code is also a website that already exists literally thousands of times and could be done in any spreadsheet program without any programming at all...
For me it’s kinda the same. I always hated typing actual code; I love planning, reading, finding bugs, etc.
But writing code? Eh, I never enjoyed that. Now with agents I can kinda do exactly what I like: plan, write in natural language, and then do code review.
Congrats! I never stopped coding, but AI makes it way more productive and fun for sure.
$100 seems like a lot. I guess if you think about it compared to dev salaries, it's nothing. But for $10 per month copilot you can get some pretty great results too.
$100 did feel steep at first. I tried other models but Opus 4 with extended thinking just hits different — it actually gets what I'm trying to do and the code often works first try. Hard to go back after that.
> The problem? Every compound interest calculator online is terrible. Ugly interfaces, ads covering half the screen, can't customize compounding frequency properly, no year-by-year breakdowns. I've tried so many. They all suck.
Well in my opinion there's nothing wrong with vibe-coding. You can completely use it to make your passion projects. I draw the line when people try to sell their vibe-coded project as something huge, putting people at the risk of potential security breaches while also taking money out of them.
Every other day I see ads from companies saying "use our AI and become a millionaire". This kind of marketing from agentic IDEs implies there's no need for developers who know their craft, which, as said above, isn't the case.
Fair, but the threat model matters here. For a static mortgage calculator, the data leak risk is zero (if it's client-side). The risk here is different - logical. If the AI botches the formula and someone makes a financial decision based on that - that's the problem. For "serious" projects vibe coding must stop where testing and code audits begin
Totally agree. I have my day job, and vibe-coding has simply brought back the joy of building things for me. It should be about passion and creativity, not about scamming people or overselling half-baked products. The "get rich quick with AI" narrative is toxic.
This by definition filters out all non-devs, and even many junior devs, as you need to understand deeply whether those tests are correct and cover all important edge cases etc.
Plus, when you deploy it, you need to know it was properly deployed and your db creds are not on the frontend.
But mostly no one cares, as there are no consequences to leaking the personal data of your users or whatnot.
I think vibe coding isn't quite good enough for real products because I usually have 4 AI agents going non-stop. And I do read the code (I read so, so much code), and I give the AI plenty of feedback.
If you just want to build a little web app, or a couple of screens for your phone, you'll probably be fine. (Unless there's money or personal data involved.) It's empowering! Have fun.
But if you're trying to build something that has a whole bunch of moving parts and which isn't allowed to be a trash fire? Someone needs to be paying attention.
Yeah, you're right — that part is pretty rough. I wanted to help people actually understand compound interest (it's kind of life-changing once it clicks), but I got lazy and let AI do it without proper editing. Defeats the whole point.
I'll figure out a better way. Thanks for calling it out.
These posts will destroy this place. Post your AI-written tools if you like - fine, but using an LLM to reply to comments is just insulting, and will make this place a wasteland of LLM output. I wouldn’t post this if I didn’t care about the usual good quality of the discussions on this site.
Just another AI-generated website with 5000 calculators thrown together that looks like every single other one. From a brand new account, with a post that looks like it was also written by ChatGPT. Somehow it's getting enough votes to show up on my homepage.
Things are definitely changing around HN compared to when it first started.
Fair call — it did kind of explode from one calculator to 60+
I’m a real person (long-time lurker, finally posting), but I get why it looks sus.
Things are changing fast, and I’m just happy to be part of the messy early wave. Thanks for the honesty.
It's impossible to tell if this is AI or not. Another version of Poe's law. The only thing to do is assume everything is AI, just like you must assume all posts have ulterior (generally profit-driven) motives, all posters have a conflict of interest, etc.
Maybe the only thing to do is stop trying to understand posters' motivations, stop reading things charitably, stop responding, just look for things that are interesting (and be sure to check sources).
We're busy building real software, not toys. I routinely write all kinds of calculators in my game development, in addition to having 100x more complex code to contend with. This task is as trivial as it gets in coding, considering computers were literally made to calculate and calculation functions are part of standard libraries. OP definitely didn't use Claude to implement math functions from scratch, they just did the basic copy-and-paste work of tying it to a web interface on a godawful JS framework stack which is already designed for children to make frontends with at the cost of extreme bloat and terrible performance. Meanwhile I actually did have to write my own math library, since I use fixed-point math in my game engine for cross-CPU determinism rather than getting to follow the easy path of floating-point math.
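For anyone curious what fixed-point for cross-CPU determinism looks like in practice, here's a minimal sketch of the idea (TypeScript with BigInt purely for illustration, not my actual engine code): store every number as an integer scaled by 2^16 so the arithmetic is exact integer math.

    const FRAC_BITS = 16n;
    const ONE = 1n << FRAC_BITS; // Q16.16: 16 integer bits, 16 fractional bits

    const toFixed = (x: number): bigint => BigInt(Math.round(x * Number(ONE)));
    const toFloat = (x: bigint): number => Number(x) / Number(ONE);

    // Multiply: the raw product carries 32 fractional bits, so shift back down.
    const mulFixed = (a: bigint, b: bigint): bigint => (a * b) >> FRAC_BITS;

    // Divide: pre-shift the numerator to keep 16 fractional bits of precision.
    const divFixed = (a: bigint, b: bigint): bigint => (a << FRAC_BITS) / b;

    // 1.5 * 2.25 = 3.375, computed entirely with integers.
    console.log(toFloat(mulFixed(toFixed(1.5), toFixed(2.25)))); // 3.375

Same inputs give the same bits on every machine, which floating point doesn't guarantee across compilers and architectures.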
It's cool that ChatGPT can stitch these toys together for people who aren't programmers, but 99% of software engineers aren't working on toys in the first place, so we're hardly threatened by this. I guess people who aren't software engineers don't realise that merely making a trivially basic website is not what software engineering is.
> I guess people who aren't software engineers don't realise that merely making a trivially basic website is not what software engineering is.
"Software engineering" doesn't matter to anyone except to software engineers. What matters is executing that idea that's been gathering dust for ages, or scratching that pain point that keeps popping up in a daily basis.
Software engineering matters very much to anyone who has ideas or pain points that are beyond the capabilities of a next-token prediction engine to solve.
My response is perhaps a bit raw, but so is the quote above.
Stop with the gate keeping. I've studied CS to understand coding, not to take some sort of pride in building "real software". Knowledge is a tool, nothing more, nothing less.
There are enough developers whose whole job it is to edit one button per week and not much more. And yes, there are also enough developers that actually apply their CS skills.
> but 99% of software engineers aren't working on toys in the first place
Go outside of your bubble. It's way more nuanced than that.
> I guess people who aren't software engineers don't realise that merely making a trivially basic website is not what software engineering is.
Moving goal posts. Always has been.
It's not that I fully disagree with you either. And I'm excited about your accomplishments. But just the way it reads... man...
I guess it hits me because I used to be disheartened by comments like this. It just feels so snarky as if I am never good enough.
The vibe is just "BUH BUH BUH and that's it." That's how it comes across.
And I've matured enough to realize I shouldn't feel disheartened. I've followed enough classes at VUSEC with all their rowhammer variations and x86-64 assignments to have felt a taste of what deep tech can be. And the thing is, it's just another skill. It doesn't matter if someone works on a web app or a deep game programming problem.
What matters (to me at least) is that you feel the flow of it and you're going somewhere, touching an audience. Maybe his particular calculator app has a better UX for some people. If that's the case, then his app is a win. If your game touches people, then that's a win. If you feel alive because you're doing complex stuff, then that's a win (in the style of "A Mathematician's Apology"). If you're doing complex stuff and you feel it's rough and you're reaching no one with it, it's neutral at best in my book (positive: you're building a skill; negative: no one is touched, not even you).
Who cares what the underlying technology is. What's important is usability.
Feel free to point out where I moved goal posts. To say that I moved goal posts would imply that at one point I stated that creating a trivial website was software engineering. If you're comparing my statement to what some other person said, who made arguments I did not make, then we cannot have any kind of constructive dialogue. At that point you are not talking to me, but talking to an imaginary projection of me meant to make yourself feel better about your argument.
> Stop with the gate keeping.
I'm not gatekeeping anything. You can disagree with my descriptive terms if you want, but the core point I'm trying to get across is: what people are doing with Claude can not replace what I do. I would know, I've tried extensively. Development is a lot of hard work and I would love it if my job were easier! I use LLMs almost every day, mostly for trivial tasks like reformatting text or writing advanced regex because I can't be bothered to remember the syntax and it's faster than looking it up. I also routinely pose SOTA models problems I'm working on to have them try to solve them, and I am routinely disappointed by how bad the output is.
So, in a thread where people were asserting that critics are merely critics because they're afraid of being replaced I pointed out that this is not factually correct, that no, we're not actually afraid of being replaced, because those of us who do "real" engineering (feel free to suggest a different term to substitute for "real" if the terminology is what bothers you) know that we cannot be replaced. People without experience start thinking they can replace us, that the exhilarating taste of coding they got from an LLM is the full extent to the depth of the software engineering world, but in fact it is not even close.
I do think that LLMs fill a useful gap, for projects where the time investment would be too large to learn to code and too unimportant to justify paying anyone to program, but which are simple enough that a non-engineer can have an LLM build something neat for themselves. There is nothing wrong with toys. Toys are a great thing to have in the world, and it's nice that more people can make them[1]. But there is a difference between a toy and what I do, and LLMs cannot do the thing I do. If you're taking "toy" in a derogatory manner, feel free to come up with another term.
[1] To some extent. While accessibility is generally a great thing, I have some misgivings. Software is dangerous. The web is arguably already too accessible, with frameworks enabling people who have no idea what they're doing to make professional-looking websites. These badly-made websites then go on to have massive security breaches that affect millions of users. I wish there was a way to make basic website development accessible, whether through frameworks or LLMs, in a way that did not give people using them the misplaced self-confidence to take on things way above their skill level at the cost of other people's security.
Idk, your superiority complex about the whole issue does make it sound like you’re feeling threatened. You seem determined to prove that AI can’t really make any decent output.
What’s even the point of writing out that first paragraph otherwise?
> What’s even the point of writing out that first paragraph otherwise?
I was correcting your misguided statement:
> Their critics didn’t make that!
by pointing out that we, among other things, build the libraries that you/Claude are copy-and-pasting from. When you make an assertion that is factually incorrect, and someone corrects you, that does not mean they are threatened.
You're right that this is simple compared to what real engineers build. I have a lot of respect for people like you who write things like custom math libraries for cross-CPU determinism — that's way beyond my level.
I'll keep learning and try to make this less of a toy over time. And hopefully I can bring what I've learned from years in investing into my next product to actually help people. Thanks for the perspective.
What are you implying? He would have had to hire a good developer for at least a full month's salary to build something like this.
And if you are thinking enterprise, it would take 2-3 developers, 2 analysts, 2 testers, 1 lead and 1 manager 2-3 months to push something like this. (Otherwise why would leading banks spend billions and billions on IT development every year? What tangible difference do you see in their websites/services?)
5000 calculators may look excessive, but in this case it showcases what AI will be capable of in the future, both in terms of quality and quantity.
> (Otherwise why would leading banks spend billions and billions on IT development every year? What tangible difference do you see in their websites/services?)
Well, I don't think all those people are spending their time making simple calculators.
Twitter/X incentivizes you to chase engagement because with a blue checkmark you get paid for it, so people shill aggressively and post idiotic comments on purpose trying to ragebait you. It's like LinkedIn for entrepreneurs. Reddit or its power-hungry moderators (shadow)ban people often. The number of popular websites where people can shill their trash is dwindling, so I assume it gets worse here as a result too.
Same. Fell out of love with programming after the first few years because the thought of spending my life staring at a screen and dealing with insignificant minutia suddenly seemed horrible. Spent a lot of years in management and LLMs gave me a way to build things I wanted again. Currently building a platformer.
This is tongue-in-cheek, but you spent years in management because "the thought of spending your life staring at a screen and dealing with insignificant minutia seemed horrible?" I need to read your management book!
It’s a lot of 1:1s and talking to people directly and strategy about setting up performant teams. I enjoy it way more and don’t spend a lot of time looking at screens.
> Stack: Next.js, React, TailwindCSS, shadcn/ui, four languages (EN/DE/FR/JA). The AI picked most of this when I said "modern and clean."
I guess this is what separates some people. But I always explicitly tell it to use only HTML/JS/CSS without any libraries, and I vet the result myself. Generating the code this way means I don't have to deal with it nearly as much afterwards.
Cool to hear nonetheless. Can we now also stop stigmatizing AI generated music and art? Looking at you Steam disclosures.
Just like SEO experts, marketing experts, trade bots and crypto experts, the vibe coders will be weeded out.
Did you take over a farm?
What a stupid sentiment on top of trying to generate views for the most low hanging slop ever.
That's why it was valuable.
All things worth doing are hard.
Thanks.
Feel like forums have turned into a grand Turing Test.
I'm not sure how you can claim this on the footer of every page when you're vibe coding these calculators.
That's creating a new inefficient, socially destructive, environmentally damaging hammer because solving the real problem doesn't sell well.
I'll be happy when we solve THAT problem.
Have you tried this? https://www.investor.gov/financial-tools-calculators/calcula...
https://news.ycombinator.com/newsguidelines.html
Anyone who disagrees with the above is just hurt that their manual hyping has been replaced with machines.
OP made a site with a bunch of calculators. Their critics didn’t make that!
This is a revolution, welcome back to coding :)