But as a platform for developers, ChatGPT is a joke.
I'm staring at a massive file it just generated, and I can't collapse it. This is a basic feature, maybe 10 lines of JavaScript, that would make the tool infinitely more usable. Instead, my middle finger is getting a workout from all the scrolling, and I'm starting to consider using it for something more sinister.
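For the record, I'm not asking for much. Something like this rough sketch would do (hypothetical: it assumes generated code sits in plain <pre> blocks, which the real ChatGPT DOM almost certainly doesn't, so the selector is a placeholder):

    // Hypothetical sketch: put a collapse/expand toggle above every <pre> block.
    // The 'pre' selector is a stand-in; the real page would need its actual class names.
    document.querySelectorAll('pre').forEach((block) => {
      const toggle = document.createElement('button');
      toggle.textContent = 'Collapse';
      toggle.addEventListener('click', () => {
        const hidden = block.style.display === 'none';
        block.style.display = hidden ? '' : 'none';
        toggle.textContent = hidden ? 'Collapse' : 'Expand';
      });
      block.before(toggle); // insert the button right above the code block
    });

That's it. A dozen lines, give or take, and the page stops being an endless scroll.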
This isn't just a minor annoyance; it's a symptom of a larger problem: OpenAI is coasting on its early success and completely ignoring the developer experience. Meanwhile, competitors are eating their lunch.
I look at Claude's Artifacts system and it's miles ahead. It's clear other companies are actually thinking about the developer's workflow. It feels like OpenAI is so high on their own supply they've forgotten that people actually have to use the thing.
So, while they've built some of the most powerful models, they are failing to build a platform that respects a developer's time and workflow. They are not a serious company when it comes to serving the developer community.
TL;DR My fingers hurt ... at the very least, can you stop forcing us to scroll hundreds of times per response?
Google is myriad things.
And even if it were the right tool, you're using it incorrectly. If you have a trivial function to write that should be 10 lines of code, why are you having AI write it in the first place? Just code it. Or, if you really want AI to do it for you and need a short, simple function, say so in your prompt; it will comply.
I have used their API for plenty of things already... just not coding.
Realistically, most developers aren't going to roll their own API IDEs.
Or are you using the web ui?
Sora 2 seemed like a pure hype move. There's no way it's bringing in even a tiny fraction of the revenue needed to cover its cost, but the videos do get a lot of viral attention. Investors probably aren't actually using Sora 2, but they do see the rare watchable video when it pops off on social media.
Gen AI seems like the perfect tech to dupe non-tech or low-tech investors into believing they're building something intelligent. I can't think of any technology I've ever used that leaves such a disproportionately good first impression while dramatically overselling its capabilities. Put it in the hands of a non-tech investor who sort of knows how to code, and he's going to ask it to do some truly trivial shit. And he's going to be amazed when it regurgitates some absurdly common pattern he could have Googled and found in one click. The real trade secret is that people just don't understand that any time LLMs seem intelligent, it's because they're regurgitating the work of intelligent humans (who almost certainly were never compensated for, or even aware of, the use of their work in the training set).
LLMs are pretty neat; I use them daily for work. But the AGI grift and the AI doom scenarios we keep getting threatened with are really overshadowing what is, situationally, a neat and useful tool that makes me a little more productive some of the time.