Related: I know that many people use AI image generators to make pixel art, and recently I stumbled upon a great tool that turns AI-generated images into proper pixel art: see https://github.com/jenissimo/unfake.js and the live demo at https://jenissimo.itch.io/unfaker
(disclaimer: I don't know the author, just thought I'd share as I find it amazing)
When I clicked, I was already anticipating the comments asking "is this vibe coded?". So I kind of asked myself that question. As someone who codes manually as well as experiments with AI-assisted coding, I wonder what attitude we should develop towards AI-assisted coding in the long run. Right now on HN it almost seems like "AI shaming" is at work. If you post a project that's the result of using AI, you can expect a lot of critique around here. While I understand that to a certain extent, I guess we also need to overcome that sentiment. After all, we don't blame people for using IDEs, code completion, or other tools that have become the norm.
> After all we don't blame people using IDEs, code completion or other tools that have become the norm.
Because those don’t have the same issues. It’s not like IDEs, LSPs, and other tools were the target of warranted criticism and then we stopped. Rather, they never received this kind of backlash in the first place.
No IDE has ever caused millions of people absolutely unrelated to it to have to ration water.
https://archive.ph/20250731222011/https://m.economictimes.co...
To use an exaggerated analogy, it’s like saying “people are complaining about arsenic being added to food but we need to overcome that sentiment, after all we don’t blame people adding salt and pepper which have become the norm”.
If that's the reason why people dunk on ai-assisted programming, fine.
That's not the impression I had, though; the criticism I usually see is around laziness, low-effort submissions, etc., which are not inherent issues of using LLMs.
But they are exacerbated by them, so the criticism still stands. No one visits HN for low-quality, same-looking submissions. It's like frequenting r/toolgifs and suddenly almost every post is about one specific hammer. That'd be understandably annoying, and while not the inherent fault of the hammer, it would be an issue only possible because it exists.
TL;DR: That article is pretty low quality, and "caused millions of people absolutely unrelated to it to have to ration water" doesn't seem like a reasonable conclusion; it isn't mentioned at all in the source article. I took some notes on this article and traced the research back to the original piece by The Austin Chronicle, which is significantly better: https://www.austinchronicle.com/news/2025-07-25/texas-is-sti... Would recommend.
Main takeaways:
- Why are we building data centres so close to the equator, where it's hot?
- It's depressing to see the high-quality reporting from The Austin Chronicle watered down into more and more clickbaity soundbites as it gets recycled through other "news" orgs. But at the same time, I wouldn't have heard about it otherwise.
- The water evaporation angle was interesting to me; I would love to read more on what percentage evaporates, whether Stargate's plan to build non-evaporative cooling will actually hold up, and how that will impact the water grid.
- Would love some more info/context on that 463 million number, but I'm stopping my research here for now. Combining this with when/how often Texas has to ration its water would provide a stronger argument for/against the claim of water rationing.
- The fact that we don't have good numbers on how much water data centres are using is crazy; we need that level of granularity/regulation.
- Markers of poor reporting:
- Numbers without context/clarity. Would it kill these sites to include a bar chart?
- Citations of sites that market themselves as "engaging entertainment"
- Ambiguous / contradictory data
- Ambiguous references
Notes:
Interesting article! A few weird things:
1. The most cited reference is a site called "Techie + Gamers", which describes itself as follows: "TechieGamers.com is a leading destination for engaging entertainment coverage, news, net worths and TV shows with a strong focus on Shark Tank." That makes me suspicious of the journalistic quality of both this article and that one.
2. The headline says "Texas AI centers guzzle 463 million gallons". Further down it says "According to a July 2025 investigation by The Austin Chronicle, data centers across Central Texas, including Microsoft and US Army Corps facilities in San Antonio, used a combined 463 million gallons of water in 2023 and 2024 alone, as reported by Techie + Gamers." Over 2023 and 2024? Odd that it's giving a sum over two years. And I'm not sure what it means that it includes the US Army Corps. Also, without any context I don't know what this number means.
- I checked the TechieGamers article and this contradicts what is written there, which says the 463 million number is for San Antonio alone.
3. Robert Mace, executive director of The Meadows Center for Water, notes that "once water evaporates, it's gone." This is interesting; I'm not sure how much water actually evaporates vs. is returned to the grid.
4. "The scale of water use is massive, as the Texas Water Development Board projections estimate that data centers in the state will consume 49 billion gallons of water in 2025, soaring to nearly 400 billion gallons by 2030, as per Techie + Gamers report. That’s about 7% of Texas’s total projected water use, according to the report."
- Mixed citations here; not sure whether these numbers are from the Texas Water Development Board or Techie + Gamers. Also, they project an increase from ~232 million gallons/year in 2024 to 49 billion in 2025? That's a ~200x increase. And they expect a further ~8x increase from 2025 to 2030, to 400 billion? Or is it because the original number was only for Central Texas? (See the quick sanity check after these notes.)
- 7% of what? The 2025 number or the 2030 number?
- Again, subtle contradictions with TechieGamers, which says "a white paper submitted to the Texas Water Development Board projected that data centers in the state will consume 49 billion gallons of water in 2025. That number is expected to rise to 399 billion gallons by 2030, nearly 7% of the state’s total projected water use." So it's not the Texas Water Development Board but a white paper submitted to the board? Not sure who produced these numbers now.
5. "Much of the water these centers use evaporates during cooling and can’t be recycled, a critical issue in an area already grappling with scarce water resources, as reported by Techie + Gamers."
- Again, I really want more info/numbers on this.
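Before moving on, a quick sanity check on the ratios implied by the quoted numbers. This is my own back-of-the-envelope sketch: the gallon figures come from the quotes above, but the implied statewide total is an inference of mine, not from any source.

    # Back-of-the-envelope check of the figures quoted above.
    # All gallon figures come from the articles; nothing here is measured.
    san_antonio_2023_2024 = 463e6         # gallons over two years (TechieGamers)
    per_year = san_antonio_2023_2024 / 2  # ~232 million gallons/year

    statewide_2025 = 49e9                 # gallons (white paper projection)
    statewide_2030 = 400e9                # gallons (white paper projection)

    print(statewide_2025 / per_year)      # ~211x -- only plausible if 463M covers
                                          # one area and 49B covers the whole state
    print(statewide_2030 / statewide_2025)  # ~8.2x from 2025 to 2030

    # If 400B gallons is ~7% of total projected use, the implied statewide
    # total is ~5.7 trillion gallons, which is in the right ballpark for
    # Texas, suggesting the 7% is relative to the 2030 projection.
    print(statewide_2030 / 0.07)          # ~5.7e12 gallons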
The root article seems to be the one from The Austin Chronicle:
1. It starts with "After Donald Trump and Elon Musk’s public breakup, Sam Altman replaced Musk as the president’s new favorite tech guy. Altman, the CEO of OpenAI, has become something like Musk’s archnemesis on the rapidly developing stage of artificial intelligence in Texas." This doesn't match my reading of the news, and it's so colourful that it makes me question the journalistic quality of this article.
2. The reporting across the three sources is mixed on who they're blaming. Economic Times doesn't even mention OpenAI and calls it "Microsoft's Stargate campus". TechieGamers uses this phrase too, but later says "Microsoft has partnered with OpenAI". The Austin Chronicle doesn't mention Microsoft at all and focuses on OpenAI. And the Wikipedia page for Stargate calls it a "joint venture created by OpenAI, SoftBank, Oracle, and investment firm MGX"?
3. I take it back: reading further, this article is _significantly_ better than the others, with many more reputable sources.
4. Finally we get some real sources!! The 49 billion (2025) and 400 billion (2030) numbers are from HARC, the Houston Advanced Research Center. And the 7% is actually 6.6%, relative to the 2030 projection.
5. Finally, real info on evaporation!! Still no numbers, but we get a description of the process (a toy model of this loop follows these notes):
> Most data centers use an evaporative cooling system, in which the servers’ heat is absorbed by water. The heat is then removed from the water through evaporation, causing the water to be lost as vapor in the air. The cooler water then goes back through the machines, and this loop is regularly topped off with fresh water. After all, evaporation renders the water saltier and unusable after four or five cycles. “Then they dump the water, and it goes down the sewer,” Mace said.
> ...
> The Abilene Stargate campus will reportedly use a closed-loop, non-evaporative liquid cooling system that requires an initial refill of around 1 million gallons of water, with “minor” maintenance refills. Cook is skeptical that such closed-loop systems will use as little water as they suggest. It’s not possible, Cook says, to use the same water over and over again, recycled infinitely, to cool servers.
6. This article doesn't mention the 463 million figure anywhere, which makes me think that was original research from TechieGamers. They reference SAWS, the San Antonio Water System, but again the numbers come without context, so one would need to do some original research to get any meaningful insight from them.
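To make the quoted loop concrete, here is a toy model of the evaporative cooling loop described above. Only the "unusable after four or five cycles" detail comes from the article; the loop volume and evaporation rate are invented for illustration.

    # Toy model of the evaporative cooling loop described in the quote.
    # Rates are made up; only the ~4-5 cycle limit comes from the article.
    LOOP_VOLUME = 100_000    # gallons circulating in the loop (made up)
    EVAP_PER_DAY = 20_000    # gallons evaporated per day (made up)
    MAKEUP_CONC = 1.0        # salt concentration of fresh water (arbitrary units)
    MAX_CYCLES = 5           # article: water unusable after ~4-5 cycles

    salt = LOOP_VOLUME * MAKEUP_CONC   # salt currently dissolved in the loop
    consumed = LOOP_VOLUME             # the initial fill counts as consumption
    for day in range(30):
        # Evaporation removes pure water; the salt stays behind. Topping
        # the loop off with fresh water restores volume but adds more salt,
        # so the loop gets saltier every day.
        salt += EVAP_PER_DAY * MAKEUP_CONC
        consumed += EVAP_PER_DAY
        if salt / LOOP_VOLUME >= MAX_CYCLES * MAKEUP_CONC:
            # "Then they dump the water, and it goes down the sewer."
            salt = LOOP_VOLUME * MAKEUP_CONC
            consumed += LOOP_VOLUME
    print(f"fresh water drawn in 30 days: {consumed:,.0f} gallons")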
If I can tell something is "vibe coded", that means it's bad. It doesn't matter what tools people use as long as the output is good. Vibe-coding smells include (a contrived example follows the list):
1. Tons of pointless comments outlining trivial low-level behaviour,
2. No understanding of abstraction levels,
3. No real architecture at all,
4. Not DRY: no helper functions, or inconsistent use of them across the project,
5. Way too many lines of code.
None of these are shaming for use of any particular tool, they are just shaming the output.
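For illustration, here is a contrived snippet (mine, not from any real project) that shows smells 1 and 4 next to a cleaner version:

    # Smelly: comments narrate trivial mechanics, and the same
    # validation logic is copy-pasted instead of extracted.
    def create_user(name):
        # strip whitespace from the name
        name = name.strip()
        # check if the name is empty
        if name == "":
            # raise an error because the name is empty
            raise ValueError("empty name")
        ...

    def rename_user(user, name):
        # strip whitespace from the name
        name = name.strip()
        # check if the name is empty
        if name == "":
            raise ValueError("empty name")
        ...

    # Cleaner: one helper, comments only where they add information.
    def normalized_name(name):
        name = name.strip()
        if not name:
            raise ValueError("empty name")
        return name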
OK, let's better not talk about "vibe coding", because we don't really have a definition of what it means. "Historically" it meant "just letting the AI code without looking at its output", while I often see people who use AI more diligently use the term somewhat tongue in cheek. My mistake for using the expression in the latter sense.
It's really odd that we now look for human-written code rather than AI-generated code, and I think this is going to increase across every form of data out there.
Thanks! Although I had to use it for some things (like the logo, for example, and I’m not a "graphic guy"), in the end, since it’s a simple project by design, I didn’t mind, and the result isn’t bad at all.
This may not be entirely the right metaphor, but I kind of see it as the difference between fast food, a top-rated restaurant, and home-made cooking, with fast food being AI.
Generic, does the job, not the highest quality: bleak, fast, repetitious output.
> There are several Pixel Art Editors that do the same things and even much more, but many require an account registration or the insertion of an e-mail or have a certain business model.
The latest Aseprite is still available as source you can compile for free (as in beer), even if it is a bit heavy on the dependencies these days, including, IIRC, requiring that you install a special fork of Skia. I paid for it to get the pre-compiled binaries for Windows, but on Linux and OSX I always compiled it myself anyway. On FreeBSD, which is my desktop OS of choice now, I use the ancient open-source version of Aseprite, since that is what is most convenient to install (from the port). Maybe I should try Libresprite instead: https://libresprite.github.io/
For my programmer art I also use the old (Autodesk) Animator (in DOSBox) a lot. It is small and runs anywhere. Perfect for doodling on my phone, with some configuration to add various on-screen buttons in DOSBox. It is small enough (less than 1 MB) that the entire application, plus all configuration and saved working files, can go into every source code repository where I want to edit some pixel art. https://github.com/AnimatorPro
Also have VGA Paint 386 installed in DOSBox everywhere. Have not used it much, but it seems good (probably more interesting for those that want something closer to a Deluxe Paint clone). https://www.bttr-software.de/products/vp386/
Then there is https://orama-interactive.itch.io/pixelorama which is open source and seems to improve at a good pace. I just never took the time to look very closely.
Going to have a look at Tilf as well, to see if it is not too much work to get it running on FreeBSD. Not being an expert at drawing anything, it helps to have many tools and to switch between them, as each tends to do something better (or more easily) than the others.
Please add GitHub topics (tags) to the project. It may boost your project's discoverability. I often use topics together with GitHub search to find interesting projects.
I like that it really is simply built and packaged; I'm sure it was fun to hack away at. There's something about gluing together a million packages that sucks the fun out of tinkering (for me, at least).
That's also why the project was built from scratch. The only real dependency is PySide6. The icons don't come from any package, and PyInstaller is used solely for bundling. As outlined in the README.md, running Tilf requires nothing more than an installed version of Python 3.
PySide6 is a solid choice for Python desktop apps - Qt's rendering capabilities make it ideal for pixel-perfect graphics manipulation while avoiding the performance issues that can plague Tkinter or the dependency complexities of wxPython.
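As a minimal sketch of what that looks like in practice (my own illustration, not Tilf's actual code): a QImage holds the artwork at its native resolution, and the widget scales it up with nearest-neighbour sampling so the pixels stay crisp.

    import sys
    from PySide6.QtGui import QColor, QImage, QPainter
    from PySide6.QtWidgets import QApplication, QWidget

    class PixelCanvas(QWidget):
        """A 16x16 image drawn at 20x zoom; click to paint a pixel."""
        def __init__(self, size=16, zoom=20):
            super().__init__()
            self.img = QImage(size, size, QImage.Format.Format_ARGB32)
            self.img.fill(QColor("white"))
            self.zoom = zoom
            self.setFixedSize(size * zoom, size * zoom)

        def mousePressEvent(self, event):
            # Map the click from widget coordinates to image pixels.
            x = int(event.position().x()) // self.zoom
            y = int(event.position().y()) // self.zoom
            self.img.setPixelColor(x, y, QColor("black"))
            self.update()

        def paintEvent(self, event):
            painter = QPainter(self)
            # SmoothPixmapTransform is off by default, so this upscale
            # uses nearest-neighbour sampling and pixels stay sharp.
            painter.drawImage(self.rect(), self.img)

    if __name__ == "__main__":
        app = QApplication(sys.argv)
        canvas = PixelCanvas()
        canvas.show()
        sys.exit(app.exec())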
I already have some experience with Python/PySide6, and I was mainly interested in having a working prototype as soon as possible (I'm experimenting with SDL3, and animating squares isn't exactly thrilling!). Plus, Qt widgets integrate very well with Python; it is so easy to create a section, especially when the documentation is well written, and that helps a lot. Also, with PyInstaller, the build process for each platform is fairly straightforward (although for customized icons there are a few extra steps).
There are some downsides, of course (like the bundle size, for example), but that's not a problem; the core idea is: double-click on Tilf and start drawing right away.
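For reference, a sketch of what such a PyInstaller build step can look like when driven from Python (the file names here are placeholders, not Tilf's actual build script):

    # Hypothetical build script; adjust names/paths for the real project.
    import PyInstaller.__main__

    PyInstaller.__main__.run([
        "tilf.py",             # entry point (placeholder name)
        "--onefile",           # bundle everything into a single executable
        "--windowed",          # no console window on Windows/macOS
        "--name", "Tilf",
        "--icon", "tilf.ico",  # the "few extra steps": a platform icon file
    ])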
I recently discovered and have been fairly happy with PixelLab - an AI pixel art generator. I feel like they have a ways to go in features and UX, but it shows promise.
So, congrats on your release.
Why? What’s the problem with it?
I have one very silly question... Why is the elf logo not pixel art? :)
What made you decide to go with PySide6?