This is an interesting project - kudos for executing it. I have to admit that when I was starting out in this field, I too fantasised: "Would this software be faster, smaller and better in assembly?". Of course, assembly programming made some sense in embedded electronics, which can be very resource-constrained and even specialised for one particular application. From that angle, perhaps you should consider making this a specialised program that runs on something like a Raspberry Pi - running such a web server directly on it, without an OS (or with a very minimal OS), would make for a really cool and interesting project.
I was thinking more of the memory usage. Another project posted on HN described running a web server entirely from the RAM of an RPi (https://news.ycombinator.com/item?id=48064312). Without a full-fledged OS, and with a tiny (and fast) web server coded in assembly, more RAM would be available to the server and it could serve more users.
I did actually make an attempt at that once for BGGP5 [0]. (That is, making a minimal, horribly insecure 'client' implementing just enough behavior to get a response from a server.) But I got demoralized by how much space the binary blobs for the crypto algorithms took up compared to the actual machine code.

[0] https://binary.golf/5/

https://github.com/openssl/openssl/blob/master/crypto/aes/as...
I'd really like to see a TCP/IP stack written in native Forth (if anyone needs a really good therapist, that sounds like a _great_ project to try ;)
I mean, it doesn't look _that_ daunting, but the fact that no one seems ever to have released an open-source version (there are rumours of proprietary stacks, though) speaks for itself.

One of these days ...
Yes, I'm well aware of it; that's actually very nice for building the higher levels of the web server.
I'd really like to have a complete Forth machine dealing with everything, say on an ESP32. I guess there's FreeRTOS, so I could use its network layer, but bare metal would be so much cooler. I admit I don't even understand how it would work - would I have to bit-bang the Ethernet lines?
I've never interfaced with the peripherals on an ESP32 directly. I guess I really need to read the FreeRTOS code. MicroPython just uses that, last I checked.
What on earth are you talking about? Assembly makes sense in desktop computing as well. Have you ever, for example, watched a video? What do you think powers the codecs, JSX?
The statistics GitLab reports for the x264 repo (https://code.videolan.org/videolan/x264) show the project is 13.5% assembly; common utilities used in the codec's inner loops have optimized assembly implementations for several CPU architectures.
A lot of the encoding side of FFmpeg now uses hand-coded assembly optimizations to take advantage of AVX-512 instructions on newer x64 processors - a "100x speed increase" that has been available in stable form since February 2025:

https://www.techspot.com/news/108715-ffmpeg-gets-100x-faster...
Yes, I do know that some assembly is used in systems programming and other niches where it makes sense. To be clear, I was talking about the phase some of us go through as amateurs, when we think everything would be "faster, smaller, better" if written in assembly - Python is slow; what about C or Pascal? Wouldn't asm be faster still? But, as we all realise sooner or later, there's a reason we prefer to code in high-level or very high-level languages, and why premature optimisation can be a real handicap.
Ah yes, the niche that is video, audio, game and systems programming.
When those three to four amateurs still doing those niche things grow up, they’ll move on to real programming, like putting together a solid skills.md file.

What?
Your snide comments make me think you are trying to create a parody account to match your nick here (@hatefulheart). Just a heads-up: unlike on Reddit and other social media, such things are not much appreciated here on HN.
Sounds like I was absolutely right after all, and so was @thisislife2. Just yet another pitiful case of falling for the "hard truths <-> hurt feelings" reversal.
Not even sure why. Something being niche is not a knock, one way or the other. Such a weird thing to throw a sad fit about.
I interact with Assembly every day, and many of those around me do too. So I wouldn’t label it as niche. If there are thousands of people doing something in computing, it ain’t niche. It’s niche when the numbers are much smaller than that.
Judging by the downvotes that other guy got, he wasn’t “absolutely correct!” as you seem to claim.
> If there are thousands of people doing something in computing, it ain’t niche.
A quick web search suggests there are 4.4 million software developers in the US alone. For the record, I think far more people touch assembly weekly than your bellyfeel figure suggests, but it's nevertheless worlds apart from that. Such a weird thing to try to deny. Not even sure why you'd feel compelled to; it's not the coat that wears you.
> Judging by the downvotes that other guy got, he wasn’t “absolutely correct!” as you seem to claim.

I was referring to this comment of theirs, obviously: https://news.ycombinator.com/item?id=48106378

Stop constantly editing your message.

> Stop constantly editing your message

Wait 2 hours before responding, then I can no longer make edits.

https://news.ycombinator.com/item?id=48080587
Something funny has happened. I didn't submit this link today, as shown on the front page - I actually submitted it three days ago, before the original author's submission above.
HN has a "second chance queue" that sometimes revives posts and gives them another shot. Not entirely sure how it works, but it sounds like what happened here.
I’m curious what the performance of this implementation is versus a server written in C, C++, or Rust. How much performance can a human still squeeze out at the assembly level versus today’s state of the art compilers?
Today's state-of-the-art compilers can't even do vectorized integer division by a compile-time-known constant very well. They definitely can't map high-level constructs onto low-level patterns, and they don't carry anywhere near enough semantic information through the different optimization passes to take even very safe, simple, sane shortcuts with zero possibility of UB or other issues. There's a lot of performance being left on the table.
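(Illustrative aside, not from the thread: the scalar form of that trick, with the multiply-and-fixup constants GCC emits for unsigned 32-bit division by 7. Vectorizing the same thing means synthesizing the high-multiply out of widening multiplies and shuffles, which is exactly where compilers tend to produce mediocre code.)

    #include <stdint.h>

    /* Scalar strength reduction for x / 7 on uint32_t: a DIV becomes a
     * multiply by a "magic" reciprocal plus a fixup. The magic constant
     * is floor(2^32 / 7) + 1; the fixup pattern matches GCC's output. */
    static uint32_t div7(uint32_t x) {
        uint32_t t = (uint32_t)(((uint64_t)x * 0x24924925u) >> 32); /* high half */
        return (t + ((x - t) >> 1)) >> 2;                           /* fixup */
    }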
Mind you, somebody who's sympathetic to the machine's needs can scrape most of that performance back by writing C/C++/Zig in a way that maps straightforwardly to the optimal assembly. The optimizer won't make your code drastically worse too often, so if you start with something nice then actually dropping down into assembly has limited use cases and usually limited benefits...if you know what you're doing and throw out every style guide as you do so.
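(A sketch of what "writing C that maps to the optimal assembly" can look like; my example, not the commenter's. Rule out aliasing and keep the loop body trivial, and the autovectorizer usually does the right thing.)

    #include <stddef.h>

    /* restrict promises no aliasing between y and x, so the compiler can
     * vectorize without emitting runtime overlap checks. */
    void saxpy(float *restrict y, const float *restrict x, float a, size_t n) {
        for (size_t i = 0; i < n; i++)
            y[i] += a * x[i];
    }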
As to this server in particular? At first blush it looks more like a learning exercise. You'll go a lot further with clever incremental routines and appropriately leveraging your OS's async API than you will by shaving a few instructions here and there.
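(For concreteness, a rough sketch of the kind of OS async API usage meant here: an epoll-driven accept loop on Linux, with error handling and the actual request parsing omitted.)

    #define _GNU_SOURCE
    #include <sys/epoll.h>
    #include <sys/socket.h>

    /* One thread multiplexing a listening socket and all of its clients;
     * contrast with fork-per-request. */
    void serve(int listen_fd) {
        int ep = epoll_create1(0);
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = listen_fd };
        epoll_ctl(ep, EPOLL_CTL_ADD, listen_fd, &ev);
        for (;;) {
            struct epoll_event evs[64];
            int n = epoll_wait(ep, evs, 64, -1);
            for (int i = 0; i < n; i++) {
                if (evs[i].data.fd == listen_fd) {          /* new connection */
                    int c = accept4(listen_fd, 0, 0, SOCK_NONBLOCK);
                    struct epoll_event cev = { .events = EPOLLIN, .data.fd = c };
                    epoll_ctl(ep, EPOLL_CTL_ADD, c, &cev);
                } else {
                    /* read the request, write the response, close on EOF */
                }
            }
        }
    }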
As to servers in general? Your kernel is the real bottleneck. If you need all of its features then you don't have a lot of options, but if you're like most applications then you're leaving a ton of performance on the floor by not going for kernel bypass (not that using your kernel for networking is a _bad_ decision, but you are nevertheless incurring a 10x-50x performance hit as the cost). Assembly shenanigans literally don't matter in comparison.
> How much performance can a human still squeeze out at the assembly level versus today’s state of the art compilers?
Most of the squeezing is to be had in the parts where the compiler can’t help. (Which I guess is logically equivalent to saying that you can’t often do meaningfully better than the compiler on the things that the compiler is concerned with, but you have to admit it reads very differently.) Two important widely-applicable examples are data layout (locality, in particular getting rid of large and costly-to-traverse pointers) and vectorization; what they have in common is that you may well have to redesign the entire flow of data in your program around the issue before you get meaningful improvements. (And there is often an order-of-magnitude improvement to be had on a CPU-bound task, if you are willing to spend the time and effort to optimize.)
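(A generic illustration of the data-layout point, not taken from the comment: the same data laid out twice, so that a hot loop touches only the field it needs.)

    #include <stddef.h>

    /* Array-of-structs: a loop over just `x` strides past y/z/w,
     * wasting cache lines and resisting vectorization. */
    struct particle_aos { float x, y, z, w; };

    /* Struct-of-arrays: each field contiguous; a pass over `x` is
     * unit-stride and vectorizes trivially. */
    struct particles_soa {
        float *x, *y, *z, *w;
        size_t n;
    };

    float sum_x(const struct particles_soa *p) {
        float s = 0.0f;
        for (size_t i = 0; i < p->n; i++)
            s += p->x[i];
        return s;
    }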
There are also specific situations where the approaches used by modern compilers work badly. The straightforward switch-based interpreter is a well-known example: modern Clang essentially turns into Clippy and goes “looks like you’re writing an interpreter, would you like me to duplicate your dispatch for you” so branch prediction works out as well as in manual assembly, but it still allocates registers a function at a time, so when the function in question is the entirety of the interpreter including the slowpaths, the regalloc sucks. Tail-call interpreters and __attribute__((cold, noinline, preserve_most)) amount to expressing the exact same control-flow graph in such a way that the compiler can digest it better, ironically by understanding less of it at any given time. This is one way that the dumb fundamental nature of the admittedly quite smart modern compiler shines through.
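(To make the tail-call shape concrete, a hypothetical fragment; names and opcodes are invented, and Clang's musttail is assumed. Each handler ends in a tail call through the dispatch table, so the compiler register-allocates each handler separately, and the cold slowpaths stay out of the hot code's way.)

    #include <stdint.h>

    typedef uint64_t (*op_fn)(const uint8_t *ip, uint64_t *sp);
    op_fn dispatch[256];   /* one handler per opcode, filled in at startup */

    /* Slowpath: cold + noinline + preserve_most keeps it from polluting
     * the register allocation of the hot handlers that may call it. */
    __attribute__((cold, noinline, preserve_most))
    void trap_div_by_zero(void);

    static uint64_t op_add(const uint8_t *ip, uint64_t *sp) {
        sp[-2] += sp[-1];
        sp -= 1;
        /* musttail guarantees a jump, i.e. the same CFG as threaded asm */
        __attribute__((musttail)) return dispatch[*ip](ip + 1, sp);
    }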
And in very tight loops there are still places where doing things by hand can help. For instance, when computing a histogram of byte values over a large block (for which I’m not aware of any public vectorized code that would go faster than the best scalar options) I’ve seen Clang lose as much as 20% to (contemporary) GCC on the best C implementation[1] or its straightforward manual translation to assembly, because Clang had decided it knew better which order the instructions should go in. As a less exotic case, I’ve seen GCC lose out by about 20% to (contemporary) Clang in vectorized loops because it had decided that having half the loop body be MOVs (or rather VMOVDQAs) would be a better idea than taking advantage of AVX’s ability to not overwrite either of the input arguments, and though MOVs are basically free on a superscalar they’re not that free. I’ve even seen both GCC and Clang ignore an explicit __builtin_expect() and compile a very predictable (but unavoidable) inner-loop branch into a CMOV, once again costing me about 20% in performance.
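(For reference, the scalar trick in question, sketched from memory rather than copied from [1]: spread the counts across several tables so runs of equal bytes don't serialize on store-to-load forwarding, then sum at the end.)

    #include <stddef.h>
    #include <stdint.h>

    void hist(const uint8_t *p, size_t n, uint32_t out[256]) {
        uint32_t t[4][256] = {{0}};
        size_t i = 0;
        for (; i + 4 <= n; i += 4) {    /* 4 independent dependency chains */
            t[0][p[i + 0]]++;
            t[1][p[i + 1]]++;
            t[2][p[i + 2]]++;
            t[3][p[i + 3]]++;
        }
        for (; i < n; i++) t[0][p[i]]++;
        for (int j = 0; j < 256; j++)
            out[j] = t[0][j] + t[1][j] + t[2][j] + t[3][j];
    }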
So if you do in fact care about the difference between 1.1 cycles/byte and 1.3 cycles/byte, yes, you can beat a compiler even on a micro level. You just probably don’t have the, depending on your point of view, fortune or misfortune of working on code like that.

[1] https://github.com/powturbo/Turbo-Histogram

Almost certainly crap.

As the author states, it's a simple fork-on-request server, which was state-of-the-art in about 1996. But that's not the point.
1. What is the performance of this implementation of an assembly server? The comment you replied to answered it: it's likely crap. Hand-written assembly is almost always worse than what the compiler produces.
2. How much performance can a human still squeeze out at the assembly level? That question is different and remains unanswered. But in my experience the answer is the same.
I love this. I'm wondering: what skills did this project hone? What are you better at now than you were before you undertook it? Or was it just for fun?
But I've struggled with the IO parts; it seems every system is so different that it's hard to live without LLVM IR as the middle layer abstracting away compiler and target differences.
Well. Interesting choice of side project!