> To solve the distribution and isolation problem, Linux engineers built a set of kernel primitives (namespaces, cgroups, seccomp) and then, in a very Linux fashion, built an entire ecosystem of abstractions on top to “simplify” things: [...] Somehow we ended up with an overengineered mess of leaky abstractions
Not sure I like the value judgement here. I think it's more of a consequence of Linux's success. I am convinced that if it were reversed (Linux was niche and *BSD the norm), then a ton of abstractions would come, and the average user would "use an overengineered mess" because they don't know better (or don't care, or don't have a need to care).
Not that I like it when people ship their binary in a 6G docker image. But I don't think it's fair to put that on "those Linux engineers".
I don't think it's necessarily true, compare the BSD utils to the GNU utils and the style difference is very visible.
On the other hand, I don't think the comparison between jails and Docker is fair. What made Docker popular is the reusability of the containers, certainly not the sandboxing, which in the early days was very leaky.
And for the whole world, too. I don't need to build my own local stripped-down version of Alpine Linux with Python; somebody's already done that for me.
Well, what style difference exactly? GNU utils tend to be more verbose. Other than that, what is the difference in style?
The jails vs containers framing is interesting but I think it misses why Docker actually won. It wasn't the isolation tech. It was the ecosystem: Dockerfiles as executable documentation, a public registry, and compose for local dev. You could pull an image and have something running in 30 seconds without understanding anything about cgroups or namespaces.
FreeBSD jails were technically solid years before Docker existed, but the onboarding story was rough. You needed to understand the FreeBSD base system first. Docker let you skip all of that.
That said, I've been seeing more people question the container stack complexity recently. Especially for smaller deployments where a jail or even a plain VM with good config management would be simpler and more debuggable. The pendulum might be swinging back a bit for certain use cases.
But it's not a competition. FreeBSD does its thing and Linux does another. That's why I use FreeBSD.
I'm using either Docker Compose or Docker Swarm without Kubernetes, and there's not that much of it, to be honest. My "ingress" is just an Apache2 container that's bound to 80/443 and my storage is either volumes or bind mounts, with no need for more complexity there.
> The jails vs containers framing is interesting but I think it misses why Docker actually won. It wasn't the isolation tech. It was the ecosystem: Dockerfiles as executable documentation, a public registry, and compose for local dev. You could pull an image and have something running in 30 seconds without understanding anything about cgroups or namespaces.
So where's Jailsfiles? Where's Jail Hub (maybe naming needs a bit of work)? Where's Jail Desktop or Jail Compose or Jail Swarm or Jailbernetes?
It feels like either the people behind the various BSDs don't care much for what allowed Docker to win, or they're unable to compete with it, which is a shame, because it'd probably be somewhere between a single and double digit percent userbase growth if they decided to do it and got it right. They already have some of the foundational tech, so why not the UX and the rest of it?
Docker's client/server design also allowed for things like Docker Desktop, which made the integration seamless with non-linux systems. Jails have nothing like that, so the only system that will ever run jails is FreeBSD. Also, I'm not up to speed enough to know, but do jails even have a concept of container images?
> Jails solve the isolation problem beautifully, but they don't have a native answer to shipping. That gap is real, and it's one of the main reasons the ecosystem around jails feels underdeveloped compared to Docker's world.
The link literally uses the term ecosystem. Several times actually.
https://youtu.be/HV-wUUzRCMo
Fixed that for you ;)
I frequently see FreeBSD jails as a highlighted feature, lauding their simplicity and ease of use. While I do admire them, there are benefits to the container approach used commonly on Linux (and maybe soon FreeBSD will better support OCI).
First it's important to clarify that "containers" are not an abstraction in the Linux kernel. Containers are really an illusion achieved by a combination of user/pid/network namespaces, bind mounts, and process isolation primitives, driven by userspace applications (podman/docker plus a container runtime).
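To make that "illusion" concrete, here is a rough C sketch of my own (not from the article or any real runtime) showing the first sliver of what podman/docker do under the hood; it assumes the host allows unprivileged user namespaces:

    /* toy sketch: the "container illusion" is just a few syscalls */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/utsname.h>
    #include <unistd.h>

    int main(void) {
        /* new user + UTS + mount namespaces for this process only
           (assumes unprivileged user namespaces are enabled) */
        if (unshare(CLONE_NEWUSER | CLONE_NEWUTS | CLONE_NEWNS) != 0) {
            perror("unshare");
            return 1;
        }

        /* the hostname change is visible only inside the new UTS namespace */
        const char *name = "not-a-container";
        if (sethostname(name, strlen(name)) != 0)
            perror("sethostname");

        struct utsname u;
        uname(&u);
        printf("hostname in here: %s\n", u.nodename);

        /* everything else (unpacking an image, pivot_root, pid/net namespaces,
           cgroups, seccomp profiles) is what docker/podman layer on top */
        return 0;
    }

Run it as a regular user and the hostname change never escapes that one process; repeat the trick for pid, net, and mount namespaces, add cgroups and an image format, and you have roughly what the runtimes package up.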
OCI container tooling is much easier to use and follows the "cattle, not pets" philosophy. When you're deploying on multiple systems and want easy updates, reproducibility, and mature tooling, you use OCI containers, not LXC or FreeBSD jails. FreeBSD jails can't hold a candle to the ease of use and developer experience OCI tooling offers.
> To solve the distribution and isolation problem, Linux engineers built a set of kernel primitives (namespaces, cgroups, seccomp) and then, in a very Linux fashion, built an entire ecosystem of abstractions on top to “simplify” things.
This was an intentional design decision, and not a bad one! cgroups, namespaces, and seccomp are used extensively outside of the container abstraction (see flatpak, systemd resource slices, firejail). By not tying process isolation to the container abstraction, we let non-container applications benefit from them. We also get a wide breadth of container runtime choices.
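Agreed, and it goes further than flatpak/firejail: any process can sandbox itself with seccomp and no container tooling anywhere in sight. A throwaway C sketch of my own, using the kernel's old strict mode just to keep it short:

    /* toy sketch: seccomp without any container.
       SECCOMP_MODE_STRICT leaves only read/write/_exit/sigreturn available. */
    #include <fcntl.h>
    #include <linux/seccomp.h>
    #include <string.h>
    #include <sys/prctl.h>
    #include <unistd.h>

    static void say(const char *msg) { write(1, msg, strlen(msg)); }

    int main(void) {
        say("before filter: any syscall allowed\n");

        if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0)
            return 1;

        say("write() still allowed\n");
        open("/etc/passwd", O_RDONLY);   /* not in the allowed set: the kernel kills us here */
        say("never reached\n");
        return 0;
    }

flatpak (via bwrap), firejail, and systemd's SystemCallFilter= build real seccomp-BPF filters rather than using strict mode, but the primitive is the same, and none of it requires a container runtime.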
I still see FreeBSD as being great for things like networking devices and storage controllers. You can apply a lot of the "cattle vs pets" design one level above that using VMs and orchestration tools.
Spawning a Linux container is much simpler and faster than spawning a FreeBSD jail.
I don't know why I keep hearing about jails being better; they clearly aren't.
I ran a whole company on top of FreeBSD back in the day (2005-ish). It was great, and I ran all my personal PCs the same way (hell, refusing to install Windows just to try out this Bitcoin idea still seems like a good idea, even now).
But somehow Linux still took over my personal and professional life.
Going back seems nice but there needs to be a compelling reason: Docker is fine, the costs don't add up any more. I don't have a real logical argument beyond that.
Yeah, I have a similar situation; FreeBSD is a great operating system, but the sheer amount of investment in Linux makes all the warts semi-tolerable.
I'm sure some people have a sunk-cost feeling with Linux and will get defensive about this, but ironically this was exactly the argument I had heard 20 years ago, and I was defensive about it myself then. This has only become more true, though.
It's really hard to argue against Linux when even architecturally poor decisions are papered over by sheer force of will and investment; so in a day-to-day context Linux is often the happy path even though the UX of FreeBSD is more consistent over time.
I know this comment is effectively a side tangent on a side tangent, but that was always the strangest thing to me as well. I remember in 2012 when I was debating fiddling around with Bitcoin; that was one of the things that turned me off. I was sure there was no way something as brilliant as this was supposed to be had been developed by a Windows user.
Which surely says something about all these ideological purity tests
Windows developers (like sysadmins) are of two kinds in my experience.
People who don't understand shit about how the system behaves and are comfortable with that. "I install a package, I hit the button, it works"
.. and
People who understand very deeply how computers work, and genuinely enjoy features of the NT Kernel, like IOCP and the performance counters they offer to userland.
What's weird to me is that the competence is bimodal; you're either in the first camp or the second. With Linux (plus BSD/Solaris, etc.) it's a lot more of a spectrum.
I've never understood exactly why this is, but it's consistent. There's no "middle-good" Windows developer.
Gamers tend to be somewhere in the middle though.
Is there any technical writeup which explains how the isolation exactly works, on containers and VMs? I have always heard the high level arguments of weak isolation, same kernel, etc but never the implementation details.
I was looking at TrueNAS CORE to see if it was a viable way to BSD-jail Linux containers. I'm really only doing this to get some protection from supply chain attacks, given I'm fairly promiscuous at git-clone-and-run-a-build. Before that I was aiming for the same with Bastille and had got to the give-up stage because it felt too fiddly to set up. This was a year ago. Maybe it's better now.
OpenVZ and Linux vserver are older than LXC and were commonly used, though they required a patched kernel.
https://ericfortis.com/blog/freebsd-jails-network-setup
I switched my startup's whole infra to FreeBSD a couple months ago. Found a use-after-free bug in a GNOME XSLT lib that Linux's memory management was just fine with but that FreeBSD properly refused. Other than that, smooth sailing; jails work great.
After IBM destroyed CentOS, all the Xorg politics nonsense, the list goes on with Linux, not interested. I just want something quiet and boring and stable and correctly designed. NetBSD would be my first choice but they don’t get the $ they need for drivers.
Done the same since circa 2018, never looked back.
For a while we even used it on the desktop, but it was too much trouble due to specific tools we need that weren't supported properly, so we're using Linux on the desktop.
FreeBSD is stable, lightweight, gets out of the way, and without drama.
I do follow the news cycle, and if I'm hearing about a software package in it, something is wrong with the people making the software and I don't trust them. Software is an engineering discussion, or at least it's supposed to be. Here are my community guidelines: everyone be nice and respectful, engage in good faith, and focus on the math. Being social is fine so long as it doesn't become a diversion from the engineering discussion. We're talking about code, not a philosophical treatise. There are civil ways to settle disagreements. I'm so sick to death of the politics.
Uh... Xorg is packaged by FreeBSD too...
Really the whole theme that (from the article) "FreeBSD ships as a complete, coherent OS" is belied by this kind of nonsense. No, it's not. Or, sure, it is, but in exactly the same way that Debian or whatever is. It's a big soup of some local software and a huge ton of upstream dependencies curated for shipment together. Just like a Linux distro.
And, obviously, almost all those upstream dependencies are exactly the same. Yet somehow the BSD folks think there's some magic to the ports stuff that the Linux folks don't understand. Well, there isn't. And honestly, to the extent there's a delta in packaging sophistication, the Linux folks tend to be ahead (cf. Nix, for example).
The key thing is that on FreeBSD you do not risk bricking your system by installing a port, even though this guarantee has become less true with PkgBase.
> The key thing is that on FreeBSD you do not risk bricking your system by installing a port
What specifically are you trying to cite here? Which package can I install on Debian or Fedora or whatever that "bricks the system"? Genuinely curious to know.
I think you missed the point in my original comment. I explained I moved my platform with all dependencies and had 1 bug which was actually a silent bug in Linux.
In other words, it works. Your particular stack might have a different snag profile but if I can move my giant complex app there, yours is worth a shot.
FreeBSD is more complete than you make out. They also have hard working ports maintainers.
I am not quite sure what this means. I had a jail a few years ago and I remember there was a utility to "back up" the jail so you could put it on another system. Are there constraints with that utility? It seemed to work; maybe I am forgetting something?
In any case I still think Jails are much better than the things Linux has. To me, it is creating a jail that is more difficult. There were ports that made it easier, I used one of them, but that port was abandoned at some point. I think it was "ezjail".
Is it really necessary to put up a Vercel Security Checkpoint that apparently isn't working here?
Getting the same thing, "Failed to verify your browser. Code 11". Some noise about WebGL in the browser console, getExtension() invoked on a null reference. LibreWolf on Linux + resist fingerprinting.
Maybe opting for a better-written WAF could boost the reach?
> FreeBSD is worth a brief aside here, because it differs from Linux in a fundamental way. Linux is a kernel. What most people call "Linux" is actually that kernel combined with a GNU userland, a package ecosystem, and a set of choices that vary from distro to distro — Ubuntu, Fedora, and Arch are all running the same kernel but are meaningfully different systems underneath.
It is not incorrect but ... do people really care about that distinction?
Because in most situations I know of, when people refer to Linux, they almost never refer to the Linux kernel. They refer to the whole operating system stack, which is typically put together by a distribution. So Fedora, Gentoo, Arch, and so forth are all "kind of" Linux. Barely anyone refers to the Linux kernel if you look at all the discussions on the world wide web.
> FreeBSD ships as a complete, coherent OS
The BSDs often promote that, along the lines of "Linux is chaos, we are a coherent and consistent operating system following intelligent design". Well ... this is the rise of worse is better, repeated: https://dreamsongs.com/WorseIsBetter.html
It is a great analogy that works on so many levels. Broken down to Linux versus the BSDs, I think 500 out of 500 top supercomputers running Linux kind of show which philosophy is better. The one that works better. That does not mean the BSDs are useless, but I am getting tired of the promo used by the BSD as "we are order, Linux is chaos". I compare this more to Lego building blocks. With Linux there is a stronger focus on having building blocks available. You can build up things. You have projects such as LFS/BLFS (Linux from scratch). The BSDs do not have something comparable. Which operating system is the better tinker OS? Which community created git? (Ok ok that was Linus so not really a community per se, but it originated from Linux and perhaps that was not an accident either.)
> FreeBSD pioneered the practical implementation of what we now call containers.
Ok great. Many modern programming languages learned from older languages; many of these older languages are dead now. You need to keep on innovating. Why is BSD so dead set on the past?
> FreeBSD reached that third stage in 2000. Linux wouldn't get there until 2008 with LXC.
Dumdedum ... it kind of sounds as if the FreeBSD guys are sad that Linux went on to dominate. It reminds me of NetBSD aka "we work on every toaster in the world". Then suddenly on a mailing list many years ago "wait a moment ... Linux now works on more toasters than we do". The BSDs don't seem to understand how momentum can be dominating.
> Technical superiority doesn't win ecosystem wars. Linux won through a combination of fast decisions, the viral GPL licence, and strong enterprise backing from Red Hat and IBM. Then Google, Facebook, and Amazon happened — hungry for datacenters, developing tools to manage growing infrastructure at scale. They set the direction for the entire industry.
Ok, that is flat out incorrect. First, the GPL worked well for the Linux kernel, that is true. But the ecosystem includes many BSD-licensed programs too, on Linux, so that explanation already fails here. LLVM has the Apache License 2.0, which I kind of feel is a mix between GPL and BSD (not quite true, but this is how I remember it).
Then the claim is Linux won because of Red Hat. I actually find Red Hat annoying and I am glad to not depend on it. Linux is way bigger than Red Hat. IBM? I don't see what IBM did for Linux really. So that explanation also does not work.
Google, Facebook, and Amazon - well, they profited from Linux. They didn't really ENABLE Linux. They would not have used Linux if Linux would have been useless. So that part came afterwards.
So none of those explanations really work well here.
> Linux rapidly went from "the free OS for people who can't afford commercial licences" to "the only acceptable OS for servers".
That is true but not for the claims made, e. g. "because of Google". The more important question is: why did the BSDs fail?
> To solve the distribution and isolation problem, Linux engineers built a set of kernel primitives (namespaces, cgroups, seccomp) and then, in a very Linux fashion, built an entire ecosystem of abstractions on top to “simplify” things
No, that is also incorrect. cgroups are also very different to seccomp and the latter is even maintained independently: https://github.com/seccomp/libseccomp/releases
> Somehow we ended up with an overengineered mess of leaky abstractions for cloud-based, vendor-locked infrastructure.
Wait a moment - he cites Docker. That's owned by a private company. What does this have to do with Linux? If company xyz does something based on FreeBSD, we would then say company xyz is responsible for FreeBSD failing or not failing? How does that work?
> And this complexity has quietly reshaped how the industry thinks about deploying software. Today, if you want to run an application in a larger system, the implicit assumption is that you containerise it with Docker and orchestrate it with Kubernetes.
Personally I find all this abstraction crap. With all their failures, though, things such as Docker kind of present a "download this one file, then it will work fine" experience. And that is kind of true. I saw that in on-campus use for life science faculty clusters and whatnot. It simplifies things for the admin there. People give a similar rationale for systemd. Personally I don't think systemd should exist, but there are people who benefit from it; that simply is a factual statement.
All in all this is a very strange point of view from FreeBSD folks. At least the NetBSD folks back then on the mailing list acknowledged the situation and then tried to find alternative strategies, and in some ways succeeded (although I am not sure whether NetBSD right now runs on more toasters than Linux does; does anyone have updated statistics for that?).
>>I think 500 out of 500 top supercomputers running Linux kind of show which philosophy is better.
Or is it because it's what they're used to? I saw this argument elsewhere, where the respondent went on to show that the users were Linux specialists and that's why Linux was used.