> Docker is essentially a sandwich of disk images where you can shove absolutely anything, and then these images get executed by running whatever legacy software you’ve crammed in there, regardless of how horrific or inconsistent it might be, with zero behavioral controls.
Is this a problem with Docker, or a problem with people who use Docker? Without much controversy, I feel this argument could be made about literally any abstract entity in society, from `get_foo` to national institutions. Abstractions seem to be a necessary evil in cooperative societies.
Also, the article is way too short and meatless to provoke any real thoughts, and comes across as "If you don't already know what Kubernetes is, just know that it's bad". I don't see how that's going to sway anyone's opinion on anything.
caymanjim · 13h ago
It's not a problem with anything. Docker can be misused like anything else. It's a massive net win for security and maintainability because it completely eliminates barriers to updating software.
lmm · 12h ago
It's a problem with Docker. It's a real lowest-common-denominator interface, worse-is-better triumphing all over again. Heck, the fundamental flaw is essentially the same as make. Just when most people have finally accepted that a structured dependency/project management system is a better way to build your software than running random shell commands, we recapitulate the whole thing one level higher.
ffsm8 · 10h ago
But that's a pretty much entirely separate concern?
Docker is about creating an image that can then be run on any supporting Linux server. It's not aimed at dependency management, really. It's aimed at producing an artifact you can then test and deploy. Sure, in most languages dependency management is a core part of this process, but it doesn't have to be.
The most downloaded images are pretty fundamental and only include things that can be installed via a system package manager, e.g. nginx to get a web server, or node to get a Node.js runtime, etc.
This significantly improved the issue of things working only in certain environments, because each environment was actually running exactly the same way.
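For instance, the whole artifact can be as small as a hypothetical two-instruction Dockerfile on a stock base image (site path and tag made up); build it once, and the same image runs on a laptop, in CI, and on the server:

    # serve a static site from the stock nginx base image
    FROM nginx:1.27
    COPY ./site /usr/share/nginx/html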
Then came k8s, which tried to solve the issue of deploying across clusters of machines, PaaS. That's not a trivial problem to solve, and consequently is still hard, even with it. But if you need to deploy countless services across hundreds of nodes... It's ultimately a great tool.
As k8s became more and more widespread, it made sense to also use it for smaller deployments, because it solves a pretty fundamental issue you still have to solve either way, and if everyone is using it... it's easier to onboard new people.
lmm · 10h ago
> This significantly improved the issue of things working only in certain environments, because each environment was actually running exactly the same way.
It doesn't though, because it's still just running some random unmanaged commands. If you download a Dockerfile that's a couple of years old and hasn't been maintained, it probably won't work. If you download the built image then it might work for a bit longer, but you won't be able to make any changes to it, and it will still break sooner or later due to e.g. certificate expiry, or being built for the wrong CPU architecture.
> Then came k8s, which tried to solve the issue of deploying across clusters of machines, PaaS. That's not a trivial problem to solve, and consequently is still hard, even with it. But if you need to deploy countless services across hundreds of nodes... It's ultimately a great tool.
Declarative deployment definitely has a lot of value. But k8s shoots itself in the foot by being Docker-based, even as the world gradually moves on to better models (language-specific serverless deployments).
ffsm8 · 9h ago
You seem to have a fundamental misunderstanding about what Docker actually does, and how it achieves it.
> If you download a Dockerfile that's a couple of years old and hasn't been maintained, it probably won't work
The Dockerfile is not the produced artifact; it's merely the most common way to build the artifact.
The artifact is the image, which itself is basically just a tar archive of a Linux filesystem. It will behave the same no matter when you downloaded it.
And I might add: the goal you seem to have (making an image runnable forevermore) is not the goal of Docker, again. But if you made a new image, you can then test the exact same deployment in another environment. You do not need to e.g. run apt update on prod and pray to the gods that nothing breaks. Instead you build the image, deploy to test, see that nothing breaks and then deploy to prod.
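A minimal sketch of that flow, with a hypothetical registry and tag:

    docker build -t registry.example.com/myapp:1.4.2 .
    docker push registry.example.com/myapp:1.4.2
    # the exact same bytes are pulled in test, and later in prod
    docker run --rm registry.example.com/myapp:1.4.2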
Also, k8s isn't technically backed by the Docker backend anymore, and hasn't been for a pretty long time now. It currently uses containerd by default:
https://github.com/containerd/containerd
But supports other backends too:
https://kubernetes.io/docs/setup/production-environment/cont...
> You seem to have a fundamental misunderstanding about what Docker actually does, and how it achieves it.
On the contrary, I understand what it actually does (or tries to do) very well; you're hung up on what its proponents say it's meant to do (or perhaps retreat to in motte-and-bailey fashion).
> The artifact is the image, which itself is basically just a tar archive of a Linux filesystem. It will behave the same no matter when you downloaded it.
But it won't. It's architecture-specific, it may rely on other interfaces that are ultimately moving targets, and you'll face things like certificate expiration. And modifying it in place is not sustainable; you need to be able to rebuild it.
> if you made a new image, you can then test the exact same deployment in another environment. You do not need to e.g. run apt update on prod and pray to the gods that nothing breaks. Instead you build the image, deploy to test, see that nothing breaks and then deploy to prod.
That's not nothing, but it's something decent language-level deployment systems have had for years: build your structured app image, it's a stable artifact, and you can deploy it any number of times and places.
> But supports other backends too
But they're all using that same model of unpack some random unmanaged bytes and try to run them as a Linux executable, right? Yes you can swap out the implementation, but the interface is fundamentally at the wrong level of abstraction.
ffsm8 · 4h ago
I entered the tech industry in an ops role, which is why I have - from the perspective of most devs - a lot of experience with the kinds of issues k8s solves.
In this context, your responses are very puzzling. I don't disagree with the core of your opinion - there are languages around that solve the issue of sourcing language-specific dependencies for their own ecosystem a lot better than Docker ever could.
Some languages, such as Erlang/Elixir, also come with highly advanced and stable clustering basically built in, which solves running your own software on your own hardware in a distributed manner.
However, that's a completely different issue from what Docker and k8s solve.
From a devops-centered enterprise perspective: k8s basically replaced over-provisioned hypervisors running countless VMs with services running in namespaces, with no more need to over-commit RAM across multiple machines.
And Docker/OCI images replaced the VM images we were building before that.
In a less advanced enterprise, OCI replaced the dreaded "patch day" update process with a unified deployment pipeline that simply rebuilds the image every so often, giving you high confidence that the deployment will hit fewer ad-hoc issues than you'd have had to solve before.
I'm saying this from the perspective of a not particularly enthusiastic user of k8s.
I would suggest everyone first just run a single node, with high-availability failover if necessary... But once you're signing SLAs with high-availability guarantees and deploying hundreds of services per day, doing anything but k8s is a bad idea. You're not doing that with the built-in tooling of e.g. Erlang/Elixir either, because you're still missing all the other parts of your infrastructure.
jauntywundrkind · 12h ago
There are Distroless images! You can make a container with nothing but one executable in it. https://github.com/GoogleContainerTools/distroless has some helpers, but you can just put your Rust or Go program into an image & ship it! For when you really want to go super far, super hard against layer sandwiches.
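A minimal sketch of the idea with a hypothetical Go service (paths and names made up):

    # build stage: compile a fully static binary
    FROM golang:1.22 AS build
    WORKDIR /src
    COPY . .
    RUN CGO_ENABLED=0 go build -o /server ./cmd/server

    # final stage: nothing but the executable
    FROM gcr.io/distroless/static-debian12
    COPY --from=build /server /server
    ENTRYPOINT ["/server"]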
Personally, I think you're making awful tradeoffs that radically hamper your team's ability to act operationally, and showing them enormously little respect by handcuffing them like that.
I hope you have lots of help bringing out debug containers to apply the tooling your images are incapable of providing themselves. I beg you not to, but for the uptight, the freaks, the paranoids, and those who must go on griping about containers, it's really very easy to do! Anyhow.
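For the record, that escape hatch is roughly this, with hypothetical pod and container names:

    # attach an ephemeral debug container, with real tooling, to a running pod
    kubectl debug -it mypod --image=busybox:1.36 --target=app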
What's extra funny and sick is that some of this article valorizes VMs, which have a far, far worse bucket-of-bits, what-is-all-this-nonsense problem, one that requires far more scripting, tailoring, and configuration. Unless you are running bootc or zerg or unikernel stuff, it feels like VM'ers are in no moral position to look down on containers, my heavens!
This article has a couple of little gems in it here & there. You call it short, but it's really short on understanding, thought, concern, and perspective. It wrings the same blood from the stone again and again: complexity bad, Kubernetes complex, this is garbage, over and over. Without really having much to say, just insinuating, just trying to condescend you into belief. But there are a couple of good little gems here & there (pro-CORBA?! Wasm/WASI hype...).
nisa · 13h ago
This (ragebait-y/AI?) post kind of mixes things up. Kubernetes itself is fine, I think, but almost everything around it, and the whole consulting business, is the problem. You can build a single binary, put it in a single-layer OCI container, and run it with a single ConfigMap and a memory quota on 30 machines just fine.
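In sketch form, with hypothetical names and numbers, that whole setup is roughly one manifest:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      replicas: 30
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
          - name: myapp
            # single static binary in a single-layer OCI image
            image: registry.example.com/myapp:1.0
            envFrom:
            - configMapRef:
                name: myapp-config
            resources:
              limits:
                memory: 256Mi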
Take a look at the early Borg papers to see what problem it solves. Helm is just insane, but you can use jsonnet, which is modelled after Google's internal system.
Only use the minimal subset, and have an application that is actually built to work fine in that subset.
sidewndr46 · 13h ago
How does jsonnet relate to Helm? It appears to be a templating language.
nisa · 13h ago
You can use it with Tanka or kubecfg; Argo and Flux also have native support. You can also just render the YAML and use kubectl apply.
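E.g., without any of those tools, a plain-render sketch (file name hypothetical):

    # jsonnet emits JSON, which kubectl accepts just like YAML
    jsonnet main.jsonnet | kubectl apply -f -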
WD-42 · 12h ago
You claim to just “keep it simple” but then rattle off about 10 k8s-related tech nouns. No, I don’t think it’s simple.
happytoexplain · 12h ago
The granularity of the web stack is ruinous. Of course the opposite also has issues - e.g. iOS, a monolithic stack where there's one standard, and often good, tool for every common requirement. But I'd happily take even a semi-competent benevolent dictator over the insane multiplication of options for every single microscopic piece of the stack, each with its own obscure proper noun, phrase, or acronym.
nisa · 7h ago
Simple is not easy.
lixtra · 13h ago
Maybe thought-provoking. But sad to read AI garbage. It's easy to imagine a better world, but you also need to provide a way to reach it.
For example, a lot of things are wrong with Docker, but it enables us to run yesterday's software in the cloud. Tomorrow's software is not written yet.
busterarm · 13h ago
The better world looks like a pretty close approximation of the BEAM VM.
lambdasquirrel · 13h ago
The problem with this thesis is that there’s no other easy way to make microservices composed of different language stacks work well together, even if they are distributed-first. What happens when you jump from Elixir to Scala actors? It can’t easily be said that k8s is just an assembler of garbage.
Ultimately, the AI content is much like so much other content in this internet age: really good at convincing people who lack the background to call BS, but easily disproved by anyone with a basic background.
jauntywundrkind · 12h ago
The article feels so intolerant of complexity, disparaging again and again the complexity and accumulation of concerns.
But amid that there is this wild, sublime, hopeful optimism. This is maybe the only even slightly pro-CORBA piece any person or machine has written in probably 20 years now. It/they try to pitch WASI WebAssembly as the next coming, as some magical relief that may somehow, perhaps (entirely unspecified), cure the complexity issues of today.
And WASI is cross-language! You can make wasm modules in whatever language, throw them into your WIT world, hit go, and get a runtime that's running all your languages.
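For flavor, a hypothetical minimal WIT world (package and function names invented):

    package example:pipeline;

    world pipeline {
      // implementable from any language that compiles to a wasm component
      import log: func(msg: string);
      export transform: func(input: string) -> string;
    }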
> The problem with this thesis is that there’s no other easy way to make microservices composed of different language stacks work well together
This is actually far easier, because who cares what your microservices are written in? Every microservice exposes some HTTP service, with some schema (JSON Schema, OpenAPI, TypeSpec, GraphQL, whatever). One of the real, true gains of microservices is that language is (mostly) irrelevant.
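E.g., a hypothetical fragment of such a contract, consumable from any stack:

    openapi: 3.0.3
    info:
      title: orders
      version: "1.0"
    paths:
      /orders/{id}:
        get:
          parameters:
          - name: id
            in: path
            required: true
            schema:
              type: string
          responses:
            "200":
              description: one order, whatever language serves it
              content:
                application/json:
                  schema:
                    type: object
                    properties:
                      id:
                        type: string
                      total:
                        type: number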
lambdasquirrel · 3h ago
I don’t think WASI is going to make k8s (and its ecosystem) go away. All the other problems tackled by docker and k8s were not solved by things like GraphQL and protobuf.
Uehreka · 13h ago
The “Unpopular Opinion framing” is such a wonderful cheat code. You get to seem like an independent thinker for free, your argument gets bonus moral superiority points, and if anyone points out that your argument is actually very popular you get to claim it’s all a joke and mock them for taking you seriously (even though you actually mean what you’re saying).
turtlebits · 12h ago
The OP's stated ideal world is Erlang/Unison, WASI and OpenServerless. Why not champion those instead of being a contrarian? I'm always open to seeing what's better.
cwillu · 12h ago
Can we start marking predominantly AI submissions with an [AI] when it's not already obvious from the title?
remram · 12h ago
At least the article is honest about being a clickbaity AI-generated attack...
charles_f · 9h ago
I remember a world where you had to manually connect to a physical machine and copy and paste DLLs around to deploy stuff. A batch script (that occasionally worked) was a great innovation when it came to deployment. Not that long ago I worked with a company where it took persuasion to stop the madness of manually deploying to "the cloud" from an IDE at night when the traffic was slower.
Kubernetes is borderline painful, because it's specific, you need to learn it, you'll get burnt a bunch of times, and unlike a VM you can't just connect and reboot. And, like Javascript, it has this tendency to have breaking changes every 9 months. But if you're doing anything at some scale, it's also pretty good once it's set up. If you're not, and what you're looking for is simplicity, go for one of the functions/Lambda/Heroku alternatives. Yes, lock-in, but also not that much if you abstain from leaning into the proprietary too much. Which brings me to this:
> Kubernetes has also created a powerful lock-in effect
What? Isn't lock-in to Kubernetes the same lock-in you have with your programming language, databases, and architecture decisions at large? I mean, if you're using one of those proprietary cloud resources, sure, but when just relying on the orchestration, it actually provides the opposite.
kkfx · 10h ago
The cloud is someone else's computer, and the alternative deployment solutions are declarative systems like NixOS or Guix, finally adopted by a large community, instead of wasting resources and enlarging attack surfaces with container tech in general.
The underlying core problem is that OSes are not a single user-programmable application where any "program" is just a set of functions, like the original Smalltalk workstations or Lisp machines. Those are hard to evolve, but much quicker to experiment with, and they give ground for much quicker innovation instead of wasting resources on giant boilerplate code and walled gardens.
Another underlying core problem is hardware, where vendors in 2025 still do not care about lights-out management on anything. These days an IP-KVM should, and must, be built into any computer.
jauntywundrkind · 12h ago
What a diseased, miserable propaganda hit job: "by AI", and also, they confess, sort of not.
I actually like a lot of the hopes & wants expressed. Yes, WASI has huge promise to be a much better CORBA, one that I too am incredibly excited to watch develop, with hope. But it's not here; WASI preview2 is much better, but is still strictly sync-only, with an even more bare-bones-than-Rust (!) "poll" system (do all of the actual work now, don't poll for completion).
I'm trying to cool my jets. So much feels not just biased, but outright idiotic & willing to lie like crazy. I can see this hate being so popular, such a dark side fan base lapping up the disdain that drips off such a lopsided ungraceful hatchet job.
> whatever random stuff you decided to put inside it!" The only reason this approach gained traction is because it saves upload space compared to creating full virtual machines. That’s it.
Generously, this is written by an ignorant idiot: best case. It's hard to credit the AI/person with even ignorance. Everyone in the last 5 years, at least, ought to be damned well aware that containers have pretty real advantages versus VMs.
Just talking purely about how your machine runs VMs vs containers, there are so many wins. Considerably less CPU & latency overhead, because you only need to run one kernel rather than dozens or hundreds which are all trying to schedule but don't actually have scheduling authority. And that one kernel can allocate memory more efficiently and potentially share page cache across images, radically improving both memory and cache utilization.
Manageability is so much better too. There are probably some attempts out there at base VMs, but a full OS tends to have a lot of corp config and twiddling. Whereas you can grab a node or python or whatever base image, built on a nice slim OS or distroless (no starter required, but https://github.com/GoogleContainerTools/distroless is a resource), and add just a little more of your stuff on top.
The profile of what needs to be managed with containers is much, much smaller; the host and your container runtime provide much of what a VM would need configured on it.
It's stunning to find this variety of simple, uninformed rage bait that wouldn't have passed muster 5 years ago! Yes, the convenience of layers pulling in what you want is high. But that's 1. good, and 2. also backed by many significant technical wins vs VMs.
> From this fundamentally broken concept of componentization, we’ve constructed the entire Kubernetes ecosystem, which exists primarily to solve the problem of managing these semantically meaningless horrors by aggregating all possible kinds of digital garbage into something that resembles a functioning system.
Like, alas, yes maybe? Actually running modern services requires an amalgamation of more than just code.
Maybe we are just talking about the container itself, in which case this is absurd. You can fill a container up with whatever. I think VMs are a much worse bucket-of-bits problem, usually without simple Dockerfiles to explain themselves. The container can be extremely minimal, again, as Distroless shows. Containers feel far less of an offense than what happens with modern package management, which companies typically just don't do a good job of at all.
But more generously, I think we can take this snipe as a bigger view, about Kubernetes' breadth and scope and all the other pieces.
And there, it is a ridiculously, preposterously one-sided critique, one that is 100% locked onto the thing developers are closest to (code) and ignores everything else that's cloud.
Perhaps, yes, there are better ways to send code to systems. Perhaps this is far more complex than what you'd want just to send up code.
But what makes Kubernetes popular and different from the many "ways to run code" whose corpses decay along the path of history is that none of them provided much other than code. I talked about what containers provide versus needing to configure VMs, and that goes 5x for Kubernetes. You can pass in secrets, pass in your various AWS resources with ACK, pass in databases from postgres-operator, pass in storage, pass in fancy networking with Multus, and now with new Kube you can pass in GPUs.
Kube provides. It's like dependency injection for computing (rather than code) at large. Your manifest asks for some stuff it needs, and the platform makes it available. This is such a powerful, distinguishing advancement over where we were. And it's so extensible, so readily able to model and provide an endless variety of manageability across the whole breadth of cloud stuff - cloud stuff that is in no way unique to Kubernetes. This criticism isn't against Kubernetes; it's against any system design that depends on anything. It's criticism that would hate CloudFormation and Terraform at least as bitterly for daring to let your service have dependencies.
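A sketch of that "ask, and the platform provides" shape, with all names hypothetical:

    apiVersion: v1
    kind: Pod
    metadata:
      name: myapp
    spec:
      containers:
      - name: myapp
        image: registry.example.com/myapp:1.0
        env:
        - name: DB_PASSWORD            # the platform injects the secret
          valueFrom:
            secretKeyRef:
              name: db-creds
              key: password
        volumeMounts:
        - name: data
          mountPath: /var/lib/myapp
      volumes:
      - name: data                     # the platform binds the storage
        persistentVolumeClaim:
          claimName: myapp-data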
> Deploying a system on Kubernetes is always a nightmare of managing a collection of unhinged components that can explode at any moment for any reason.
So is the real world. So is running things.
Having a consistent place where everything is managed via similar tools is a great relief, versus every system blowing up in its own way. Versus every dependency having its own bespoke operational paradigm.
And it heals so much better! Kubernetes's biggest problem is that it's so damned autonomic, it self-heals so well, that we never even get the intimate, gory, bloody experience we used to have to claw through. Teams aren't setting up their own Postgres HA and backup and replication by hand over multiple sprints. They pull down a very professional, very tested operator, and the controller works; it operates the cluster across a huge range of failures incredibly well. We are bad at debugging and understanding Kubernetes because it mostly works fantastically robustly without us.
> How do we create software components for distributed computing that can be easily deployed in the cloud?
And,
> And here’s the kicker: universal components that work across all languages already exist. Look at WASI (WebAssembly System Interface), components built on WebAssembly that provide a platform-independent, efficient format that any language can compile to.
I talked to this some. I too have huge hopes for WASI. I want that future so, so bad. But it's not here yet (in any meaningfully usable async fashion), and it's speculative that it's all going to work well someday (fingers very crossed, I want it). Watch a Luke Wagner talk and get hype, get involved, find out! https://youtu.be/W3f8AAte0LM
But even when WASI is here, I am 100% going to be using wasmCloud (or something else) to manage WebAssembly runtimes, doing more and more, on Kubernetes, using the storage, config, networking, and secret primitives provided by the base Kubernetes platform, alongside other services like Postgres and NATS and Redis, which are managed by very good, very professional Kubernetes controllers.
The paradigm of Kubernetes brings incredible clarity to managing the sea of things, in a miraculously extensible, cross-type-of-thing way we have never enjoyed the luxury of before, with a paradigm of reliability we've rarely enjoyed.
Wasm will do a lot, but it's just code. We still ought to want, and be working on, management layers. There's so much value in the powerful platform-providing world Kubernetes advanced us toward, just as containers advanced us beyond VMs. Looking forward to more. Loving seeing the managed stuff.
The symptom is that the world is complex. Never adopting settled agriculture would have prevented all of these problems too, but that's not the sort of solution we actually want; we want the kind that lets us do amazing things and creates a mature, capable world. We have to straddle and cope with complexity. Simply retreating from it and calling it bad, as this article does again and again, is a retro-naivety that refuses to acknowledge that while complexity sometimes hurts, and while we say we want simplicity, we are an advanced civilization and we want the nice things that advanced civilizations come with. Pooh-poohing and scorning the complexity, while hip and edgy, isn't a sufficient, practical, or coherent line of thought that merits giving up like that. On we go.