I ditched Docker for Podman

782 points by codesmash | 454 comments | 9/5/2025, 11:56:59 AM | codesmash.dev

Comments (454)

ttul · 2h ago
Back in 2001/2002, I was charged with building a WiFi hotspot box. I was a fan of OpenBSD and wanted to slim down our deployment, which was running on Python, to avoid having to copy a ton of unnecessary files to the destination systems. I also wanted to avoid dependency-hell. Naturally, I turned to `chroot` and the jails concept.

My deployment code worked by running the software outside of the jail environment and monitoring the running processes using `ptrace` to see what files it was trying to open. The `ptrace` output generated a list of dependencies, which could then be copied to create a deployment package.

This worked brilliantly and kept our deployments small, immutable, and somewhat immune to attack -- not that attacks were as big a concern in 2001 as they are today. When Docker came along, I couldn't help but recall that early work and wonder whether anyone has done something similar to monitor file usage within Docker containers and trim them down to size after observing actual use.
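For reference, a rough modern sketch of the same idea uses strace rather than raw ptrace. The helper names and the target command below are illustrative, not the original tool:

```shell
# parse_opens: extract successfully opened file paths from strace
# output read on stdin (failed opens return -1 ENOENT and are dropped).
parse_opens() {
  grep -v 'ENOENT' | sed -n 's/.*openat([^,]*, "\([^"]*\)".*/\1/p' | sort -u
}

# trace_deps: run a command under strace (Linux only) and list the
# files it opened; the result can seed a minimal chroot package.
trace_deps() {
  strace -f -e trace=openat "$@" 2>&1 >/dev/null | parse_opens
}

# Example (hypothetical app): trace_deps ./myapp > deps.txt
```

Copying each path in deps.txt into the jail (e.g. with `install -D "$f" "/srv/jail$f"`) reproduces the original workflow.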

sroerick · 1h ago
The best CI/CD pipeline I ever used was my first freelance deployment using Django. I didn't have a clue what I was doing and had to phone a friend.

We set up a git post receive hook which built static files and restarted httpd on a git receive. Deployment was just 'git push live master'.
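Such a hook is only a few lines. A sketch, assuming a bare repo on the server with a work tree at /var/www/myapp and an httpd service (all paths and names illustrative, not from the original setup):

```shell
#!/bin/sh
# hooks/post-receive in the bare repo on the server.
# Check out the pushed branch into the web root.
GIT_WORK_TREE=/var/www/myapp git checkout -f master
cd /var/www/myapp || exit 1
./manage.py collectstatic --noinput   # build static files (Django)
sudo systemctl restart httpd          # restart the web server
```

On the dev box, `git remote add live user@server:/srv/git/myapp.git` makes `git push live master` the entire deploy.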

While I've used Docker a lot since then, that remains the single easiest deployment I've ever had.

I genuinely don't understand what docker brings to the table. I mean, I get the value prop. But it's really not that hard to set up http on vanilla Ubuntu (or God forbid, OpenBSD) and not really have issues.

Is the reproducibility of docker really worth the added overhead of managing containers, docker compose, and running daemons on your devbox 24/7?

rcv · 7m ago
> I genuinely don't understand what docker brings to the table. I mean, I get the value prop. But it's really not that hard to set up http on vanilla Ubuntu (or God forbid, OpenBSD) and not really have issues.

Sounds great if you're only running a single web server or whatever. My team builds a fairly complex system that comprises ~45 unique services. Those services are managed by different teams with slightly different language/library/etc needs and preferences. Before we containerized everything it was a nightmare keeping everything in sync and making sure different teams didn't step on each other's dependencies. Some languages have good tooling to help here (e.g. Python virtual environments) but it's not so great if two services require a different version of Boost.

With Docker, each team is just responsible for making sure their own containers build and run. Use whatever you need to get your job done. Our containers get built in CI, so there is basically a zero percent chance I'll come in in the morning and not be able to run the latest head of develop because someone else's dev machine is slightly different from mine. And if it runs on my machine, I have very good confidence it will run on production.

IanCal · 2m ago
Managing and running some containers is really easy though. And running daemons? Don’t we all have loads of things running all the time?

I find it easier to have the same interface for everything, where I can easily swap around ports.

Shog9 · 1h ago
Reproducibility? No.

Not having to regularly rebuild the whole dev environment because I need to work on one particular Python app once a quarter and its build chain reliably breaks other stuff? Priceless.

janjongboom · 1h ago
This false sense of reproducibility is why I founded https://docs.stablebuild.com/ some years ago. It lets you pin stuff in Dockerfiles that is normally unpinnable, like OS package repos, Docker Hub tags, and random files on the internet. So you can go back to a project a year from now and actually get the same container back again.
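The usual partial mitigation without such a service is digest pinning, which freezes the base image but not the package repos. A sketch (the digest is a placeholder; the package is illustrative):

```dockerfile
# A tag like python:3.11-slim can be repointed to new bytes at any
# time; an image referenced by digest cannot change.
FROM python:3.11-slim@sha256:<digest-copied-from-your-registry>

# apt still resolves against a moving repo, so this line can yield
# different package versions a year apart; that is the gap services
# like StableBuild aim to close.
RUN apt-get update && apt-get install -y --no-install-recommends curl
```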
jselysianeagle · 45m ago
Isn't this problem usually solved by building an actual image for your specific application, tagging it, and pushing it to some Docker repo? At least that's how it's been at places I've worked that used Docker. What am I missing?
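A minimal version of that workflow, with a hypothetical registry, name, and tag (requires a Docker daemon and push access):

```shell
# Build once in CI, tag immutably, push; anyone can then pull back
# the exact same bytes.
docker build -t registry.example.com/myapp:1.4.2 .
docker push registry.example.com/myapp:1.4.2
```

This freezes the built artifact, but it doesn't make the Dockerfile itself reproducible: rebuilding it a year later may produce a different image, which is the gap the parent comment is pointing at.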
bolobo · 1h ago
> I genuinely don't understand what docker brings to the table. I mean, I get the value prop. But it's really not that hard to set up http on vanilla Ubuntu (or God forbid, OpenBSD) and not really have issues.

For me, as an ex-ops person, the value proposition is being able to package a complex stack made of one or more databases, several services, and tools (ours and external), plus describe the interface of these services with the system in a standard way (env vars + mount points).

It massively simplifies the onboarding experience, makes updating the stack trivial, and also allows devs, CI, and prod to run the same versions of all the libraries and services.

roozbeh18 · 1h ago
Someone wrote a PHP7 script to generate some of our daily reports a while back that nobody wants to touch. Docker happily runs the PHP7 code in a container and generates the reports on any system. It's portable and doesn't require upkeep.
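That pattern is a one-liner; the script name below is hypothetical, though the php:7.4-cli image is a real Docker Hub tag:

```shell
# Run a legacy PHP 7 script without installing PHP on the host:
# mount the current directory into the container and execute there.
docker run --rm -v "$PWD":/app -w /app php:7.4-cli php generate_reports.php
```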
bmgoau · 1h ago
First result on Google, 22k stars https://github.com/slimtoolkit/slim
t43562 · 8h ago
To provide 1 contrary opinion to all the others saying they have a problem:

Podman rocks for me!

I find docker hard to use and full of pitfalls and podman isn't any worse. On the plus side, any company I work for doesn't have to worry about licences. Win win!

nickjj · 8h ago
> On the plus side, any company I work for doesn't have to worry about licences. Win win!

Was this a deal breaker for any company?

I ask because the Docker Desktop paid license requirement is quite reasonable. If you have less than 250 employees and make less than $10 million in annual revenue it's free.

If you have a dev team of 10 people and are extremely profitable to where you need licenses, you'd end up paying $9 a year per developer for the license. So $90/year for everyone, but if you have US developers your all-in payroll is probably going to be over $200,000 per developer, or roughly $2 million. In that context $90 is practically nothing. A single lunch for the dev team could cost almost double that.

To me that is a bargain, you're getting an officially supported tool that "just works" on all operating systems.

csours · 7h ago
Companies aren't monoliths, they're made of teams.

Big companies are made of teams of teams.

The little teams don't really get to make purchasing decisions.

If there's a free alternative, little teams just have to suck it up and try to make it work.

---

Also consider that many of these expenses are borne by the 'cost center' side of the house, that is, the people who don't make money for the company.

If you work in a cost center, the name of the game is saving money by cutting expenses.

If technology goes into the actual product, the cost for that is accounted for differently.

citizenpaul · 1h ago
It always amazes me how hostile most large companies are to paying for developer tools that have a trivial cost. Then they will approve the budget for some "yay, quarterly profits" party no one cares about that costs $100k for the venue rental alone.

I do understand that this is mostly because management wants staff to be replaceable and disposable; having specialty tools suggests that a person can be unique.

akerl_ · 8h ago
The problem isn’t generally the cost, it’s the complexity.

You end up having to track who has it installed. Hired 5 more people this week? How many of them will want docker desktop? Oh, we’ve maxed the licenses we bought? Time to re-open the procurement process and amend the purchase order.

nickjj · 8h ago
A large company who is buying licenses for tools has to deal with this for many different things. Docker is not unique here.

An IT department for a company of that size should have ironed out workflows and automated ways to keep tabs on who has what and who needs what. They may also be under various compliance requirements that expect due diligence to happen every quarter to make sure everything is legit from a licensing perspective.

Even if it's not automated, it's normal for a team to email IT / HR with new hire requirements. Having a list of tools that need licenses in that email is something I've seen at plenty of places.

I would say there's lots of other tools where onboarding is more complicated from a license perspective because it might depend on if a developer wants to use that tool and then keeping tabs on if they are still using it. At least with Docker Desktop it's safe to say if you're on macOS you're using it.

I guess I'm not on board with this being a major conflict point.

Aurornis · 5h ago
> An IT department for a company of that size should have ironed out workflows and automated ways to keep tabs on who has what and who needs what. They may also be under various compliance requirements that expect due diligence to happen every quarter to make sure everything is legit from a licensing perspective.

Correct, but every additional software package and each additional license adds more to track.

Every new software license requires legal to review it.

These centralized departments add up all of the license and SaaS costs and it shows up as one big number, which executives start pushing to decrease. When you let everyone get a license for everything they might need, it gets out of control quickly (many startups relearn this lesson in their growth phase)

Then they start investigating how often people use software packages and realize most people aren't actually using most software they have seats for. This happens because when software feels 'free' people request it for one-time use for a thing or to try it out and then forget about it, so you have low utilization across the board.

So they start making it harder to add new software. They start auditing usage. They may want reports on why software is still needed and who uses it.

It all adds up. I understand you don't think it should be this way, but it is at big companies. You're right that the $24/user per month isn't much, but it's one of dozens of fees that get added, multiplied by every employee in the company, and now they need someone to maintain licenses, get them reviewed, interact with the rep every year, do the negotiation battles, and so on. It adds up fast.

oooyay · 3h ago
> Correct, but every additional software package and each additional license adds more to track.

This is going to differ company to company but since we're narrowing it to large companies I disagree. Usually there's a TPM that tracks license distribution and usage. Most companies provide that kind of information as part of their licensing program (and Docker certainly does.)

> Every new software license requires legal to review it.

Yes, but this is like 90% of what legal does - contract review. It's also what managers do but more on the negotiation end. Most average software engineers probably don't realize it but a lot of cloud services, even within a managed cloud provider like AWS, require contract and pricing negotiation.

> These centralized departments add up all of the license and SaaS costs and it shows up as one big number, which executives start pushing to decrease. When you let everyone get a license for everything they might need, it gets out of control quickly (many startups relearn this lesson in their growth phase)

As I said earlier, I can't speak for other companies, but at large companies I've worked at this simply isn't true. There are metrics for when the software isn't being used, because the corporation is financially incentivized to shrink those numbers or consolidate on software that achieves similar goals. They're certainly individually tracked fairly far up the chain, even if they do appear as one big number somewhere.

akerl_ · 8h ago
Idk what to tell you other than that it is.

Large companies do have ways to deal with this: they negotiate flat rates or true-up cadences with vendors. But now you’ve raised the bar way higher than “just use podman”.

dec0dedab0de · 6h ago
It becomes a pain point when the IT team never heard of docker, all new licenses need to be approved by the legal department, and your manager is afraid to ask for any extra budget.

Also, I don't want to have to troubleshoot why the docker daemon isn't running every time I need it

regularfry · 4h ago
I'll see your "IT team never heard of Docker" and raise you "security wants to ban local containers because they allow uncontrolled binaries onto corporate hardware". But that's not something Podman solves...
mgkimsal · 4h ago
Every single developer is running 'uncontrolled source code' on corporate hardware every single day.
cyberpunk · 1h ago
The defence isn't against malicious developers writing evil code, but some random third-party container launched via a curl | bash which mounts ~/ into it and posts all your ssh keys to some server in China. Or whatever.

Or so I was told when I made the monumental mistake of trying to fight such a policy once.

So now we just have a don't ask don't tell kind of gig going on.

I don't really know what the solution is, but dev laptops are goldmines for haxxors, and locking them down stops them from really being dev machines. shrug

0cf8612b2e1e · 5h ago
I have personally given up trying to get a $25 product purchased through official channels. The process can make everything painful.
johannes1234321 · 3h ago
Congrats, the process fulfilled its purpose. Another small cost saved :)
0cf8612b2e1e · 3h ago
Trust me, the thought crossed my mind. They definitely beat me.
reaperducer · 5h ago
It becomes a pain point when the IT team never heard of docker

Or when your IT department is prohibited from purchasing anything that doesn't come from Microsoft or CDW.

axlee · 5h ago
>It becomes a pain point when the IT team never heard of docker

Where do you work ? Is that even possible in 2025?

dec0dedab0de · 51m ago
I work at a cool place now that is well aware of it, but in 2023 I worked at a very large insurance company with over a thousand people in IT. Some of the gatekeepers were not aware of docker. Luckily another team had set up Openshift, but then the approval process for using it was a nightmare.
cyberpunk · 1h ago
'Corp IT' in a huge org is typically all outsourced MCSEs who are seemingly ignorant of every piece of technology outside of Azure.

Or so it seems to me whenever I have to deal with them. We ended up with Microsoft defender on our corp Macs even.. :|

anakaine · 1h ago
It's absolutely possible. We've also had them unaware of GitHub, and had them label Amazon S3 as a risk specifically because it wasn't Microsoft.

There is no bottom to the barrel, and incompetence and insensitivity can rise quite high in some cases.

tracker1 · 4h ago
Apparently they work in the past...
Dennip · 7h ago
Not sure about Docker Desktop's specifics, but large companies usually have enterprise/business licensing available and specifically do not deal with this manually, and do not want to, because they can use SSO and dynamically assign licenses to user groups, etc.
unethical_ban · 5h ago
>An IT department for a company of that size should have ironed out workflows

I'm in IT consulting. If most companies could even get the basic best practices of the field implemented, I wouldn't have a job.

stronglikedan · 7h ago
Or just use Podman and don't worry about licenses, since it's just as good but sooo much easier.
reaperducer · 5h ago
Some day I hope to work for a company small enough that I can "just" use any software I feel like for whatever reasons I want.

But I have to feed my family.

worik · 1h ago
> I can "just" use any software I feel like for whatever reasons I want.

What could possibly go wrong?

zbrozek · 6h ago
Yeah all of that is a huge pain and fantastic to avoid.
reaperducer · 5h ago
An IT department for a company of that size should have ironed out workflows

The business world is full of things that "should" be a certain way, but aren't.

For the technology world, double the number.

We'd all like to live in some magical imaginary HN "should" world, but none of us do. We all work in companies that are flawed, and sometimes those flaws get in the way of our work.

If you've never run into this, buy a lottery ticket.

itsdrewmiller · 7h ago
You're arguing against a straw man here - no one but you used the term "dealbreaker" or "major" conflict point. It can be true that it is not a dealbreaker but still a downside.
worik · 1h ago
Not just large companies

OT because not docker

In the realm of artistic software (think Ableton Live and the Adobe suites) licensing hell is a real thing. In my recent experience it sorts the amateurs from the pros, in favour of the amateurs.

The time spent learning the closed system includes hours and dollars wrestling with licenses. Pain++. Not just the unaffordable price, but time that could be spent creating.

But for an aspiring professional it is the cost of entry. These tools must be mastered (if not paid for, ripping is common) as they have become a key part of the mandated tool chains, to the point of enshittification

The amateur is able to just get on with it, and produce what they want when they want with a dizzying array of possible tools

weberc2 · 2h ago
I'm of the opinion that large companies should be paying for the software they use regardless of whether it's open source or not, because software isn't free to develop. So assuming you're paying for the software you use, you still have the problem that you are subject to your internal procurement processes. If your internal procurement processes make it really painful to add a new seat, then maybe the processes need to be reformed. Open source only "fixes" the problem insofar as there's no enforcement mechanism, so it makes it really easy for companies to stiff the open source contributors.
akerl_ · 2h ago
So, I'm of two thoughts here:

1. As parallel commenters have pointed out, no. Plenty of open source developers exist who aren't interested in getting paid for their open source projects. You can tell this because some open source projects sell support or have donation links or outright sell their open source software and some do not. This line of thinking seems to come out of some utopian theoretical world where open source developers shouldn't sell their software because that makes them sell-outs but users are expected to pay them anyways.

2. I do love the idea of large companies paying for open source software they use because it tends to set up all kinds of good incentives for the long term health of software projects. That said, paying open source projects tends to be comically difficult. Large companies are optimized for negotiating enterprise software agreements with a counterparty that is primed to engage in that process. They often don't have a smooth way to like, just feed money into a Donate form, or make a really big Github or Patreon Sponsorship, etc. So even people in large companies that really want to give money to open source devs struggle to do so.

rlpb · 2h ago
> so it makes it really easy for companies to stiff the open source contributors

I don't think there's any stiffing going on, since the open source contributors knowingly contributed with a license that specifically says that payment isn't required. It is not reasonable for them to take the benefits of doing that but then expect payment anyway.

bityard · 2h ago
"stiff the open source contributors"

I'm not sure you realize that "open source" means anyone anywhere is free to use, modify, and redistribute the software in any way they see fit? Maybe you're thinking of freeware or shareware which often _do_ come with exceptions for commercial use?

But anyway, as an open source contributor, I have never felt I was being "stiffed" just because a company uses some software that I helped write or improve. I contribute back to projects because I find them useful and want to fix the problems that I run into so I don't have to maintain my own local patches, help others avoid the same problems, and because making the software better is how I give back to the open source community.

devjab · 7h ago
> You end up having to track who has it installed. Hired 5 more people this week? How many of them will want docker desktop? Oh, we’ve maxed the licenses we bought? Time to re-open the procurement process and amend the purchase order.

I don't quite get this argument. How is that different from any piece of software that an employee will want in any sort of enterprise setting? From an IT operations perspective it is true that Docker Desktop on Windows is a little more annoying than something like an Adobe product, because Docker Desktop users need their local user to be part of their local docker security group on their specific machine. Aside from that I would argue that Docker Desktop is by far one of the easiest developer tools (and do note that I said developer tools) to track licenses for.

In non-enterprise setups I can see why it would be annoying but I suspect that's why it's free for companies with fewer than 250 people and 10 million in revenue.

akerl_ · 7h ago
I touched on this in my parallel reply, but to expand on it:

The usual way that procurement is handled, for the sake of everybody's sanity, is to sign a flat-rate / tiered contract, often with some kind of true-up window. That way the team that's trying to buy software licenses doesn't have their invoices swinging up/down every time headcount or usage patterns shifts, and they don't have to go back to the well every time they need more seats.

This is a reasonably well-oiled machine, but it does take fuel: setting up a new enterprise agreement like that takes humans and time, both of which are not free. So companies are incentivized to be selective in when they do it. If there's an option that requires negotiating a license deal, and an option that does not, there's decent inertia towards the latter.

All of which is a long way to say: many large enterprises are "good" at knowing how many of their endpoints are running what software, either by making getting software a paperwork process or by tracking with some kind of endpoint management (though it's noteworthy that there are also large enterprises that suck at endpoint management and have no clue what's running in their fleet). The "hard" part (where "hard" means "requires the business to expend energy they'd rather not) is getting a deal that doesn't involve the license seat counter / invoice details having to flex for each individual.

Aurornis · 5h ago
You're right that it's no different than other software, but when you reach the point where the average employee has 20-30 different licenses for all the different things they might use, managing it all becomes a job for multiple people.

Costs and management grow in an O(n*m) manner where n is employees and m is numbers of licenses per employee. It seems like nothing when you're small and people only need a couple licenses, but a few years in the aggregate bills are eye-popping and you realize the majority of people don't use most of the licenses they've requested (it really happens).

Contrast this with what it takes for an engineer to use a common, free tool: They can just use it. No approval process. No extra management steps for anyone. Nothing to argue that you need to use it every year at license audit time. Just run with it.

maigret · 7h ago
> How is that different from any piece of software that an employee will want in any sort of enterprise setting?

Open source is different in exactly that, no procurement.

Finance makes procurement annoying so people are not motivated to go through it.

mgkimsal · 4h ago
That assumes that you can, in fact, install that software in the first place. "Developers" sometimes get a bit of a pass, but I've been inside more than a few companies where... no one could install anything at all, regardless of whether there was a cost. Requesting some software would usually get someone with too much time on their hands (who would also complain about being overworked) asking what you need, why you need it, why you didn't try something else, do you really need it, etc. In some scenarios the 'free' works against, because "there's no support". I was seeing this as late as 2019 at a company - it felt like being back in 1997.
nightpool · 3h ago
Cool. Then keep using Docker Desktop if you want to. That's not the situation most of the people in this thread are talking about though.
thinkingtoilet · 7h ago
Are you complaining about buying 5 licenses? It seems extremely easy to handle. It feels like sometimes people just want to complain.
almosthere · 7h ago
Everything is hard in a large company, and they have teams hired to manage procurement, so this is just you overthinking it.
malnourish · 7h ago
How often have you dealt with large org procurement processes? I've spent weeks waiting on the one person needed to approve something that cost less than something I could readily buy on my T&E card.
akerl_ · 7h ago
What a strangely hostile reply.
dboreham · 7h ago
Typically the team they hired is focused on you not procuring things.
akerl_ · 6h ago
I think a lot of this boils down to Procurement's good outcome generally being quite different than the good outcome for each team that wants a purchase.

To draw a parallel: imagine a large open source project with a large userbase. The users interact with the project and a bunch of them have ideas for how to make it better! So they each cut feature requests against the project. The maintainers look at them. Some of the feature requests they'll work on, some of them they'll take well-formed pull requests. But some they'll say "look, we get that this is helpful for you, but we don't think this aligns with the direction we want the project to go".

A good procurement team realizes that every time the business inks a purchase agreement with a vendor, the company's portfolio has become incrementally more costly. For massive deals, most of that cost is paid in dollars. For cheaper software, the sticker price is low but there's still the cost of having one more plate to juggle for renewals / negotiations / tracking / etc.

So they're incentivized to be polite but firm and push back on whether there's a way to get the outcome in another way.

(this isn't to suggest that all or even most procurement teams are good, but there is a kernel of sanity in the concept even though it's often painful for the person who wants to buy something)

wiether · 1h ago
> If you have a dev team of 10 people and are extremely profitable to where you need licenses you'd end up paying $9 a year per developer for the license.

It doesn't quite change your argument, but where have you seen $9/year/dev?

The only way I see a $9 figure is the $9/month for Docker Pro with a yearly sub, so it's 12*$9=$108/year/dev or $1080/year for your 10 devs team.

Also it should be noted that Docker Pro is intended for individual professionals, so you don't have collaboration features on private repos and you have to manage each licence individually, which, even for only 10 licences, implies a big overhead.

If you want to work as a team you need to take the Docker Team licence, at $15/month/dev on a yearly sub, so now you are at $1800/year for your 10 devs team.

Twenty times your initial figure of $90/year. Still, $1800 is not that much in the grand scheme of things, but then you still have to add the usual Atlassian sub, an Office365/GWorkspace sub, an AI sub... You can end up paying $200+/month/dev just in software licences, without counting the overhead of managing them.

ejoso · 7h ago
This math sounds really simple until you work for a company that is "profitable" yet constantly turning over every sofa cushion for spare change. Which describes most publicly traded companies.

It can be quite difficult to get this kind of money for such a nominal tool that has a lot of free competition. Docker was very critical a few years ago, but “why not use podman or containerd or…” makes it harder to stand up for.

troyvit · 5h ago
> I ask because the Docker Desktop paid license requirement is quite reasonable. If you have less than 250 employees and make less than $10 million in annual revenue it's free.

It is for now, but I can't think of a player as large as Docker that hasn't pulled the rug out from under deals like this. And for good reason: that deal is probably a loss leader, and if they want to continue they need to convert those free customers into paying ones.

dice · 7h ago
> Was this a deal breaker for any company?

It is at the company I currently work for. We moved to Rancher Desktop or Podman (individual choice, both are Apache licensed) and blocked Docker Desktop on IT's device management software. Much easier than going through finance and trying to keep up with licenses.

regularfry · 4h ago
Deal breaker for us too, now in my second org where that's been true.

It's not just that you need a licence now, it's that even if we took it to procurement, until it actually got done we'd be at risk of them turning up with a list of IP addresses and saying "are you going to pay for all of these installs, then?". It's just a stupid position to get into. The Docker of today might not have a record of doing that, but I wouldn't rule out them getting bought by someone like Oracle who absolutely, definitely would.

SushiMon · 3h ago
Were there any missing/worse functional capabilities that drove you over to Podman/alternatives? Or just the licensing / pricing?
orochimaaru · 6h ago
If you're an enterprise with a large engineering team that isn't a software company, you are a cost center. So anything related to developer tools is rarely funded. It will mostly be: use the free stuff and suck it up.

Either that, or you have a massive process to acquire said licenses, with multiple reporting requirements. So your manager doesn't need the headache and says just use the free stuff and move on.

I used to use docker. I use podman now. Are there teams in my enterprise who have docker licenses - maybe. But tracking them down and dealing with the process of adding myself to that “list” isn’t worth the trouble.

codesmash · 6h ago
The problem is not the cost. It's complexity. From a buyer's perspective, literally fighting with the procurement team is a nightmare.

And usually the need comes from someone below C-level. So you have to: convince your manager and their manager; convince the procurement team it has to be in the budget (and it's usually much easier to convince them to pay for a team dinner); then deal with the procurement team itself; then go through the vendor review process (or at least chase execution).

This is reality in all big companies that this rule applies to. It's at least a quarter project.

Once I tried to buy a $5k/yr software license. The Sidekiq founder told me (after two months of back and forth) that he was done and I'd have to pay by CC (which I, as a miserable team lead, didn't have).

jandrese · 3h ago
> Was this a deal breaker for any company?

It's not the money, it's the bureaucracy. You can't just buy software, you need a justification, a review board meeting, marketplace survey with explanations of why this particular vendor was chosen over others with similar products, sign off from the management chain, yearly re-reviews for the support contract, etc...

And then you need to work with the vendor to do whatever licensing hoops they need to do to make the software work in an offline environment that will never see the Internet, something that more often than not blows the minds of smaller vendors these days. Half the time they only think in the cloud and situations like this seem like they come from Mars.

The actual cost of the product is almost nothing compared to the cost of justifying its purchase. It can be cheaper to hire a full time engineer to maintain the open source solutions just to avoid these headaches. But then of course you get pushback from someone in management that goes "we want a support contract and a paid vendor because that's best practices". You just can't win sometimes.

DerArzt · 4h ago
I work at a fortune 250 and cost of the licence was the given reason for moving to podman for the whole org.
papageek · 1h ago
You need a compliance department and attorneys to look over licenses and agreements. It's a real hassle and not really related to cost of the license itself.
maxprimer · 2h ago
Even large companies with thousands of developers have budgets to manage, and oftentimes when the CTO/CIO sees free as an option, that's all that matters.
firesteelrain · 8h ago
We only run Podman Desktop if ever because for large companies it is cost prohibitive. We also found that most people don’t need *Desktop at all. Command line works fine
tclancy · 1h ago
Yes. I worked for a company with a few thousand developers and we swapped away from Docker one week with almost no warning. It was a memorable experience.
t43562 · 8h ago
I don't particularly care if it's worth it or not. I don't need to do it. Getting money for things is not easy in all companies.
k4rli · 7h ago
Docker Desktop is also (imo) useless and helps be ignorant.

Most Mac users I see using it struggle to see the difference between "image" and "container". Complete lack of understanding.

All the same stuff can easily be done from cli.

com2kid · 4h ago
> Most Mac users I see using it struggle to see the difference between "image" and "container". Complete lack of understanding.

Because they just want their software package to run and they have been given some magic docker incantation that, if they are lucky, actually launches everything correctly.

The first time I used Docker I had so many damn issues getting anything to work I was put off of it for a long time. Heck even now I am having issues getting GPU pass through working, but only for certain containers, other containers it is working fine for. No idea what I am even supposed to do about that particular bit of joy in my life.

> All the same stuff can easily be done from cli.

If a piece of technology is being forced down a user's throat, users just want it to work and get out of their way so they can get back to doing their actual job.

johnmaguire · 5h ago
I don't believe it's possible to run Docker on macOS without Docker Desktop (at least not without something like lima.) AFAIUI, Docker Desktop contains not just the GUI, but also the hypervisor layer. Is my understanding mistaken?
cduzz · 4h ago
It's pretty easy to run docker on macos -- colima[1] is just a brew command away...

It runs qemu under the hood if you want to run x86 (or sparc or mips!) instead of arm on a newer mac.

[1]https://formulae.brew.sh/formula/colima
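For reference, a minimal sketch of that setup (the `hello-world` image is just Docker's standard test image):

```shell
# Install colima plus the plain docker CLI (no Docker Desktop involved)
brew install colima docker

# Boot the Linux VM; colima exposes a Docker-compatible socket
colima start

# The standard docker CLI now talks to colima's VM
docker run --rm hello-world

# A separate x86_64 VM on Apple Silicon (emulated via qemu, so slower)
colima start --profile x86 --arch x86_64
```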

dakiol · 4h ago
I cannot run docker in macos without docker desktop. I use the cli to manage images, containers, and everything else.
j45 · 7h ago
Not everyone uses software the same way.

Not everyone becomes a beginner to using software the same way or the one way we see.

taormina · 4h ago
Yep! What startup has the goal of making less than $10 million in annual revenue? That sentence was absolutely a deal breaker for the CEO and CTO of our last company.

And since when has Docker Desktop "just worked"?

m463 · 1h ago
I hated the docker desktop telemetry. I remember it happened in the macos installer even before you got any dialog box
lucyjojo · 7h ago
for reference a jp dev will be paid around $50,000. most of the world will probably be in the 10k-50k range except a few places (switzerland, luxembourg, usa?).

atlassian and google and okta and ghe and this and that (claude code?). that eventually starts to stack up.

throwaway0236 · 31m ago
I think you are underestimating the pay in other "developed" countries, but you are right that US pay is much higher than any other country (at least in Silicon Valley)

You do have a valid point in that many HN commentators seem to live in a bubble where spending thousands of dollars on a developer for "convenience" is seen as a non-brainer. Usually they work in companies that don't make a profit, but are paid for by huge VC investments. I don't blame them, as it is a valid choice given the circumstances. If you have the money, why not? But they may start thinking differently if the flow of VC money starts to slow down.

It's similar to how some rich people buy a private jet. Their time is valuable, and they feel it makes sense (at least if you don't care about the environment).

I believe that frugality is actually the default mode of business, but many companies in SV are protected from the consequences by the VCs.

arunc · 5h ago
$90 is also like 1.5 hours of work that I would've spent debugging podman anyway. And I've spent more than a few hours every time podman breaks, to be honest.
tecleandor · 6h ago
I've seen a weird thing on their service agreement:

Use Restrictions. Customer and its Users may not and may not allow any third party to: [...] 10. Access the Service for the purpose of developing or operating products or services intended to be offered to third parties in competition with the Services[...]

Emphasis mine on 'operating'.

So I cannot use Docker Desktop to operate, for example: ECR, GCR or Harbor?

chuckadams · 3h ago
I think the Service in question is services like Docker Hub that they don't let you use as the infrastructure for your competing site.
fkyoureadthedoc · 7h ago
At my job going through procurement for something like Docker Desktop when there are free alternatives is not worth it.

It takes forever, so long that I'll forget that I asked for something. Then later when they do get around to it, they'll take up more of my time than it's worth on documentation, meetings, and other bullshit (well to me it's bullshit, I'm sure they have their reasons). Then when they are finally convinced that yes a Webstorm license is acceptable, they'll spend another inordinate amount of time trying to negotiate some deal with Jetbrains. Meanwhile I gave up 6 months ago and have been paying the $5 a month myself.

zer00eyz · 3h ago
> you'd end up paying $9 a year per developer for the license

It's only 9 bucks a year, it's only 5 bucks a month, it's less than a dollar a day.

Docker, IDE, ticketing system, GitHub, Jira, Salesforce, email, office suite, Figma.... all of a sudden you're spending 1000 bucks a month per staff member for a small 10-person office.

Meanwhile AWS is charging you .01xxxx for bandwidth, disk space, CPU time, S3 buckets, databases. All so Tencent-based AI clients from China hammer your hardware and run up your bill....

The rent seeking has gotten out of hand.

j45 · 2h ago
The loaded cost is truly something else, and it's best understood by people who have had to find a way to pay for it all, or who paid for it all on behalf of others.

The majority of businesses in the world, (and the majority of jobs) are created and delivered by small business, not big.

And then the issues when a service goes down it takes everyone else down with it.

smileysteve · 6h ago
To bring up AI and the eventual un-subsidizing of costs: if $9 a year is too much for Docker, then even the $20/mo (June) price tag is too high for AI, much less $200 (August), or $2000 (post-subsidizing)?
pmontra · 7h ago
I think that I never saw somebody using Docker Desktop. I saw people running containers with the command line everywhere, but maybe I just did not notice. No licenses for the command line tools, right?
throwaway0236 · 14m ago
I sometimes use Docker Desktop on my Mac to view logs. It's more convenient.
akerl_ · 7h ago
On a Mac or Windows machine, you generally need something to get you a Linux environment on which to run the containers.

You can run your own VM via any number of tools, or you can use WSL now on Windows, etc etc. But Docker Desktop was one of the first push-button ways to say "I have a Mac and I want to run Docker containers and I don't want to have to moonlight as a VM gardener to do it."

chuckadams · 2h ago
The command-line tools on a Mac usually come from Docker Desktop. The homebrew version of docker is bare-bones and requires the virtualbox-based docker-machine package, whereas Desktop is using Apple's Virtualization Framework. Nobody runs the homebrew version as far as I can tell.

On Windows, you can use the docker that's built in to the default WSL2 image (ubuntu), and Docker Desktop will use it if available, otherwise it uses its own backend (probably also Hyper-V based).

I use Orbstack myself, but that's also a paid product.

patmcc · 3h ago
It's not the cost, it's the headache. Do I need to worry about setting up SSO, do I need to work with procurement, do I need to do something in our SOC2 audit, do I need to get it approved as an allowed tool, etc.

Whether it's $100/year or $10k/year it's all the same headache. Yes, this is dumb, but it's how the process works at a lot of companies.

Whereas if it's a free tool that just magically goes away. Yes, this is also dumb.

bastardoperator · 3h ago
Docker has persuaded several big shops to purchase site licenses.
secondcoming · 1h ago
Yes. Our company no longer allows use of Docker Desktop
xyzzy_plugh · 8h ago
It's a deal breaker because it was previously free to use, and frankly it's not worth $1 a month given there are better paid alternatives, let alone better free alternatives.
bongodongobob · 6h ago
I work for a $2 billion/yr company and we need three levels of approval for a Visio license. I've never been at a large corp where you could just order shit like that. You'll have to fill out forms, have a few meetings about it, do business justification spreadsheets, etc., then get told it's not in the budget.
smokel · 7h ago
Reading through the comments here, it looks like there is an opportunity for a startup to streamline software licensing. Just a free tip.
eehoo · 6h ago
There are already software licensing providers such as 10Duke that do exactly that. Pretty much all of the licensing related problems mentioned here would either disappear or at the very least get dramatically simpler if more companies used 10Duke Enterprise as their licensing solution to issue and manage licenses. There is a better way, but sadly most businesses overlook licensing.

(the company I work for uses them, our licensing used to be a mess similar to what's described here)

adolph · 5h ago
Yeah, at a big enterprise the larger challenge ahead of even payment is the legal arrangements. They typically sign some "master license" agreement with an aggregator like CDW. Those places don't seem well set up for software redistribution though. Setting up a Steam or AppStore clone for various utility-ware would go a long way to enabling people to access the software an enterprise doesn't mind paying for if the legal and financial stuff wasn't applying friction.
debarshri · 7h ago
You can always negotiate the price
flerchin · 8h ago
"officially supported" is not a value.

It's not the price, it's that there is one. 1 penny would be too much because it prevents compose-ability of dev workstations.

Izmaki · 8h ago
None of your companies need to worry about licenses. Docker ENGINE is free and open source. Docker DESKTOP is a software suite that requires you to purchase a license to use in a company.

But Docker Engine, the core component which works on Linux, Mac and Windows through WSL2, that is completely and 1000% free to use.

xhrpost · 8h ago
From the official docs:

>This section describes how to install Docker Engine on Linux, also known as Docker CE. Docker Engine is also available for Windows, macOS, and Linux, through Docker Desktop.

https://docs.docker.com/engine/install/

I'm not an expert but everything I read online says that Docker runs on Linux so with Mac you need a virtual environment like Docker Desktop, Colima, or Podman to run it.

LelouBil · 8h ago
Docker desktop will run a virtual machine for you. But you can simply install docker engine in wsl or in a VM on mac exactly like you would on linux (you give up maybe automatic port forwarding from the VM to your host)
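For the WSL path, a minimal sketch using Docker's documented convenience script (assuming an Ubuntu WSL2 distro):

```shell
# Inside the WSL2 Ubuntu shell: install the free Docker Engine,
# no Docker Desktop involved
curl -fsSL https://get.docker.com | sh

# Let your user run docker without sudo (takes effect on next login)
sudo usermod -aG docker "$USER"

# On WSL setups without systemd, start the daemon manually
sudo service docker start

docker run --rm hello-world
```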
rovr138 · 6h ago
> But you can simply install docker engine in wsl or in a VM on mac exactly like you would on linux (you give up maybe automatic port forwarding from the VM to your host)

and sharing files from the host, ide integration, etc.

Not that it can't be done. But doing it is not just, 'run it'. Now you manage a vm, change your workflow, etc.

linuxftw · 8h ago
This. I run docker in WSL. I also do 100% of my development in WSL (for work, anyway). Windows is basically just my web browser.
CuriouslyC · 7h ago
Ironic username. As a die-hard, WSL ain't bad though. I just can't deal with an OS that automatically quarantines bittorrent clients, decides to override local administrator policies via Windows updates, and pops up ad notifications.
mmcnl · 2h ago
I personally use Windows + WSL2 and for work use macOS. I prefer Windows + WSL2 by a longshot. It just "works". macOS never "just works" for me. Colima is fine but requires a static memory allocation for the VM; it doesn't have the level of polish that WSL2 has. Brew is awful compared to apt (which you get with WSL2 because it's just Linux).

And then there's the windowing system of macOS that feels like it's straight from the 90s. "System tray" icons that accumulate over time and are distracting, awful window management with clunky animations, the near-useless dock (clicking on VS Code shows all my 6 IDEs, why?). Windows and Linux are much more modern in that regard.

The Mac hardware is amazing, well worth its price, but the OS feels like it's from a decade ago.

croon · 7h ago
+1

I use WSL for work because we have no linux client options. It's generally fine, but both forced windows update reboots as well as seemingly random wsl reboots (assuming because of some component update?) can really bite you if you're in the middle of something.

linuxftw · 7h ago
All my personal machines run linux. At work my choices are Mac or Windows. If Macs were still x86_64 I might choose that and run a VM, but I have no interest in learning the pitfalls of cross arch emulation or dealing with arm64 linux distro for a development machine.
chuckadams · 2h ago
I never notice the difference between arm64 and x86 environments, since I'm flipping between them all the time just because the arm boxes are so much cheaper. The only time it matters to me is building containers, and then it's just a matter of passing `--platform=linux/amd64,linux/arm64` to `docker buildx`.

If you're building really arch-specific stuff, then I could see not wanting to go there, but Rosetta support is pretty much seamless. It's just slower.
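In full, that multi-arch build looks something like this (the registry and image name are made up):

```shell
# Build for both architectures and push a multi-arch manifest
# (requires a buildx builder with both platforms enabled)
docker buildx build \
  --platform=linux/amd64,linux/arm64 \
  -t registry.example.com/myapp:latest \
  --push \
  .
```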

iainmerrick · 7h ago
If you're already paying for Macs, is paying for Docker Desktop really a big problem?
chrisweekly · 5h ago
I think the point is that Docker Desktop for macOS is bad.
chuckadams · 2h ago
It's not all that bad these days ever since they added virtio support. Orbstack is well worth paying for as an alternative, but that won't solve anyone's procurement headaches either.
matsemann · 8h ago
If you've installed Docker on Windows you've most likely done that by using Docker Desktop, though.
mmcnl · 2h ago
I just follow the official Linux instructions on the Docker website. It just works.
Izmaki · 2h ago
That's just one way. The alternative is WSL 2 with Docker Engine.
GrantMoyer · 7h ago
Docker Engine without Docker Desktop is available through winget as "Docker CLI"[1].

[1]: https://github.com/microsoft/winget-pkgs/tree/master/manifes...

t43562 · 8h ago
Right, we were using macs - same story.
t43562 · 8h ago
Those companies use docker desktop on their dev's machines.
connicpu · 8h ago
There's no need if all your devs use desktop Linux as their primary devices like we do where I work :)
t43562 · 8h ago
On Mac we just switched to podman and didn't have anything to worry about.
krferriter · 4h ago
I am using MacOS and like a year ago I uninstalled docker and docker desktop, installed podman and podman-compose, and have changed literally nothing else about how I use containers and docker image building/running locally. It was a drop-in replacement for me.
nickthegreek · 8h ago
Anyone have opinions on OrbStack for mac over these other alternatives?
elliottr1234 · 7h ago
It's well worth it; it's much more than a GUI. It supports running k8s locally, managing custom VM instances, resource monitoring of containers, built-in local domain name support with SSL (mycontainer.orb), a debug shell that gives you the ability to install packages that are not available in the image by default, much better and automated volume mounting, viewing every container in Finder, the ability to query logs, and an amazing UI. Plus it is much, much faster and more resource efficient.

The above features really do make it worth it, especially when using existing services that have complicated failure logs or are resource-intensive like redis, postgres, livekit, etc., or when you have a lot of ports running and want to call your service without having to remember port numbers or wrestle with docker network configuration.

Check it out https://docs.orbstack.dev/

chuckadams · 2h ago
Docker Desktop also supports a local kubernetes stack, but it takes several minutes to start up, and I think in the end it's just minikube? Haven't tried Orbstack's k8s stack myself since I'm good with k3d. I did have cause though to spin up a VM a while back, and that was buttah.
johncoltrane · 7h ago
I tried all the DD alternatives (on macOS) and I think OrbStack is the easiest to use and least invasive of them all.

But it is not cross-platform, so we settled on Podman instead, which came (distant) second in my tests. The UI is horrible, IMO but hey… compromises.

I use OrbStack for my personal stuff, though.

veidr · 6h ago
Yes, Orbstack is significantly better than Docker Desktop, and probably also better than any other Docker replacement out there right now (for macOS), if you aren't bothered by the (reasonable) pricing.

It costs about $100/year per seat for commercial use, IIRC. But it is significantly faster than Docker Desktop at literally everything, has a way better UI, and a bunch of QoL features that are nice. Plus Linux virtualization that is both better and (repeating on this theme) significantly more performant than Parallels or VMWare Fusion or UTM.

fernandotakai · 4h ago
orbstack is absolutely amazing. not only the docker side works much better than docker desktop but their lightweight linux vms are just beyond great.

i've been using an archlinux vm for everything development over the past year and a half and i couldn't be happier.

karlshea · 7h ago
Been using it for a year or so now and it’s amazing. Noticeably faster than DD and the UI isn’t Electron or whatever’s going on there.
allovertheworld · 5h ago
Can't imagine being forced to use a Linux PC for work lmao
connicpu · 3h ago
I happily embraced it, to each their own I guess. There are folks who mainly work on their mac/windows laptops and just ssh into their workstation, but IT gives us way more freedom (full sudo access) on Linux so I can customize a lot more which makes me a lot happier.
Almondsetat · 8h ago
That's their completely optional prerogative
firesteelrain · 8h ago
Podman is inside the Ubuntu WSL image. No need for docker at all
kordlessagain · 7h ago
This is not correct, at least when looking at my screen:

(base) kord@DESKTOP-QPLEI6S:/mnt/wsl/docker-desktop-bind-mounts/Ubuntu/37c7f28..blah..blah$ podman

Command 'podman' not found, but can be installed with:

sudo apt install podman

firesteelrain · 7h ago
Hmm maybe it’s what our admins provided to us then. I actually have never run it at home only airgapped
anakaine · 1h ago
On a few machines now I've had Podman's Windows uninstaller fail to remove all its components and cause errors on startup due to podman not being found. Even manually removing leftover services and startup items didn't fix the issue. It's a constant source of annoyance.
xedrac · 1h ago
I vastly prefer Podman over Docker. No user/group fuss, no security concerns over a root process. No having to send data to a daemon.
ac130kz · 3h ago
It works great until you need that one option from Docker Compose that is missing in Podman Compose (which is written in Python for whatever reason, yeah...).
carwyn · 2h ago
You can use the real compose (Go) with Podman now. The Python clone is not your only option.
goldman7911 · 7h ago
You only have to worry about licences if you use Docker DESKTOP. Why not use RANCHER Desktop?

I have been using it by years. Tested it in Win11 and Linux Mint. I can have even a local kubernetes.

xrd · 9h ago
I love podman, and, like others have said here, it does not always work with every container.

I often try to run something using podman, then find strange errors, then switch back to docker. Typically this is with some large container, like gitlab, which probably relies on the entirety of the history of docker and its quirks. When I build something myself, most of the time I can get it working under podman.

This situation where any random container does not work has forced me to spin up a VM under incus and run certain troublesome containers inside that. This isn't optimal, but keeps my sanity. I know incus now permits running docker containers and I wonder if you can swap in podman as a replacement. If I could run both at the same time, that would be magical and solve a lot of problems.

There definitely is no consistency regarding GPU access in the podman and docker commands and that is frustrating.

But, all in all, I would say I do prefer podman over docker and this article is worth reading. Rootless is a big deal.

nunez · 6h ago
I presume that the bulk of your issues are with container images that start their PID 1s as root. Podman is rootless by default, so this causes problems.

What you can do if you don't want to use Docker and don't want to maintain these images yourself is have two Podman machines running: one in rootful mode and another in rootless mode. You can, then, use the `--connection` global flag to specify the machine you want your container to run in. Podman can also create those VMs for you if you want it to (I use lima and spin them myself). I recommend using --capabilities to set limits on these containers namespaces out of caution.
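A sketch of that two-machine setup (machine names here are made up, and the `<name>`/`<name>-root` connection naming is an assumption to verify with `podman system connection list`):

```shell
# One machine in the default rootless mode, one in rootful mode
podman machine init devbox
podman machine init --rootful rootbox
podman machine start devbox
podman machine start rootbox

# podman registers a connection per machine; list them to see the names
podman system connection list

# Target the rootful machine for images that insist on a root PID 1
podman --connection rootbox-root run --rm docker.io/library/nginx:latest
```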

Podman Desktop also installs a Docker compatibility layer to smooth over these incompatibilities.

xrd · 6h ago
This is terrific advice and I would happily upvote a blog post on this! I'll look into exactly this.
bsder · 23m ago
Is there a blog post on this somewhere? I'd really love to read more about it beyond just the official documentation.
gorjusborg · 8h ago
> I love podman, and, like others have said here, it does not always work with every container.

Which is probably one of the motivations for the blog post. Compatibility will only be there once a large enough share of users use podman that it becomes something that is checked before publish.

firesteelrain · 8h ago
Weird, we run GitLab server and runners all on podman. Honestly I wish we would switch to putting the runners in k8s. But it works well. We use Traefik.
xrd · 6h ago
Yeah, I had it running using podman, but then had some weird container restarts. I switched back to docker and those all went away. I am sure the solution is me learning more and troubleshooting podman, but I just didn't spend the time, and things are running well in an isolated VM under docker.

That's good to know it works well for you, because I would prefer not to use docker.

dathinab · 5h ago
In my experience, rootless podman at least enforces resource limits much more strictly.

We had some similar issues, and they were due to containers running out of resources (mainly RAM, by a lot, but only for a short time). In rootless podman this was correctly detected and enforced, but rootful docker (in that case on a Mac dev laptop) didn't detect these resource spikes and hence "happened to work" even though it shouldn't have.

k_roy · 6h ago
I use a lot of `buildx` stuff. It ostensibly works in podman, but in practice, I haven't had much luck
awoimbee · 6h ago
The main issue is podman support on Ubuntu. Ubuntu ships outdated podman versions that don't work out of the box. So I use podman v5, GitHub actions uses podman v3, and my coworkers on Ubuntu use docker. So now my script must work with old podman, recent podman and docker
rsyring · 5h ago
Additionally, there aren't even any trusted repos out there building/publishing a .deb for it. The ones that I could find when I searched last were all outdated or indicated they were not going to keep moving forward.

I could get over this. But, IMO, it lends itself to asking the "why" question. Why wouldn't Podman make installing it easier? And the only thing that makes sense to me is that RedHat doesn't want their dev effort supporting their competitor's products.

That's a perfectly reasonable stance, they owe me nothing. But, it does make me feel that anything not in the RH ecosystem is going to be treated as a second-class citizen. That concerns me more than having to build my own debs.

gucci-on-fleek · 55m ago
They publish statically-linked binaries on GitHub [0], so to install it, you just need to download and unpack a single file. But you don't get any automatic updates like you would if they provided an apt repository.

[0]: https://github.com/containers/podman/releases
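The install path looks roughly like this (the version and asset name are illustrative, and the tarball layout can vary by release, so check the releases page first):

```shell
# Illustrative version/asset -- confirm against the actual releases page
VER=v5.2.3
curl -LO "https://github.com/containers/podman/releases/download/${VER}/podman-remote-static-linux_amd64.tar.gz"
tar -xzf podman-remote-static-linux_amd64.tar.gz

# Put the unpacked binary somewhere on $PATH
sudo install -m 755 bin/podman-remote-static-linux_amd64 /usr/local/bin/podman
podman --version
```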

rsyring · 25m ago
Wow! I can't believe I missed that. Thanks.
dathinab · 5h ago
> Why wouldn't Podman make installing it easier?

What else can they do then having a package for every distro?

https://podman.io/docs/installation#installing-on-linux

Including instructions to build from source (including for Debian and Ubuntu):

https://podman.io/docs/installation#building-from-source

I don't know about this specific case but Debian and or Ubuntu having outdated software is a common Debian/Ubuntu problem which nearly always is cause by Debian/Ubuntu itself (funnily if it's outdated in Ubuntu doesn't mean it's outdated in Debian and the other way around ;=) ).

rsyring · 5h ago
> What else can they do...

They can do what Docker and many other software providers do that are committed to cross OS functionality. They could build packages for those OSes. Example:

https://docs.docker.com/engine/install/ubuntu/#install-using...

The install instructions you link to are relying on the OS providers to build/package Podman as part of their OS release process. But that is notoriously out-of-date.

You could argue, "Not Podman's Problem", and, in one sense, you'd be right. But, again, it leads to the question "Why wouldn't they make it their problem like so many other popular projects have?" and I believe I answered that previously.

dathinab · 4h ago
> build/package Podman as part of their OS release process. But that is notoriously out-of-date.

providing duplicate/additional non-official builds for other OSes is

- undermining the OSes package curation

- confusing for the user

- cost additional developer time, which for most OSS is fairly limited

- for non vendorable system dependencies this additional dev time cost can be way higher in all kinds of surprising ways

- obfuscate whether a Linux distro is incapable of properly maintaining its packages

- lead to a splitting of the target OS specific eco system of software using this as a dependency

etc.

it's a lose lose lose for pretty much everyone involved

so as long as you don't have a monetary reason that you must do it (like e.g. Docker has), it's in my personal opinion a very dumb thing to do

I apologize for being a bit blunt but in the end why not use a Linux distribution which works with modern software development cycles?

Blaming others for problems with the OS you decided to use when there are alternatives seems not very productive.

rsyring · 3h ago
> cost additional developer time, which for most OSS is fairly limited

Mostly agree. But something like Podman w/ RedHat behind it is unlikely to be limited in the same way a lot of community OSS projects are.

Unfortunately, I disagree with just about every other point you made but don't think it's worth responding point-by-point. In short, I think a project having dedicated builds for popular OSes is a win-win for just about everyone, excepting that it does take, sometimes a considerable amount of, effort to support those cross OS builds. Additionally, there are now options like Snap/Flatpack/AppImage that can be targets instead of the OS itself, although there is admittedly a tradeoff there as well.

For some projects, say something like ripgrep, just using what is in the OS repo is fine because having the latest and greatest features/bug-fixes is unlikely to matter to most people using the tool.

But, on something like Podman, where there is so many pieces, it's a relatively new technology, and the interaction between Kernel, OS, and user space is so high, being stuck with a non-current OS provided release for a couple years is a non-starter.

> why not use a Linux distribution which works with modern software development cycles?

Because I like my OS to be stable, widely supported, and I also like some of my applications to be evergreen. I find Ubuntu is usually a really good mix that way and I'm going on 15+ years of use. There are other solutions for that that I could use, but I'm mostly happy where I am and don't want to spend the kind of time it would take to adopt a different OS and everything that would follow from that.

That leads _me_ to avoid Podman currently. I can appreciate that you have a different opinion, I just think you are a overplaying your perspective a bit in the comment above.

dathinab · 2h ago
> like Snap/Flatpack/AppImage that can be targets instead of the OS itself [..] ripgrep

sure I agree that where it's easily doable (like e.g. ripgrep) having non distro specific builds is a must have

But sadly this doesn't fully work for podman AFAIK, as it involves a lot of subtle interactions with things which aren't consistently set up across Linux distros, probably the worst offender being the Linux security module system (e.g. SELinux, AppArmor, etc.). But thinking about it, sooner or later you probably could have a mostly OS-independent podman setup (limited to newer OS versions). Or to be more specific, three: one with SELinux, one with AppArmor, and one with neither, so I guess maybe not :/

kiney · 5h ago
debian trixie has podman 5 packages in official repos. Good chance thqt those work on ubuntu
bityard · 2h ago
Not a good idea: https://wiki.debian.org/DontBreakDebian

(It's titled "Don't Break Debian" but might also be called "Don't Break Ubuntu" as it applies there just as well.)

gm678 · 3h ago
Also on Ubuntu 25.04, which I updated a homeserver too despite it not being LTS just for the easy access to Podman 5. Once Ubuntu 26.04 comes out the pain described by some sibling comments should end. Podman 4 is a workable version but 5.0 is where I'd say it really became a complete replacement for Docker and quadlets fully matured.
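For anyone who hasn't seen quadlets: a minimal sketch (unit and image names are made up) is a `.container` file dropped where systemd's user generator can find it, which then behaves like a normal user service:

```shell
mkdir -p ~/.config/containers/systemd

# web.container generates a systemd unit named web.service
cat > ~/.config/containers/systemd/web.container <<'EOF'
[Container]
Image=docker.io/library/nginx:latest
PublishPort=8080:80

[Install]
WantedBy=default.target
EOF

systemctl --user daemon-reload
systemctl --user start web.service
```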
alyandon · 6h ago
Yeah, the lack of an official upstream .deb that is kept up to date (like the official Docker .deb repos) for Ubuntu really kills using podman for most of my internal use cases.
troyvit · 5h ago
This is my biggest problem too, and it's not just my problem but Podman's problem. Lack of name recognition is big for sure compared to Docker, but to me this version mismatch problem is higher on the list and more sure to keep Podman niche. Distros like Ubuntu always ship with older versions of software, it's sadly up to the maintainer to release newer versions, and Podman just doesn't seem interested in doing that. I don't know if it was their goal but it got me to use some RedHat derivative on my home server just to get a later version.
ramon156 · 5h ago
One of the reasons I don't use Ubuntu/debian is because it's just too damn slow with updates. I'm noticing that to this day it's still an issue.

Yes I could use flatpack on ubuntu, however I feel like this is partly something Ubuntu/Debian should provide out-of-the-box

alyandon · 5h ago
LTS in general being slow to uptake new versions of software is a feature not a bug. It gives predictability at the cost of having to deal with older versions of software.

With Ubuntu at least, some upstreams publish official PPAs so that you aren't stuck on the rapidly aging versions that Canonical picks when they cut an LTS release.

Debian I found out recently has something similar now via "extrepo".

skydhash · 5h ago
I use debian specifically for things to be kept the same. Once I got things setup, I don’t really want random updates to come and break things.
rsyring · 5h ago
Ubuntu is committed to the Snap ecosystem and there is a lot of software that you can get from a snap if you need it to be evergreen.
bityard · 2h ago
Since Podman is open source, Ubuntu (and others) are able to package and update it themselves if they choose. But I think it's understandable that Red Hat would not want to pay their development teams to package their software for a direct competitor.
ac130kz · 3h ago
That's an Ubuntu issue though, they ship lots of outdated software. Nginx, PHP, PostgreSQL, Podman, etc, the critical software that must be updated asap, even with stable versions they all require a PPA to be properly updated.
miki123211 · 5h ago
I've been dealing with setting up Podman for work over the last week or so, and I wouldn't wish that on my worst enemy.

If you use rootless Podman on a Redhat-derived distribution (which means Selinux), along with a non-root user in your container itself, you're in for a world of pain.

Nextgrid · 5h ago
I've never seen the benefit of rootless.

Either the machine is a single security domain, in which case running as root is no issue, or it's not and you need actual isolation in which case run VMs with Firecracker/Kata containers/etc.

Rootless is indeed a world of pain for dubious security promises.

mbreese · 5h ago
One of the major use cases was multi-user HPC systems. Because they can be complicated, it’s not uncommon for bioinformatics data analysis programs to be distributed as containers. Large HPC clusters are multi-tenant by nature, so running these containers needs to be rootless.

There are existing tools that fill this gap (Singularity/Apptainer). But, there is always friction when you have to use a specialized tool versus the default. For me, this is a core usecase for rootless containers.

For the reduced feature set we need from containers in bioinformatics, rootless is pretty straightforward. You could get largely the same benefits from chroots.

Where I think the issues start is when you start to use networking, subuids, or other features that require root-level access. At this level, rootless becomes a tedious exercise in configuration that probably isn’t worth the effort. The problem is, the features I need will be different from the features you need. Satisfying all users in a secure way may not be worth it.

bbkane · 5h ago
I see your point but I wouldn't let the perfect be the enemy of the good.

If I just want to run a random Docker container, I'm grateful I can get at least "some security" without paying as much in setup/debugging/performance.

Of course, ideally I wouldn't have to choose and the thing that runs the container would be able to run it perfectly securely without me having to know that. But I appreciate any movement in that direction, even if it's not perfect.

pkulak · 3h ago
Rootless is nice because if you mount some directory in, all the files don't end up owned by root. You can get around that by custom building every image so the user has your user id, but that's a pain.
jwildeboer · 4h ago
Sure. Constructing the case to shoot yourself in the foot is not a big problem. But in reality things mostly just work. I’m happily running a bunch of services behind a (nginx) reverse proxy as rootless containers. Forgejo, the forgejo runner to build stuff, uptime-kuma and more on a bunch of RHEL10 machines with SELinux enabled.
preisschild · 3h ago
Do you do OCI/container builds inside your forgejo-runner container?
mfenniak · 3h ago
People having trouble getting this configured is a common issue for self-hosting Forgejo Runner. As a Forgejo contributor, I'm currently polishing up new documentation to try to support people with configuring this; here's the draft page: https://forgejo.codeberg.page/@docs_pull_1421/docs/next/admi...

(Should live at https://forgejo.org/docs/v12.0/admin/actions/docker-access/ once it is finished up, if anyone runs into the comment after the draft is gone.)

preisschild · 2h ago
Im not hosting a Forgejo instance (yet), but self-hosted Gitlab with gitlab-runner in Kubernetes, so I was wondering how you solved this.

I'm using dind too, but this requires privileged runners...

marcel_hecko · 5h ago
I have done the same. It's not too bad - just don't rely on LLMs to design your quadlets or systemd unit files. Read the docs for the exact podman version you use and it's pretty okay.
prmoustache · 5h ago
How so? I have been using exlusively podman on Fedora for the most part of the last 7 years or so.
goku12 · 5h ago
That surprises me too. Podman is spearheaded by Redhat and Fedora/RHEL was one of the earliest distros to adopt it and phase out docker. Why wouldn't they have the selinux config figured out?
znpy · 4h ago
They have.

Most likely gp is having issues with volumes and hasn’t figured out how to mix the :z and :Z attribute to bind mounts. Or the containers are trying to do something that security-wise is a big no-no.

In my experience SELinux defaults have been much wiser than me and every time i had issues i ended up learning a better way to do what i wanted to do.

Other than that… it essentially just works.

Insanity · 4h ago
We went through an org wide Docker -> Podman migration and it went _relatively_ smooth. Some hiccups along the way but nothing that the SysDev team couldn't overcome.
jimjimwii · 2h ago
My anecdote: I've been using rootless podman on Ubuntu in production environments in multiple organizations (both startup and enterprise) for years without encountering a single issue related to podman itself.

I'm sure what you wrote here is true but I can't fathom how. Maybe it's an RH-specific issue? (Like how Ubuntu breaks rootless bwrap by default)

YorickPeterse · 3h ago
Meanwhile it works perfectly fine without any fuss on my two Fedora Silverblue setups. This sounds less like a case of "Podman is suffering by definition" and more a case of a bunch of different variables coming together in a less than ideal way.
ThatMedicIsASpy · 3h ago
SELinux has good errors and all I usually need is :z and :Z on mounts
gm678 · 3h ago
Can confirm, have been doing exactly what GP says is a world of pain with no problems as soon as I learned what `:z` and `:Z` do and why they might be needed.

A good reference answer: https://unix.stackexchange.com/questions/651198/podman-volum...

TL;DR: lowercase if a file from the host is shared with a container or a volume is shared between multiple containers. Uppercase in the same scenario if you want the container to take an exclusive lock on the volumes/files (very unlikely).
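In practice it's just a suffix on the volume flag. A sketch, with made-up paths and a stock Alpine image:

```shell
# SELinux volume relabeling on bind mounts:
#   :z -> shared label, several containers can use the same data
#   :Z -> private label, only this one container can
podman run --rm -v /srv/shared-data:/data:z   docker.io/library/alpine ls /data
podman run --rm -v /srv/app-config:/etc/app:Z docker.io/library/alpine ls /etc/app
```

Without the suffix on an SELinux-enforcing host, the container process gets a plain "permission denied" even though the Unix permissions look fine.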

sigio · 5h ago
I've set up a few podman machines (on debian), and generally liked it. I've been struggling for 2 days now to get a k8s server up, but that's not giving me any joy. (It doesn't like our nftables setup)
znpy · 4h ago
Your issue is selinux then, not podman. It’s not correct to blame it on podman.
mixmastamyk · 5h ago
Sounds like you need to grant the user sufficient permissions. What else might go wrong?
marcel_hecko · 5h ago
It's mostly the subgid subuid mapping of ids between guest and host which is non trivial to understand in rootless envs. Add selinux in the mix....
galangalalgol · 4h ago
What actual issues do you run into? We have selinux and rootless and I didn't notice the transition from docker as a user.
strbean · 4h ago
> subgid subuid mapping

trigger warning please D:

iTokio · 5h ago
Mounting Volume and dealing with FS permissions.

They are many different workarounds but it’s a known pain point.

zamalek · 4h ago
As a huge fan of podman this is definitely one of my disappointments. In the event that you're still struggling with this, the answer is using a --user systemd quadlet. You'll need to use machinectl (machinectl shell <user>@.host) for systemd commands to work, and you'll want to enable linger for that user.

One thing which just occurred to me, maybe it's possible to have a [container] and a [service].user in a quadlet?

thyristan · 5h ago
Yes, but the reason for that pain is SElinux. The first, second and third law of RedHat sysadmin work is "disable SElinux".
preisschild · 3h ago
> The first, second and third law of RedHat sysadmin work is "disable SElinux".

Must not be a good sysadmin then. SELinux improves the security and software like podman can be relatively easily be made to work with it.

I use podman on my Fedora Workstation with selinux set to enforce without issues

mrighele · 8h ago
> If your Docker Compose workflow is overly complex, just convert it to Kubernetes YAML. We all use Kubernetes these days, so why even bother about this?

I find that kubernetes yaml are a lot more complex than docker compose. And while I do, no, not everybody uses kubernetes.

vbezhenar · 59m ago
I disagree with you on that. Kubernetes YAML is on the same level of complexity as docker compose and sometimes even easier.

But verbosity - yeah, kubernetes is absolutely super-verbose. Some 100-line docker-compose could easily end up as 20 yamls of 50 lines each. kubectl really needs some sugar to convert yamls from simple form to verbose and back.
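To make the verbosity gap concrete, here's a sketch of a one-service compose file next to a rough Kubernetes equivalent (all names illustrative):

```yaml
# docker-compose.yml: four lines of substance
services:
  web:
    image: nginx:1.27
    ports: ["8080:80"]
---
# Roughly the same thing in Kubernetes needs a Deployment plus a Service:
apiVersion: apps/v1
kind: Deployment
metadata: {name: web}
spec:
  replicas: 1
  selector: {matchLabels: {app: web}}
  template:
    metadata: {labels: {app: web}}
    spec:
      containers:
        - name: web
          image: nginx:1.27
          ports: [{containerPort: 80}]
---
apiVersion: v1
kind: Service
metadata: {name: web}
spec:
  selector: {app: web}
  ports: [{port: 8080, targetPort: 80}]
```

Same information, several times the YAML, and that's before probes, resource limits, and ingress.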

esseph · 8h ago
Having an LLM function as a translation layer from docker compose to k8s yaml works really well.

On another note, podman can generate k8s yaml for you, which is a nice touch and easy way to transition.

politelemon · 8h ago
Use an LLM is not a solution. It's effectively telling you to switch your brain off and hope nothing goes wrong in the future. In reality things do go wrong and any conversation should be done with a good understanding of the system involved.
hallway_monitor · 7h ago
While I agree with this concept, I don't think it is applicable here. Docker compose files and k8s yaml are basically just two different syntaxes, saying the same thing. Translating from one syntax to another is one of the best use cases for an LLM in my opinion. Like anything else you should read it and understand it after the machine has done the busy work.
catlifeonmars · 7h ago
I bet there’s already a conversion library for it. Translating from one syntax to another _reliably_ should be done with a dedicated library. That being said, I don’t disagree that using an LLM can be helpful to generate code to do the same.
KronisLV · 4h ago
> Translating from one syntax to another is one of the best use cases for an LLM in my opinion.

Have a look at https://kompose.io/ as well.

brennyb · 2h ago
"Using an IDE is not a solution" same arguments, same counter arguments. An abstraction being leaky does not mean it's useless. You will always need to drop down a layer occasionally, but there's value in not having to live on the lower layer all the time.
SoftTalker · 8h ago
When things go wrong, you just ask the LLM about that too. It's 2025.

/s

IHLayman · 7h ago
You don’t need an LLM for this. Use `kubectl` to create a simple pod/service/deployment/ingress/etc, run `kubectl get -o yaml > foo.yaml` to bring it back to your machine in yaml format, then edit the `foo.yaml` file in your favorite editor, adding the things you need for your service, and removing the things you don’t, or things that are automatically generated.

As others have said, depending on an LLM for this is a disaster because you don’t engage your brain with the manifest, so you aren’t immediately or at least subconsciously aware of what is in that manifest, for good or for ill. This is how bad manifest configurations can drift into codebases and are persisted with cargo-cult coding.

[edit: edit]
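The same scaffold-then-edit loop can also skip the cluster round-trip entirely with client-side dry runs; a sketch with made-up resource names:

```shell
# Scaffold manifests without touching the cluster, then edit by hand.
kubectl create deployment web --image=nginx:1.27 --dry-run=client -o yaml > deploy.yaml
kubectl create service clusterip web --tcp=8080:80 --dry-run=client -o yaml > svc.yaml

# After editing, apply both:
kubectl apply -f deploy.yaml -f svc.yaml
```

This way nothing is created until you've actually read the YAML.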

esseph · 6h ago
> You don't need an LLM for this

I guess that depends on how many you need to do

BTW, I'm talking about docker/compose files. kubectl doesn't have a conversion there. When converting from podman, it's super simple.

Docker would be wise to release their own similar tool.

compose syntax isn't that complex, nor would it take advtange of many k8s features out of the box, but it's a good start for a small team looking to start to transition platforms

(Have been running k8s clusters for 5+ years)

hamdingers · 6h ago
This assumes everyone who wants to run containers via podman has kubectl and a running cluster to create resources in which is a strange assumption.
osigurdson · 8h ago
I don't know how to create a compose file, but I do know how to create a k8s yaml. Therefore, compose is more "complex" for me.
0_gravitas · 7h ago
This is a conflation of "Simple" and "Easy" (rather, "complex" and "hard"). 'Simple vs Complex' is more or less objective, 'Easy vs Hard' is subjective, and changes based on the person.

And of course, Easy =/= Simple, nor the other way around.

hamdingers · 6h ago
I'm a CKA and use docker compose exclusively in my homelab. It's simpler.
diarrhea · 8h ago
One challenge I have come across is mapping multi-UID containers to a single host user.

By default, root in the container maps to the user running the podman container on the host. Over the years, applications have adopted patterns where containers run as non-root users, for example www-data aka UID 33 (Debian) or just 1000. Those no longer map to your own user on the host, but to subordinate IDs. I wish there was an easy way to just say "ALL container UIDs map to a single host user". The uidmap and userns options did not work for me (crun failed executing those containers).

I don’t see the use case for mapping to subordinate IDs. It means those files are orphaned on the host and do not belong to anyone, when used via volume mapping?
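For reference, the default rootless mapping that makes UID 33 land on a subordinate ID can be sketched as a tiny function; this is a simplification of what /etc/subuid plus newuidmap set up, with the base value taken from your /etc/subuid entry:

```python
def container_to_host_uid(container_uid: int, host_uid: int, subuid_base: int) -> int:
    """Default rootless podman mapping: container root (UID 0) maps to the
    invoking host user; container UID n (n >= 1) maps to subuid_base + (n - 1)."""
    if container_uid == 0:
        return host_uid
    return subuid_base + (container_uid - 1)

# With host user 1000 and an /etc/subuid entry starting at 100000,
# www-data (UID 33) inside the container shows up on the host as 100032.
print(container_to_host_uid(33, host_uid=1000, subuid_base=100000))
```

Which is exactly why files written by a non-root container user look orphaned on the host.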

mixedbit · 8h ago
If I understand things correctly, this is Linux namespaces limitation, so tools like Docker or Podman will not be able to support such mapping without support from Linux. But I'm afraid the requirement for UIDs to be mapped 1:1 is fundamental, otherwise, say two container users 1000 and 0 are mapped to the same host user 1000. Who then should be displayed in the container as the owner of a file that is owned by the user 1000 on a host?
privatelypublic · 7h ago
Have you looked at idmapped mounts? I don't think it'll fix everything (only handles FS remapping, not kernel calls that are user permissioned)
diarrhea · 6h ago
I have not, thanks for the suggestion though.

A second challenge with the particular setup I’m trying is peer authentication with Postgres, running bare metal on the host. I mount the Unix socket into the container, and on the host Postgres sees the Podman user and permits access to the corresponding DB.

Works really well but only if the container user is root so maps natively. I ended up patching the container image which was the path of least resistance.

teekert · 7h ago
This. And then some way to just be “yourself” in the container as well. So logs just show “you”.
lights0123 · 7h ago
ignore_chown_errors will allow mapping root to your user ID without any other mappings required.
0xbadcafebee · 2h ago
If "security" is the reason you're switching to Podman, I have some bad news.

Linux gets a new privilege escalation exploit like once a month. If something would break out of the Docker daemon, it will break out of your own user account just fine. Using a non-root app does not make you secure, regardless of whatever containerization feature claims to add security in your own user namespace. On top of all that, Docker has a rootless mode. https://docs.docker.com/engine/security/rootless/

The only things that will make your system secure are 1) hardening every component in the entire system, or 2) virtualization. No containers are secure. That's why cloud providers all use mini-VMs to run customer containers (e.g. AWS Fargate) or force the customer to manage their own VMs that run the containers.

amclennon · 2h ago
> That's why cloud providers all use mini-VMs to run customer containers (e.g. AWS Fargate) or force the customer to manage their own VMs that run the containers.

This is only partially true. Google's runtime (gvisor) does not share a kernel with the host machine, but still runs inside of a container.

carwyn · 49m ago
s_ting765 · 55m ago
Google cloud dropped gVisor in favor of micro VMs.

https://cloud.google.com/blog/products/serverless/cloud-run-...

raquuk · 9h ago
The "podman generate systemd" command from the article is deprecated. The alternative is Podman Quadlets, which are similar to (docker-)compose.yaml but defined in systemd unit files.
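A minimal quadlet looks something like this (unit name, image, and paths are made up); for a rootless setup it goes in ~/.config/containers/systemd/ and podman's systemd generator turns it into a regular service on daemon-reload:

```ini
# ~/.config/containers/systemd/web.container
[Unit]
Description=Example web container

[Container]
Image=docker.io/library/nginx:1.27
PublishPort=8080:80
Volume=%h/web-data:/usr/share/nginx/html:Z

[Install]
WantedBy=default.target
```

Then `systemctl --user daemon-reload && systemctl --user start web` and it behaves like any other systemd service.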
stingraycharles · 8h ago
Which actually makes a lot of sense, to hand over the orchestration / composing to systemd, since it’s not client <> server API calls (like with docker) anymore but actual userland processes.
Cyph0n · 7h ago
Yep. It works even better on a declarative distro like NixOS because you can define and extend your systemd services (including containers) from a single config.

Taking this further (self-plug), you can automatically map your Compose config into a NixOS config that runs your Compose project on systemd!

https://github.com/aksiksi/compose2nix

solarkraft · 8h ago
It totally does! On the con side, I find systemd unit files a lot less ergonomic to work with than compose files that can easily be git-tracked and colocated.
mariusor · 7h ago
What makes a systemd service less ergonomic? I guess it needs a deployment step to place it into the right places where systemd looks for them, but is there anything else?
broodbucket · 8h ago
With almost no documentation, mind
raquuk · 8h ago
I find the man page fairly comprehensive: https://docs.podman.io/en/latest/markdown/podman-systemd.uni...
tux1968 · 6h ago
Is linking to a 404 page meant to highlight the lack of docs, or is there some mistake?
raquuk · 6h ago
Apparently the documentation was just updated. The new location is https://docs.podman.io/en/latest/markdown/podman-quadlet.7.h...
mdaniel · 6h ago
I do believe you about the "updated" part, and that's a constant hazard with linking to "latest" or "main" of anything. But I don't know why you'd then change the actual file in the URL, since the original comment was citing "podman-systemd.unit.5.html" <https://docs.podman.io/en/v5.6.1/markdown/podman-systemd.uni...> and you've chosen to cite quadlet.7
stryan · 3h ago
Not OP but "podman-systemd.unit.5" used to be the primary Quadlet documentation (a remnant of when it was podman-generate-systemd perhaps?) with every Quadlet file type (.container, .volume, .network, etwc) documented on one page.

The new docs split that out into separate podman-container/volume/etc.unit(5) pages, with quadlet.7 being the index page. So they're still linking to the same documentation, just the organization happened to change underneath them.

If you must see what they linked to originally, the versions docs are still the original organization (i.e. all on one page): https://docs.podman.io/en/v5.6.0/markdown/podman-systemd.uni...

vaylian · 3h ago
You can also look the documentation up locally: `man quadlet`
vermaden · 3h ago
cheema33 · 2h ago
Can you run MS SQL Server inside a FreeBSD jail? Or any of the thousands of other ready to run docker containers?

Whatever you gain by running FreeBSD comes at a high cost. And that high cost is keeping FreeBSD jails from taking over.

chuckadams · 2h ago
That's ... a lot of setup. Does FreeBSD have anything similar to containerd?
matrix12 · 2h ago
Very distro specific however.
udev4096 · 2h ago
How is that any different than running VMs on a linux host?
Tajnymag · 9h ago
I've wanted to migrate multiple times. Unfortunately, it failed on multiple places.

Firstly, podman had a much worse performance compared to docker on my small cloud vps. Can't really go into details though.

Secondly, the development ecosystem isn't really fully there yet. Many tools utilizing Docker via its socket, fail to work reliably with podman. Either because the API differs or because of permission limitations. Sure, the tools could probably work around those limitations, but they haven't and podman isn't a direct 1:1 drop in replacement.

bonzini · 8h ago
> podman had a much worse performance compared to docker on my small cloud vps. Can't really go into details though.

Are you using rootless podman? Then network redirection is done using user-mode networking, which has two modes: slirp4netns is very slow, pasta is the newer and good one.

Docker is always set up from the privileged daemon; if you're running podman from the root user there should be no difference.

Tajnymag · 8h ago
Well, yes, but rootless is basically the main selling point of podman. Once you start using daemons and privileged containers, you can just keep using docker.
bonzini · 6h ago
No, the main selling point is daemonless. For example, you put podman in a systemd unit and you can stop/start with systemctl without an external point of failure.

Comparing root docker with rootless podman performance is apples to oranges. However, even for rootless pasta does have good performance.

curt15 · 5h ago
Some tools talk to docker not using the docker CLI but directly through its REST API. Podman also exposes a similar REST API[1]. Is Podman with its API server switched on substantially different from the docker daemon?

[1]. https://docs.podman.io/en/latest/markdown/podman-system-serv...

bonzini · 1h ago
Yes because the API server is stateless, unlike the docker daemon. If you kill it you can still operate on containers, images, etc. by other means, whereas if you kill the docker daemon the CLI stops working too.
anilakar · 8h ago
SELinux-related permission errors are an endless nuisance with podman and quadlet. If you want to sandbox just about anything, it's easier to create a pod with full host permissions and the necessary /dev/ files mounted, running a simple program that exposes minimal functionality over an isolated container network.
Aluminum0643 · 7h ago
Udica, plus maybe ausearch | audit2allow -C, makes it easy to generate SELinux policies for containers (works great for me on RHEL10-like distros)

https://www.redhat.com/en/blog/generate-selinux-policies-con...

seemaze · 7h ago
Thats funny, podman had better performance and less resource usage on my resource constrained system. I chalked it up to crun vs runc, though both docker and podman both support configuring alternate runtimes. Plus no deamon..
dktalks · 6h ago
If you are on a Mac, I have been using OrbStack[1] and it has been fantastic. I spin up few containers there, but my biggest use is just spinning up Alpine linux and then running most of my Docker containers in there.

[1] https://orbstack.dev/

ghrl · 5h ago
I use OrbStack too and think it's great software, both for running containers and stuff like having a quick Alpine environment. However, I don't see the point of running Docker within Alpine. Wouldn't that defeat the optimizations they have done? What benefits do you get?
dktalks · 5h ago
Many docker containers are optimized to run as Alpine on other systems. You get the benefit that it runs on Alpine itself.
dktalks · 4h ago
dktalks · 6h ago
Setup is really easy once you install alpine

1. ssh orb (or machine name if you have multiple)

2. sudo apk add docker docker-cli-compose (install docker)

3. sudo addgroup <username> docker (add user to docker group)

4. sudo rc-update add docker default (set docker to start on startup)

Bonus, add lazydocker to manage your docker containers in a console

1. sudo apk add lazydocker

classified · 6h ago
You mean, you let Docker containers run inside the OrbStack container, or how does that work?
dktalks · 4h ago
No, you don't run the Docker containers in OrbStack itself; you spin up an Alpine instance and run all the Docker instances on it.

The benefit is that, Alpine has access to all your local and network drives so you can use them. You can sandbox them as well. It's not a big learning curve, just a good VM with access to all drives but isolated to local only.

dktalks · 4h ago
And you can run Docker inside OrbStack too, it is really good. But most of my containers are optimized Alpine containers so I prefer to run them on an OS they were built for and others in OrbStack.
kdumont · 9h ago
Both podman and docker have pretty poor error handling in my experience. It depends on the error, but for me it often comes down to a docker compose misconfiguration, resources, permissions, etc. In Docker I always find the errors quite difficult to trace back to root cause. In podman you get a Python stack trace. I wish both projects would assert different assumptions/requirements at runtime and report errors/warnings in a human-readable way.
cpuguy83 · 5h ago
BTW, understandable to not have an example on the spot.

In general we do actually try to provide full context for errors from dockerd. Some things can be cryptic because, frankly, they are cryptic and require digging into what really happened (typical of errors from runc), but we do tend to wrap things so at least you know where the call site was.

There's also tracing data you can hook into, which could definitely be improved (some legacy issues around context propagation that need to be solved).

I've definitely seen, in the past, my fair share of errors that simply say "invalid argument" (typically this is a kernel message) without any context but have worked to inject context everywhere or do better at handling errors that we can.

So definitely interested in anything you've seen that could be improved because no one likes to get an error message that you can't understand.

cpuguy83 · 7h ago
Do you have an example?
idoubtit · 8h ago
I also ditched docker when I could. In my experience...

Podman with pods is a better experience than docker-compose. It's easy to interactively create a pod and add containers to it. The containers ports will behave as if they were on the same machine. Then `podman generate kube` and you have a yaml file that you can run with `podman kube play`.
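A sketch of that workflow, with made-up pod/container names:

```shell
# Interactively build a pod, then freeze it as Kubernetes YAML.
podman pod create --name app -p 8080:80
podman run -d --pod app --name web docker.io/library/nginx:1.27

podman kube generate app > app.yaml   # newer spelling of `podman generate kube`
podman kube play app.yaml             # recreate the whole pod from the file
```

Note that published ports live on the pod, not the individual containers, which is what makes the containers see each other on localhost.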

Rootless networking is very slow unless you install `passt`. With Debian, you probably should install every optional package that podman recommends.

The documentation is lacking. Officially, it's mostly man pages, with a few blog posts announcing features, though the posts are often out of date.

Podman with its docker socket is often compatible with Docker. Even docker-compose can (usually) work with podman. I've had a few failures, though.

Gitlab-runner can use podman instead of docker, but in this case there are no network aliases. So it's useless if the runner needs to orchestrate several images (e.g. code and db).

gucci-on-fleek · 37m ago
> Rootless networking is very slow unless you install `passt`.

If the software that you're running inside the container supports it, you can use socket activation [0] to get native performance.

[0]: https://github.com/containers/podman/blob/main/docs/tutorial...

rsyring · 5h ago
> Rootless networking is very slow

I came across just how slow recently:

- Container -> host: 0.398 Gbps vs. 42.2 Gbps

- host -> container: 20.6 Gbps vs 47.4 Gbps

Source: https://github.com/containerd/nerdctl/blob/main/docs/rootles...

codedokode · 5h ago
The slow speed is for using slirp4netns, not containers in general.
markstos · 7h ago
I'm a podman user and fan, but there is one gotcha to know about the systemd integration.

You might expect that setting User=foo via systemd would enable seamless rootless containers, but it turns out to be a hard problem without a seamless solution.

Instead, there's this discussion thread with 86 comments and counting to wade through to find some solutions that have worked for some people in some cases.

https://github.com/containers/podman/discussions/20573#discu...

hvenev · 7h ago
What I personally do is

    User=per-service-user
    ExecStart=!podman-wrapper ...
where podman-wrapper passes `--user=1000:1000 --userns=auto:uidmapping=1000:$SERVICE_UID:1,gidmapping=1000:$SERVICE_GID:1` (where the UID/GID are set based on the $USER environment variable). Each container runs as 1000:1000 inside the container, which is mapped to the correct user on the host.
nunez · 6h ago
I started working at Red Hat this past year, so obviously all Podman, all day long. It's a super easy switch. I moved to using Containerfiles in my LinkedIn courses as well, if for no other reason than it having a much more "open" naming convention!

Rootless works great, though there are some (many) images that will need to be tweaked out of the box.

Daemonless works great as well. You can still mount podman.sock like you can with Docker's docker.sock, but systemd handles dynamically generating the UNIX socket on connect() which is a much better solution than having the socket live persistently.

The only thing that I prefer Docker for is Compose. Podman has podman-compose, which works well and is much leaner than the incumbent, but it's kind of a reverse-engineered version of Docker Compose that doesn't support the full spec. (I had issues with service conditions, for example).

mdaniel · 6h ago
> that doesn't support the full spec

I'd guess that's because "the spec" is more .jsonschema than a spec about what behaviors any random version should do. And I say "version" because they say "was introduced in version $foo", but they also now go out of their way to say that declaring which version the file conforms to only produces a warning

mychael · 6h ago
OP fails to understand that in practice people use Docker Desktop on their laptop and deploy to a container platform or Kubernetes cluster that uses ContainerD. So all of these so-called issues are moot. Further, Docker Inc (and the people behind Docker CLI, Compose etc), have way better UX taste and care for DX than their competitors which matters a lot in local development.
odie5533 · 2h ago
The UI is what held me back from switching last time I tried podman. Docker Desktop is nice.
1a527dd5 · 9h ago
I really wish Docker didn't take over the industry like it has. In my experience not enough people know how to debug yet another layer of abstraction.

Remove layers, keep things simple.

That being said, it is here to stay. So any alternative tooling that forces Docker to get its act together is welcome.

goku12 · 4h ago
> ... how to debug yet another layer of abstraction.

> Remove layers, keep things simple.

Due to the first line above, I'm not sure if I'm reading the second line correctly. But I'm going to assume that you're referring to the OCI image layers. I feel your pain. But honestly, I don't think that image layers are such a bad idea. It's just that the best practices for those layers are not well defined and some of the early tooling propagated some sub-optimal uses of those layers.

I'll just start with when you might find layers useful. Flatpak's sandboxing engine is bubblewrap (bwrap). It's also a container runtime that uses namespaces, cgroups and seccomp like OCI runtimes do. The difference is that it has more secure seccomp defaults and it doesn't use layers (though mounts are available). I have a tool that uses bwrap to create isolated build and packaging environments. It has a single root fs image (no layers). There are two annoyances with a single layer like this:

1. If you have separate environments for multiple applications/packages, you may want to share the base OS filesystem. You instead end up replicating the same file system redundantly.

2. If you want to collect the artifacts from each step (like source download, extract and build, 'make install', etc) into a separate directory/archive, you'll find yourself reaching out for layers.

I have implemented this and the solutions look almost identical to what OCI runtimes do with OCI image layers - use either overlayfs or btrfs/zfs subvolume mounts.

So if that's the case, then what's the problem with layers? Here are a few:

1. Some tools like the image builders that use Dockerfile/Containerfile create a separate layer for every operation. Some layers are empty (WORKDIR, CMD, etc). But others may contain the results of a single RUN command. This is very unnecessary and the work-arounds are inelegant. You'll need to use caches to remove temporary artifacts, and chain shell commands into a single RUN command (using semicolons).

2. You can't manage layers like files. The chain of layers are managed by manifests and the entire thing needs a protocol, servers and clients to transfer images around. (There are ways to archive them. But it's so hackish.)
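Problem 1 above is usually mitigated by chaining the install/build/cleanup steps into a single RUN, so temporary files are never committed to a layer. A minimal sketch (the base image and package set are illustrative):

```shell
# Write an example Containerfile locally. Each RUN creates one layer, and
# files deleted by a *later* RUN still occupy space in the earlier layer,
# so install and cleanup must happen inside the same RUN.
cat > Containerfile <<'EOF'
FROM docker.io/library/debian:bookworm-slim
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl ca-certificates && \
    rm -rf /var/lib/apt/lists/*
EOF
```

Splitting those three commands into three RUN lines would bake the apt cache into the image permanently, even though the final filesystem looks identical.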

So, here are some solutions/mitigations:

1. There are other build tools like buildah and packer that don't create additional layers unless specified. Buildah, a sister project of Podman, is a very interesting tool. It uses regular (shell) commands to build the image. However, those commands closely resemble the Dockerfile commands, making it easy to learn. Thus you can write a shell script to build an image instead of a Dockerfile. It won't create additional layers unless you specify. It also has some nifty features not found in Dockerfiles.

Newer Dockerfile builders (I think buildkit) have options to avoid creating additional layers. Another option is to use dedicated tools to inspect those layers and split/merge them on demand.

2. While a protocol and client/servers are rather inconvenient for lugging images around, they did make themselves useful in other ways too. Container registries these days don't host just images. They can host any OCI artifact. And you can practically pack any sort of data into such an artifact. They are also used for hosting/transferring a lot of other artifacts like helm charts, OPA policies, kubectl plugins, argo templates, etc.
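For point 1, a Buildah build script along the lines described might look like this (a hedged sketch, written to a file rather than executed here; the image and path names are illustrative):

```shell
cat > build.sh <<'EOF'
#!/bin/sh
set -e
# Dockerfile verbs become ordinary commands; nothing is layered
# until the explicit commit at the end.
ctr=$(buildah from docker.io/library/alpine:3.20)
buildah run "$ctr" -- apk add --no-cache python3
buildah copy "$ctr" ./app /opt/app
buildah config --entrypoint '["python3","/opt/app/main.py"]' "$ctr"
buildah commit "$ctr" localhost/myapp:latest  # one new layer, not five
EOF
sh -n build.sh && echo "script parses"
```

Because it is a plain shell script, loops, conditionals, and host-side tooling are available between steps without any Dockerfile contortions.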

> So any alternative tooling that forces Docker to get it's act together is welcome

What else do you consider as some bad/sub-optimal design choices of Docker? (including those already solved by podman)

Hizonner · 7h ago
I don't know how podman compares to docker in terms of performance, and I do know that rootless containers can be a real pain.

But Docker is simply a non-starter. It's based on a highly privileged daemon with an enormous, hyper-complicated attack surface. It's a fundamentally bad architecture, and as far as I've been able to tell, it also comes from a project that's always shown an "Aw, shucks" attitude toward security. Nobody should be installing that anywhere, not even if there weren't an alternative.

causal · 3h ago
I generally find rootless pretty easy, it's just annoying that it's an additional few steps. Feels like an afterthought when it should be the default.
matesz · 7h ago
Rootless containers are a pain but only on mac, otherwise it’s just pure upside.
sharts · 17m ago
The fact that people keep conflating docker desktop and the docker engine is crazy.
giamma · 7h ago
I am still on an x86 Mac.

When Docker Desktop changed licensing I tried to switch to Podman and it was a disaster, Podman was brand new and despite many blog posts that claimed it was the perfect replacement it did not work for me, and I have very simple requirements. So I ended up using Rancher Desktop instead, which was also very unstable but better.

Fast forward 1 year, Rancher was pretty good and Podman still did not work reliably on my mac.

Fast forward another year or so and I switched to colima.

I tried podman last time about one year ago and I still had issues on my old mac. So far colima has been good enough for my needs although at least two times a brew update broke colima and I had to reinstall from scratch.

xtracto · 5h ago
This mirrors my experience. I've been testing Podman over the few years since Docker changed its license, but every time there's something that just doesn't work. So I find myself always returning to Docker.
ehaughee · 7h ago
I also had issues with Podman on an M2 Mac. I'm currently using OrbStack as I had network performance issues with Colima (I have 10GbE between the Mac and a NAS). Outside that issue, Colima was awesome.
sspiff · 6h ago
I've been on Podman for about two years now. My coworkers and the entire company codebase / scripts etc are on Docker.

Podman has a number of caveats that make it not a drop-in replacement out of the box, but they are few and far between. Once you've learned to recognize a handful of common issues, it's quite simple to migrate.

This might sound like me trying to explain away some of the shortcomings of podman, but honestly, from my point of view, podman does it better, and the workarounds and fixes I make to our scripts and code for podman are backwards compatible and I view them as improvements.

rsyring · 5h ago
Do you have a summary of those most common issues and their workarounds?
wyoung2 · 3h ago
Broadly, the claim that Podman is a drop-in replacement for Docker is true only for the simple cases, but people have developed assorted dependencies on Docker implementation details. Examples:

1. People hear about how great rootless is with Podman but then expect to be able to switch directly from rootful Docker to rootless Podman without changing anything. The only way that could work is if there was no difference between rootful and rootless to begin with, but people don't want to hear that. They combine these two selling points in their head and think they can get both a drop-in replacement for Docker and also rootless by default. The proper choice is to either switch from rootful Docker to rootful Podman *or* put in the work to make your container work in rootless, work you would also have had to do with rootless Docker.

2. Docker Compose started out as an external third-party add-on (v1) which was later rewritten as an internal facility (v2) but `podman compose` calls out to either `docker-compose` (i.e. v1) or to its own clone of the same mechanism, `podman-compose`. The upshot is a lot of impedance mismatch. Combine that with the fact that Podman wants you to use Quadlets anyway, resulting in less incentive to work on these corner cases.

3. Docker has always tried to pretend SELinux doesn't exist, either by hosting on Debian and friends or by banging things into place by using their privileged (rootful) position. Podman comes from Red Hat, and until recently they had Mr SELinux on the team. Thus Podman is SELinux-first, all of which combines to confuse transplants who think they can go on ignoring SELinux.

4. On macOS and Windows, both Podman and Docker need a background Linux VM to provide the kernel, without which they cannot do LXC-type things. These VMs are not set up precisely the same way, which produces migration issues when someone is depending on exact details of the underlying VM. One common case is that they differ in how they handle file sharing with the host.

hyperpape · 7h ago
I have a few links saved from my joyful experience using podman with Fedora (and therefore selinux). Iirc, I tried using podman because Fedora shipped cgroups v2, which didn't work with Docker (in my own ignorance, I would've thought coordinating with major dev tools would be important, but distros often have other ideas).

- https://www.redhat.com/en/blog/user-namespaces-selinux-rootl... - https://www.redhat.com/en/blog/sudo-rootless-podman

I'd summarize these posts as "very carefully explaining how to solve insane problems."

Kerbiter · 7h ago
Fedora is rather aggressive in pushing Podman. They have their Cockpit control panel for Fedora Server, and they've simply made the Cockpit Docker plugin unavailable when it was working fine, because "use Podman integration plugin instead".
danousna · 5h ago
I use both Podman and Docker at work, specifically I had to use the same docker images / container setup in a RHEL deployment and it worked great.

A huge pain was when I used "podman-compose" with a custom podman system storage location, two times it ended corrupted when doing an "up" and I had to completely scratch my podman system storage.

I must have missed something though ...

ThomW · 2h ago
We did the same. Switching was a cinch honestly - the only thing that screwed me up was some dumb page that returned a bunch of nonsense I was supposed to do to my docker-compose.yml file to make it more compatible with podman-compose. I spent a couple hours trying to figure out why things weren't working, until I finally rolled back all the stupid suggested changes, and my app fired right up.

The only impactful difference I've noticed so far is that the company is moving to an artifact repository that requires authentication, and mounting secrets using --mount doesn't support the env= parameter -- that's really it.

I treat podman like I did docker all day long and it works great.

kodama-lens · 8h ago
I tried podman multiple times. Normal testing & sandbox stuff just works, and you really can do alias docker=podman. But as soon as I added networking, things broke for me. For me it is really just a tool and I need my tools working. So I switched back.

Recently I did the GitLab Runner migration for a company and switched to rootless docker. Works perfectly; the devs didn't even notice that all their runs now use rootless docker and BuildKit for builds. All thanks to RootlessKit. No podman problems, more secure, and no workflow change needed.

drzaiusx11 · 8h ago
Still happily using Colima as a Docker Desktop for Mac replacement. It even allows mixed architecture containers in the same compose stack. What's podman gain me besides a half baked Docker compose implementation?
osigurdson · 7h ago
Keep using docker, who cares. The article is concerned about CVEs, etc, but this doesn't matter for development very much.

If you use k8s for anything, podman might help you avoid remembering yet another iac format.

cpuguy83 · 7h ago
Concerned about cve's but doesn't pay attention to the massive list of cve's for rootless setups which have a much broader scope/impact.
drzaiusx11 · 6h ago
My laptop isn't exposing any ports outside localhost, so all I care about is validation of my containers for local-only testing (similar use case to Docker desktop.)

Colima would/should never be used in production for a number of reasons, but yeah it's great for local only development on a laptop.

smoyer · 1h ago
So did I ... But I've started writing my applications to run locally, in a WASM runtime and in a container so most of my debug and test occurs right on my OS.
vbezhenar · 8h ago
I did numerous attempts to switch from docker to podman. Latest one worked, and so far I didn't feel the need to get back to docker. There was only one issue that I had: huge uid didn't work in podman (like 1000000 I think), but I fixed the dockerfile and rest worked fine for me. podman-compose does not work well in my experience, but I don't use it anymore.
wyoung2 · 3h ago
> huge uid didn't work in podman (like 1000000 I think)

You're running into the `/etc/sub[ug]id` defaults. The old default was to start your normal user at 100000 + 64k additional sub-IDs per user, but that changed recently when people at megacorps with hundreds of thousands of employees defined in LDAP and similar ran into ID conflict. Sub-IDs now start at 2^19 on RHEL10 for this reason.
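The mechanism behind this is the range syntax in `/etc/sub[ug]id`. A hedged illustration (written to a local file here, not to /etc; the username is made up):

```shell
# "alice:100000:65536" grants alice 65536 sub-UIDs starting at host UID
# 100000, so a container UID of 1000000 maps to nothing and fails.
cat > subuid.example <<'EOF'
alice:100000:65536
EOF
awk -F: '{ printf "%s: host UIDs %d-%d\n", $1, $2, $2 + $3 - 1 }' subuid.example
# prints: alice: host UIDs 100000-165535
# After editing the real /etc/subuid, `podman system migrate` applies the change.
```

With the default 64k range, anything above container UID 65535 falls off the end of the mapping, which is exactly the "huge uid didn't work" symptom.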

No comments yet

osigurdson · 8h ago
Instead of using compose, you can create Kubernetes like yamls and run with podman play kube.

Of course if you have really large / complex compose files or just don't feel like learning something else / aren't using k8s, stick with docker.
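A minimal sketch of that workflow, with made-up names: a Kubernetes-style pod spec that `podman kube play` (the newer spelling of `podman play kube`) can run directly.

```shell
cat > web-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: docker.io/library/nginx:alpine
      ports:
        - containerPort: 80
          hostPort: 8080
EOF
# podman kube play web-pod.yaml   # start the pod
# podman kube down web-pod.yaml   # stop and remove it again
```

The same YAML can later be fed to a real cluster, which is the main argument for it over a compose file.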

r_lee · 5h ago
have you tried nerdctl? its basically just the containerd cli which is close to k8s and etc. not a for profit thing, just following containerd spec
rglover · 4h ago
I may be the odd man out, but after getting unbelievably stressed out by containers, k8s, etc., I couldn't believe how zen just spinning up a new VPS and bootstrapping it with a bash script was. That combined with systemd scripts can get you relatively far without all of the (cognitive) overhead.

The best part? Whenever there's an "uh oh," you just SSH in to a box, patch it, and carry on about your business.

TrainedMonkey · 4h ago
Containers and container orchestrators are complex tools. The constant cost of using them is pretty high compared to bash scripts. However the scale / maintenance factor is significantly lower, so for a 100 boxes simplicity of bash scripts might still win out over the containers. At 1000 machines it is highly likely that simplest and least maintenance overall solution will be using an orchestrator.
rglover · 3h ago
That's what I found out, though: the footprint doesn't matter. I did have to write a simple orchestration system, but it's literally just me provisioning a VPS, bootstrapping it with deps, and pulling the code/installing its dependencies. Short of service or hardware limits, this can work for an unlimited number of servers.

I get the why most people think they need containers, but it really seems only suited for hyper-complex (ironically, Google) deployments with thousands of developers pushing code simultaneously.

chickensong · 2h ago
> it really seems only suited for hyper-complex (ironically, Google) deployments with thousands of developers pushing code simultaneously

There are many benefits to be had for individuals and small companies as well. The peace of mind that comes with immutable architecture is incredible.

While it's true that you can often get quite far with the old cowboy ways, particularly for competent solo devs or small teams, there's a point where it starts to unravel, and you don't need to be a hyper-complex mega-corp to see it happen. Once you stray from the happy path or have common business requirements related to operations and security, the old ways become a liability.

There are reasons ops people will accept the extra layers and complexity to enable container-based architecture. They're not thrilled to add more infrastructure, but it's better than the alternative.

madeofpalk · 3h ago
> you just SSH in to a box, patch it

Oh god. I can’t imagine how I could build reliably software if this is what I was doing. How do you know what “patches” are needed to run your software?

lotyrin · 4h ago
Well... yeah? If you exist as an individual or as part of a group which is integrated (shared trust, goals, knowledge, etc.) then yeah, obviously you do not have the problem (tragedy of the commons) that splitting things up (including into containers, but literally any kind of boundary) solves for.

The container split is often introduced because you have product-a, product-b and infrastructure operations teams/individuals that all share responsibility for an OS user space (and therefore none are accountable for it). You instead structure things as: a host OS and container platform for which infra is responsible, and then product-a container(s) and product-b container(s) for which those teams are responsible.

These boundaries are placed (between networks, machines, hosts and guests, namespaces, users, processes, modules, etc. when needed due to trust or useful due to knowledge sharing and goal alignment.

When they are present in single-user or small highly-integrated team environments, it's because they've been cargo-culted there, yes, but I've seen an equal number of environments where effective and correct boundaries were missing as I've seen ones where they were superfluous.

bjt · 5h ago
In the beginning, Docker DID have "standalone mode" where it would launch just one container as a child process. That was actually an easier way to manage the mounts and cgroups necessary to stand up a container. I made a ticket to bring it back after they removed it, and it was closed with a wontfix. The cynic in me says it was done more for commercial reasons (they wanted a more full featured daemon on the server doing things they could charge for) as opposed to just being a little shim that just did one thing.
qalmakka · 1h ago
In my experience Podman is better but always ends up having some wonky bug (like, the other day, secrets didn't mount anymore during builds). Rootless, daemonless is great but it's basically bound to have some extra tinkering required compared to "a stupid daemon running as root"
evertheylen · 6h ago
To add to the article: systemd integration works in the other way too! Running systemd in a Docker container is a pain. It's much easier in Podman: https://developers.redhat.com/blog/2019/04/24/how-to-run-sys...

(Most people use containers in a limited way, where they should do just one thing and shouldn't require systemd. OTOH I run them as isolated developer containers, and it's just so much easier to run systemd in the container as the OS expects.)
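A sketch of the Podman side (written out as a script rather than run here; the UBI init image is just one example of a systemd-enabled base):

```shell
cat > run-devbox.sh <<'EOF'
#!/bin/sh
# --systemd=always makes Podman mount /run, /tmp and the cgroup paths
# that systemd expects when running as PID 1.
podman run -d --name devbox --systemd=always \
    registry.access.redhat.com/ubi9/ubi-init
podman exec devbox systemctl is-system-running
EOF
sh -n run-devbox.sh
```

No --privileged flag and no manual cgroup mounts, which is the bulk of what makes the equivalent Docker setup painful.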

delduca · 9h ago
I have ditched docker desktop on macOS for OrbStack.
chrisweekly · 8h ago
OrbStack looks pretty nice, BUT an $8/mo/user subscription? Blech.
frje1400 · 8h ago
Orbstack is worth every penny. It's simply amazingly solid compared to Podman on macOS (a year ago at least, I don't know if Podman has improved). We migrated 100+ devs to Orbstack and it was like a collective sigh of relief that we finally had something that actually worked.
otterley · 6h ago
Useful software that makes our lives more convenient is worth paying for--after all, it pays most of our salaries, doesn't it?

It feels a little hypocritical for us to feed our families through our tech talent and then complain that someone else is doing the same.

chrisweekly · 4h ago
It's the subscription model that chafes. For SaaS? Ok, sure. But for a desktop app, I just don't like it. It might not be entirely rational. shrug
bzzzt · 8h ago
It doesn't only look prettier, it also starts and works a lot faster. Switched a few years ago; at that time Docker Desktop had a known issue of continually using 5% CPU on Mac which they didn't fix for years.
osigurdson · 7h ago
I don't understand why people need a gui for docker/podman.
grep_name · 4h ago
I use orbstack, but I never look at it, it just opens when I start up the computer. I used to use docker desktop, which I never looked at either. The docker daemon has always just been broken on Mac for as long as I've been trying to work with it (about 4 years, at least as far as Mac environments).

Idk what the problem is, but it's ugly. I switched to orbstack because there was something like a memory leak happening with docker desktop, just using waaaaay too many resources all the time, sometimes it would just crash. I just started using docker desktop from the get-go because when I came on I had multiple people with more experience say 'oh, you're coming from linux? Don't even try to use the docker daemon, just download docker desktop'.

osigurdson · 3h ago
On Windows, the easiest thing is to just use podman without podman desktop. It installs easily as a winget package and works in your current shell without having to first start WSL (it does that behind the scenes).

On Linux, for development, podman and docker are pretty similar but I prefer the k8s yaml approach vs compose so tend to use podman.

I don't think Apple really cares about dev use cases anymore so I haven't used a Mac for development in a while (I would for iOS development of course if that ever came up).

elliottr1234 · 7h ago
Take a look at https://docs.orbstack.dev/

It's much more than a gui: it supports running k8s locally, managing custom vm instances, resource monitoring of containers, built-in local domain name support with ssl (mycontainer.orb), a debug shell that gives you the ability to install packages that are not available in the image by default, much better and automated volume mounting with the ability to view every container in Finder, the ability to query logs, an amazing ui, plus it is much, much faster and more resource efficient.

I am normally with you that terminal is usually enough, but the above features really do make it worth it especially when using existing services that have complicated failure logs or are resource intensive like redis, postgres, livekit, etc or you have a lot of ports running and want to call your service without having to worry about remembering port numbers or complicated docker network configuration.

delduca · 8h ago
Trust me. It's worth every cent.
jbverschoor · 8h ago
vs $11 for docker? blech
elliottr1234 · 7h ago
Honestly just the debug shell alone is worth a good amount of $. You can remotely run shell commands on your deployed docker container and install packages that are not available in the base image without modifying the base image which can be a life saver.

https://docs.orbstack.dev/features/debug

Let alone the local resource monitor, increased, performance, automated local domains (no more complicated docker network settings to get your app working with local host), and more.

chrisweekly · 3h ago
it does sound pretty compelling
timcambrant · 3h ago
I'm probably going to finally give podman a try, but apart from the security advantages of daemonless, I pretty much have all these features solved on my Docker hosts already. For home/lab workloads I define one docker compose project in a directory, using local path mounts for directories. Then I manually define a systemd service per docker compose project, which just runs "docker compose up -d <dir>" on start, and the opposite on stop. The hundreds of containers I run at home have higher uptime than the thousands of containers in the orchestration platform I run at work.

Does the "podman generate kube" command just define pods, or does it support other K8s components such as services and ingresses?
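The per-project systemd service described above is small. A sketch (the paths and project name are placeholders; on a real host it would live under /etc/systemd/system/, followed by `systemctl daemon-reload`):

```ini
# /etc/systemd/system/compose-myproject.service
[Unit]
Description=docker compose project: myproject
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/srv/myproject
ExecStart=/usr/bin/docker compose up -d
ExecStop=/usr/bin/docker compose down

[Install]
WantedBy=multi-user.target
```

Type=oneshot with RemainAfterExit=yes matches compose's detach-and-exit behaviour, so systemd considers the unit "active" after `up -d` returns.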

BinaryIgor · 4h ago
I certainly like the daemonless architecture; it's much simpler, with fewer potential security issues and no single point of failure.

The one thing I don't necessarily agree:

"Privileged ports in rootless mode not working? Good! That's security working as intended. A reverse proxy setup is a better architecture anyway."

I usually use Nginx as a reverse proxy - why not have it set up in the exact same way as the rest of your apps? That's a simplicity advantage. So with Podman, I would just run this one container in root mode - that's still better than running all of them as root, though not quite ideal.

I am not a fan of docker-compose - a classic example of a tool trying to do too much for me, so the lack of something similar in Podman is not a drawback for me :)

Not sure about tooling around logs and monitoring though - there is plenty for Docker.
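On the privileged-ports point: the restriction is a single kernel knob, so there is a middle ground between running the proxy rootful and redesigning everything. A sketch (the sysctl is real; the podman command is illustrative):

```shell
# Ports below net.ipv4.ip_unprivileged_port_start (default 1024) need
# CAP_NET_BIND_SERVICE. Reading the knob is unprivileged on Linux; the
# fallback just prints the documented default on other systems.
cat /proc/sys/net/ipv4/ip_unprivileged_port_start 2>/dev/null || echo 1024
# Root can lower the floor host-wide (affects every user, not just podman):
#   sysctl net.ipv4.ip_unprivileged_port_start=80
# Or keep the default: publish on a high port and let a proxy own 80/443:
#   podman run -d -p 8080:80 docker.io/library/nginx:alpine
```

Lowering the sysctl avoids running the proxy container rootful, at the cost of relaxing the restriction for every process on the host.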

dathinab · 5h ago
There can be interesting differences, I'm not sure which of them still apply but some I ran into:

- podman having a more consistent CLI API/more parameters (but I think docker did at least partially catch up)

- user-ns containers allow mounting the build context instead of copying it; this means that if you somehow end up with a huge build context, a user-ns build can be _way_ faster than a "classical" docker build (might also apply to rootless docker, idk.). We ran into that when the one person on the team using Mac+Docker asked if we could do something about the unbearably slow docker build times (which no one else on the team experienced :/)

- docker always implicitly has Docker Hub configured as a source for resolving "unqualified" image names; this might not be true for your podman default setup, so some scripts etc. which should work with both might fail (but it's easy to fix: preferably always fully qualify your images, as there are increasingly more image hosts; in the worst case, add Docker Hub in the config file).

- "podman compose" supports somewhat fewer features. This might seem like a huge issue, but compose doesn't seem like the best choice for deploying software, and looking at how it turned out in dev setups the moment things became larger/more complicated, I came to the conclusion that docker/podman compose is one of those easy-to-start-with, then get trapped in a high-complexity/maintenance-cost "bad" corner technologies. But I'm still looking for alternatives.

- podman sometimes misses some resource management features, but docker also sometimes has differences in how it effectively enforces them, not just between versions but also with the same version between OSes; this has led to issues before where docker-rootless kills a container and docker on Mac doesn't, because on Mac it didn't notice spikes in resource usage (in that specific case).

CraigJPerry · 7h ago
Docker is failing in that trap where they feel the need to try (and mostly fail so far) to add net-new value streams (e.g. mcp catalogue, a bunch of other stuff i immediately turned off that last time i installed it) rather than focus on the core value.

It's not the case that they've maximised the utility of the core build/publish-container or acquire/run-container workflows, yet they're prioritising fluff around the edges of the core problem.

Podman for its various issues is a whole lot more focussed.

colechristensen · 7h ago
The core of docker needs to be free. The docker registry can charge corporate customers for storage and such, but besides being the default registry, there's not much money there because it's a commodity service, not anything unique.

There's just not much money to be made there, especially considering that docker is a pretty thin wrapper built on top of a bunch of other free technology.

When somebody can make a meaningful (though limited) clone of your core product in 100 lines of bash, you can't make much of a business on top of it [1]

Docker suffers from being useful in the open source space but having no reasonable method to make revenue.

https://github.com/p8952/bocker

codethief · 4h ago
> With Podman, even if someone somehow escalates privileges inside a container to root level, they're still just an unprivileged user on the actual host.

As much as I like Podman (and I really do), Docker has supported rootless mode for a long time and it's not any harder to set up than Podman.

> Use podman-compose as a drop-in replacement

Oh, if only it were a drop-in replacement. There are so many ways in which it is not exactly compatible with docker-compose, especially when it comes to the network setup. I have wasted more hours on this than I can count.

ktosobcy · 9h ago
Tried to migrate (M1 MBP) a couple of times and it wasn't working well, resulting in reverting to docker...
pitah1 · 7h ago
I have a tool[1] that solely worked with docker before and was putting off supporting podman for a while because I thought it would take some time. But it turned out to work straight out of the box without tweaking. Essentially frictionless.

[1] Tool for reference: https://github.com/data-catering/insta-infra

juancroldan · 8h ago
We were using Podman for certain deployments to AWS recently. However, it was in an EC2 instance and the overhead was unnecessary, so we ended up pasting Bocker[1] into an AI and stripping it of anything unnecessary until leaving just the few isolation features we needed.

[1] https://github.com/p8952/bocker/tree/master

tomrod · 9h ago
Most of my containers end up on k8s clusters as pods. What else would one use podman or docker for beyond local dev or maybe running a local containerized service?
jeffhuys · 9h ago
For a while we used it for scalable preview environments: specify the branch, hit deploy, and have a QA-able environment, with full database (anonymized) ready to go in 15 minutes (DB was time bottleneck).

We ditched it for EC2s which were faster and more reliable while being cheaper, but that's beside the point.

Locally I use OrbStack by the way, much less intrusive than Docker Desktop.

spicyusername · 9h ago
EC2 and containers are orthogonal technologies, though.

Containers are the packaging format, EC2 is the infrastructure. (docker, crio, podman, kata, etc are the runtime)

When deploying on EC2, you still need to deploy your software, and when using containers you still need somewhere to deploy to.

jeffhuys · 8h ago
True; I conflate the two often. The EC2s run on an AMI, same as production does, which before was a docker image.
spicyusername · 7h ago
Arguably it would still be beneficial to use container images when building your AMIs (vs installing via apt or copying your binaries), since using container images still solves the "How do I get my software to the destination?" and the "How do I run my software and give it the parameters it needs?" problems in a universal way.
jeffhuys · 2h ago
In what way do you mean this? I’ve built two jobs for the preview envs: DeployEnvironment (runs the terraform stuff that starts the ec2/makes s3 buckets/creates api gateway/a lot of other crap) and then ProvisionEnvironment (zips the local copy of the branch and rsyncs it to the environment, and some other stuff). I build the .env file in ProvisionEnvironment, which accounts for the parameters. I’d love to get your point of view here!
spicyusername · 47m ago
Using a container image as your "artifact" is often a good approach to distributing your software.

    zips the local copy of the branch and rsyncs it to the environment, and some other stuff
This would happen in your Dockerfile, and then the process of actually "installing" your application is just docker run (or kubectl apply, etc), which is an industry standard requiring no specialized knowledge about your application (since that is abstracted away in your Dockerfile).

You're basically splitting the process of building and distributed your application into: write the software, build the image, deploy the image.

Everyone who uses these tools, which is most people by this point, will understand these steps. Additionally, any framework, cloud provider, etc that speaks container images, like ECS, Kubernetes, Docker Desktop, etc can manage your deployments for you, since they speak container images. Also, the API of your container image (e.g. the environment variables, entrypoint flags, and mounted volumes it expects) communicate to those deploying your application what things you expect for them to provide during deployment.

Without all this, whoever or whatever is deploying your application has to know every little detail and you're going to spend a lot of time writing custom workflows to hook into every different kind of infrastructure you want to deploy to.

anticorporate · 8h ago
There are many SMB application use cases that sit somewhere on the spectrum between "self-hosted" and "enterprise" where docker/podman hit the sweet spot in terms of complexity and cost versus reliability. Containers have become a handy application packaging format (just don't tell yourself the isolation provides meaningful security on its own).
sc68cal · 8h ago
Someone has to manage your kubernetes environment. Depending on the nature of your workload, it may not be worth running kubernetes and instead just run everything via podman on your hosts. It really depends on how much investment you have in Kubernetes YAMLs.
devjab · 7h ago
I suspect a lot of places pour them into Azure Kubernetes Services and Azure Container Apps for this exact reason. I assume other cloud providers have similar services.

Though as someone who's used a lot of Azure infrastructure as code with Bicep and also done the K8s YAML's I'm not sure which is more complicated at this point to be honest. I suspect that depends on your k8s setup of course.

GrumpyGoblin · 8h ago
Podman networking is extremely unreliable. Our company made an effort to switch to get away from Docker Enterprise. We had to kill the effort because multiple people had random disconnects and packet drops with a range of services including K8S, Kafka, and even basic applications, both internal and in host network.

```
> kubectl port-forward svc/argocd-server -n argocd 8080:443
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
Handling connection for 8080
Handling connection for 8080
Handling connection for 8080
E0815 09:12:51.276801 27142 portforward.go:413] an error occurred forwarding 8080 -> 8080: error forwarding port 8080 to pod 87b32b48e6c729565b35ea0cefe9e25d8f0211cbefc0b63579e87a759d14c375, uid : failed to execute portforward in network namespace "/var/run/netns/cni-719d3bfa-0220-e841-bd35-fe159b48f11c": failed to connect to localhost:8080 inside namespace "87b32b48e6c729565b35ea0cefe9e25d8f0211cbefc0b63579e87a759d14c375", IPv4: dial tcp4 127.0.0.1:8080: connect: connection refused IPv6 dial tcp6 [::1]:8080: connect: connection refused
error: lost connection to pod
```

People had other issues also. It looks nice and I would love to use it, but it just currently isn't mature/stable enough.

dpkirchner · 7h ago
I've had similar issues using kubectl to access some tools that made a lot of requests (polling, which is something argocd does I believe).

Setting this environment variable helped a lot: KUBECTL_PORT_FORWARD_WEBSOCKETS=true

Note: because Google's quality is falling you won't be able to find this variable using their search, but you can read about it by searching Bing or asking an LLM.
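For reference, the tip above amounts to the following (the ArgoCD service and ports are just the examples from the thread; whether it helps depends on your kubectl version):

```shell
export KUBECTL_PORT_FORWARD_WEBSOCKETS=true
kubectl port-forward svc/argocd-server -n argocd 8080:443
```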

condenser · 6h ago
I'm interested in using podman for my homeserver because of the deamonless and rootlessnes nature, but I haven't found a convenient replacement for docker compose.

On my dev machine I do `docker compose up -d --build` in the directory of the Dockerfile, and it builds, uploads, and restarts the service on the server. In the podman world you're supposed to use Quadlets, which can be rsynced to the server, but I haven't found something simple for the build-step that doesn't involve an external registry or manually transferring the image.

What's the end-to-end solution for this?

gucci-on-fleek · 20m ago
podman-compose [0] is a mostly drop-in replacement for local usage, but I have no idea if it works remotely. "podman image scp" [1] looks like it could be helpful though.

[0]: https://github.com/containers/podman-compose

[1]: https://docs.podman.io/en/latest/markdown/podman-image-scp.1...
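Building on the `podman image scp` pointer above, a registry-free build-and-deploy loop might look like this (host name, image tag, and the quadlet-managed service name are all assumptions):

```shell
# Build locally, copy the image to the server over SSH, then restart the service.
podman build -t myapp:latest .
podman image scp myapp:latest user@server::
ssh user@server -- systemctl --user restart myapp.service
```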

rsyring · 5h ago
Might be part of your solution: https://github.com/psviderski/unregistry

> Unregistry is a lightweight container image registry that stores and serves images directly from your Docker daemon's storage.
>
> The included docker pussh command (extra 's' for SSH) lets you push images straight to remote Docker servers over SSH. It transfers only the missing layers, making it fast and efficient.

But, given that podman rootless doesn't have a daemon like Docker, I think using Podman in a push-to-remote scenario is just going to have more pieces for you to manually manage.

There are PaaS solutions out there, like Dokku, that would give you a better devx but will also bring additional setup and complexity.

rsyring · 5h ago
I believe rootless containers require Linux user namespaces which have historically been the source of many vulnerabilities: https://news.ycombinator.com/item?id=43517734

I'm conflicted about whether or not it's better to run a root daemon that can launch unprivileged non-root containers or run rootless containers launched by a non-root user.

Anyone have thoughts or more definitive resources they could point to that discuss the tradeoffs?

betaby · 4h ago
Can `podman` use `macvlan` network if run as non-root? I'm talking about that scenario https://stackoverflow.com/questions/59515026/how-do-i-replic... but for non-root containers.
vb-8448 · 8h ago
Am I the only one who wishes Docker Swarm had become the standard instead of k8s?
papascrubs · 6h ago
Swarm syntax is much better than the YAML sprawl of k8s. That said, the underlying engine was pretty buggy, and the lack of customization for lower-level components was a pain. Their whole plugin system was a great idea, but the actual plugins developed by vendors ended up being very brittle. Still, yeah, I'd prefer that timeline.
skor · 6h ago
no, wish swarm had a feature or two more, but I'm happy with how simple it is.
vb-8448 · 3h ago
The sense of my comment was: I would have liked k8s to look like Swarm.
leetrout · 8h ago
Or Nomad...
Lariscus · 4h ago
Rootless podman in combination with systemd quadlet works great for me. I host all my personal services like that. Having containers integrated directly into systemd makes mapping out dependencies between mounts and other non containerized services much more reliable and easier.
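For anyone curious what the quadlet setup mentioned above looks like, a minimal `.container` unit is roughly this (image and port are placeholders; quadlet generates a systemd service from it on daemon-reload):

```ini
# ~/.config/containers/systemd/web.container
[Unit]
Description=Example rootless web container

[Container]
Image=docker.io/library/nginx:latest
PublishPort=8080:80

[Install]
WantedBy=default.target
```

`systemctl --user daemon-reload && systemctl --user start web.service` then brings it up, and ordinary systemd dependencies (`After=`, `Requires=`) work as usual.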
tsoukase · 6h ago
Podman is fully compatible with Docker and uses the same command syntax. It coexists perfectly with Cockpit and Kubernetes. Daemonless, rootless, and open source are killer features and reasons to switch.
Eji1700 · 5h ago
As bad as the horror stories about switching might be, I don't see how docker can remain as is. The level of vulnerability it causes seems like a fundamental flaw. I assume docker itself hasn't changed because it took off so fast and now it'd be breaking changes galore, but eventually everyone is going to have to pull the trigger.
dathinab · 4h ago
In addition to rootless podman / root docker there are some other options:

- rootless docker, works fine, not fully sure why it's not the default by now (I do have issues from time to time, but I had the same issues with root docker)

- rootful podman

- running docker/podman daemon as a different and non root user (but have fun trying to mount anything !?)

rweichler · 6h ago
On the topic of ditching Docker, has anyone else created a custom test harness with QEMU? I feel like I'm the only person doing it this way. QEMU's target userbase is emulators in general, which is a much broader audience with way more development effort going into it, therefore I don't think it can ever go "out of fashion" or get hijacked by perverse corporate interests like Docker can. Podman seems to have the same vulnerability.
drzaiusx11 · 6h ago
This is what Lima is, which is the basis for Colima which runs on top with all the Docker runtime stuff

https://github.com/lima-vm/lima

https://github.com/abiosoft/colima

rweichler · 5h ago
Interesting, thanks. Looks much better than Docker/Podman. But seems to suffer from the same incentive issue. I think I'll stick with my raw QEMU setup, Lima seems like QEMU + batteries, but I already built the batteries.
drzaiusx11 · 3h ago
Fun fact podman desktop is just a front end to Lima, or was last I checked
rweichler · 1h ago
Yeah, seems like the power law is at play here. I made my test harness in 2020 so I didn't have a choice as Lima didn't exist back then. I should have waited a year. I'll certainly keep an eye on it
todotask2 · 5h ago
As much as I’d like to switch to Podman, I’m using Vite inside a container and need to monitor file changes on the host folder. It doesn’t detect them, and polling isn’t ideal. Does anyone have tips I might not know about yet?

But Apple Container is another option with direct IP network support on macOS Tahoe, not possible with macOS Sequoia.
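On the file-watching point above: inotify events usually don't propagate across the VM boundary on macOS, so the standard (admittedly non-ideal, as noted) fallback is Vite's polling watcher. A sketch, with the interval as a tunable assumption:

```typescript
// vite.config.ts — fallback when filesystem events don't cross the VM boundary
import { defineConfig } from 'vite'

export default defineConfig({
  server: {
    watch: {
      usePolling: true,
      interval: 300, // ms; lower = faster reloads, higher CPU
    },
  },
})
```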

acdha · 4h ago
I switched for local development work a couple of years back and can count on no fingers the number of times I thought about switching back. It let me stop thinking about the Docker Mac high CPU issue which was open for years with no progress, too.
jamra · 2h ago
Very nice. Macs have a new containers program. It’s open source on github but not ready for the current MacOS version. Might be an even better approach as it’s made by Apple.
acdha · 1h ago
Yeah, I’ll take a look after I upgrade but I haven’t had to spend time on Podman in years so there’s an upper bound for how much it can improve my life.
cyrialize · 7h ago
> I'm old enough to remember when Vagrant looked like a promised land where every development environment would look the same.

Oh no... does this mean I'm old too?!? This feels just like yesterday!

mdaniel · 5h ago
In fairness, Vagrant solves a slightly different problem than does containerization, so I doubt the need went away but rather folks realized no one cares very much about fresh VM, rather it's all about the application configs

Also, fuck them: https://github.com/hashicorp/vagrant/blob/v2.4.9/LICENSE who the fuck are they expecting to pay for Vagrant, or that "AWS gonna steal our ... vagrant?"

AsmodiusVI · 4h ago
This is a hell of a lot like saying you ditched driving cars to ride a monorail with only two stops in the wrong neighborhood. You’re comparing apples and oranges and selectively leaving out info to highlight that.
EE84M3i · 7h ago
I would love to switch to podman, but I rely on docker's credential helpers with the gcloud CLI for authentication to pull from Google Artifact Registry on Mac with hyperkit. Last time I tried, I couldn't figure out how to do this with podman machine in a way that respected gcloud credentials properly, and could only find hacks involving short-term tokens instead of proper refresh flows. Is there a guide on how to do that now?
gr4vityWall · 7h ago
I wonder if we'll see Podman running on Illumos at some point. SmartOS does currently support running Dockerized programs if I remember correctly.
mdaniel · 5h ago
Only if Illumos supports kernel namespaces; it's the same problem as "podman on XNU" (I don't mean via VM, I mean on XNU): there's nothing stopping them, but it evidently isn't important to them, either
tannhaeuser · 7h ago
me: great can target POSIX for stuff

them: not so fast here's glib

me: great can use debian for stuff

them: not so fast, here's rpm

me: great can use docker for "abstracting" over Linux diversity

them: not so fast, here's podman

disqard · 7h ago
Does anyone here have more than "initial impressions" of systemd-nspawn? It seems chronically overlooked in these sorts of threads.
s_ting765 · 3h ago
You are going to have to pry Docker from my cold dead hands.

Podman is a failed reverse-engineering of cherry-picked so-called "good" parts of Docker.

vaylian · 3h ago
What part of podman is not working for you?
s_ting765 · 2h ago
Many things, but in summary: the unfulfilled promise of a 1:1 drop-in replacement for Docker.
gtirloni · 7h ago
Same here. Podman Desktop is great. podman/buildah and the whole ecosystem is much more reliable on the server as well.
fh973 · 8h ago
Docker swarm is great on single servers. Apparently still no such thing for Podman.

Even if the tech is not top notch, Docker got a few things right on product management.

tietjens · 2h ago
How does Podman work with Alpine? Lots of talk of Ubuntu and Debian below.
rasmus-kirk · 4h ago
I use podman since I can simply enable it using home-manager:

`services.podman.enable`

This also means that it's in the reproducible part of my setup which is a bonus.
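In full module form, that's roughly the following (a sketch of a home-manager configuration; the option name is taken from the comment above):

```nix
{ ... }:
{
  # Enable rootless Podman via home-manager
  services.podman.enable = true;
}
```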

ZeroConcerns · 9h ago
I would love to love Podman, but the fact that it regularly just fails to work on my Windows laptop (the WSL2 instance seems fine, but can't be connected to, the UI just says 'starting', and none of the menu options do anything) and that I can't figure out how to make IPv6 networking work on any platform means that Docker isn't going anywhere for the foreseeable future, I'm afraid...
jimt1234 · 5h ago
thesurlydev · 5h ago
I like Podman with its API for hosting (no k8s), but I reverted back to Docker locally because of docker-compose incompatibilities. This was a year or more ago, so it may no longer be an issue.
sc68cal · 8h ago
I have been running Podman in production for a number of years now, and have been very happy with the results.

Podman pods have been super useful, and the nature of my workload is such that we just run a full pod on every host, so it's actually removed the need for an orchestrator like Kubernetes. I manage everything via Ansible and it has been great.

osigurdson · 7h ago
Why not just use Kubernetes?
jnovacho · 7h ago
> Privileged ports in rootless mode not working? Good! That's security working as intended. A reverse proxy setup is a better architecture anyway.

So, how are you supposed to run the proxy inside the container? Traefik for example? Genuinely curious.

eddieroger · 7h ago
Don't run it in rootless for your reverse proxy? Having one container running that way is still better than having all of them work that way.
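If you do want a rootless Traefik (or similar) to bind 80/443 directly, one common host-level workaround is to lower the unprivileged-port floor; whether that tradeoff is acceptable is your call:

```shell
# Allow unprivileged processes (including rootless containers) to bind ports >= 80
sudo sysctl net.ipv4.ip_unprivileged_port_start=80
# Persist across reboots
echo 'net.ipv4.ip_unprivileged_port_start=80' | sudo tee /etc/sysctl.d/50-unprivileged-ports.conf
```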
mathfailure · 7h ago
My go service in podman container requires a container restart after waking up. That's the only downside I've felt after switching from docker to podman.
pnathan · 7h ago
docker works well enough.

podman avoids having to deal with the Purchasing department, but doesn't work great.

would definitely suggest doing docker if you're up to dealing with the purchasing department.

duxbuse · 8h ago
I had an issue where docker was not producing repeatable SHAs. Somehow, time-based metadata was affecting the image SHA after every build.

Switching to podman immediately fixed it, never looked back

rubenv · 3h ago
Is there anything like Skaffold that works with Podman?
minton · 4h ago
We are are migrating back to Docker org-wide after 5 months on Podman.
dev_l1x_be · 5h ago
What is the current way of running a docker container as a systemd service? Is it podman?
ethagnawl · 5h ago
Just create a unit file that starts/stops/restarts/etc. the container.
dev_l1x_be · 4h ago
What do you mean exactly? What is the tool that you invoke? Or do you use systemd's own container-execution ability?
ethagnawl · 1h ago
Whenever I've done this, I've used the systemd service's ExecStart/ExecStop directives to run a Bash script which handles starting/stopping the container (e.g. `docker run ...`) and any other setup/teardown behavior that's appropriate.
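A stripped-down version of that pattern, inlining the commands instead of a wrapper script (unit name, image, and ports are all placeholders):

```ini
# /etc/systemd/system/myapp.service
[Unit]
Description=myapp container
After=docker.service network-online.target
Requires=docker.service

[Service]
# Clean up any stale container before starting (the leading "-" ignores failure)
ExecStartPre=-/usr/bin/docker rm -f myapp
ExecStart=/usr/bin/docker run --rm --name myapp -p 8080:80 nginx:latest
ExecStop=/usr/bin/docker stop myapp
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

With Podman, `podman generate systemd` (or quadlets) produces an equivalent unit for you.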
MadVikingGod · 3h ago
I wish podman were more common in documentation. Want to use podman with your CI platform? It will probably work via socket compatibility, but good luck finding instructions, and who knows if it's tested. How about using some service with podman? Yeah, the container is the same, but all the instructions are docker this or docker that, and god help you if they used networking or compose.

I prefer to use podman if it’s available on my system but it still hasn’t hit the critical mass needed for it to be targeted and that’s a shame.

Also, is there something like a dockerfile for buildah? I've tried a few times to understand how to use buildah and just fall back on a dockerfile, because I can't seem to wrap my head around how to put it into something IaC-like.
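For what it's worth, `buildah build` consumes ordinary Dockerfiles/Containerfiles directly, and buildah's native scripted mode is just shell, which drops into IaC easily. A rough sketch (base image and packages are examples):

```shell
# Build an image step by step, no Dockerfile required
ctr=$(buildah from docker.io/library/alpine:latest)
buildah run "$ctr" -- apk add --no-cache python3
buildah config --entrypoint '["python3"]' "$ctr"
buildah commit "$ctr" my-python-image
buildah rm "$ctr"
```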

der_gopher · 7h ago
I did the same first, then went with Colima, nowadays with Orbstack. My point is, there are so many great options apart from Docker Desktop
ravenstine · 8h ago
Could never get rootless to work properly. I wanted to like Podman, but every time I wanted to use it there was some bump in the road that made me give up and use Docker.
Hackbraten · 8h ago
Exactly this.

Plus, I don’t see the point in babysitting a separate copy of a user space if systemd has `DynamicUser`.

CuriouslyC · 7h ago
The out of the box DX on podman needs improvement, the automated image management logic is bad, but besides that it's a pretty good tool.
gigatexal · 9h ago
Has anyone ported distrobox to MacOS? or know of a project doing that?

I could reverse engineer all the cool user land stuff it does to make things seamless ... but who has the time ;-)

SamInTheShell · 6h ago
It’s gotten a lot better on MacOS as the virtualization framework had virtio functionality added to it.
irusensei · 9h ago
Last third-party container I built had `COPY --link` statements that didn't work on podman. Granted, it worked just fine with a normal COPY, but it's not 1:1.
jpeeler · 8h ago
Haven't tested this yet, but --link support was just added:

https://github.com/containers/buildah/issues/4325#issuecomme...

mehdibl · 9h ago
If you mount local paths, it's a pain.

I know a lot of kubernetes fans migrate to podman, but if you use dev stacks (devcontainers, for example), podman can't replace docker!

bval · 6h ago
I have used Podman before, and it just works. However, for the past 8 months, I've been using https://orbstack.dev/, and I can confidently say that it's much better.
varispeed · 2h ago
The author lists vulnerabilities in Docker, but that doesn't mean podman is more secure. Maybe it's just more obscure, so it's not as attractive a target for hackers.

I tried to use podman, but it was largely a waste of time and I reverted to Docker. I don't have time to go through docs to figure out why something that's supposed to work isn't working.

bryangrigorie · 8h ago
Docker daemon had been freaking me out for a while. If it's really so seamless I'll look into migrating.
travisgriggs · 7h ago
I switched about 3 weeks ago. I’m not a power user of either. But I don’t smell the odor of coming enshitification anymore either. Podman doesn’t feel like a platform or an ecosystem. It’s just a tool.
Gud · 5h ago
FreeBSD jails ftw!
gregors · 7h ago
I've been using podman for around 2 years now without issue. It works great for my use cases.
whobre · 8h ago
Tried a couple of times and gave up. Just didn’t work at all for too many containers
leoc · 8h ago
To bikeshed a little, "Why I Ditched Docker for Podman, And Why You Should Too" would be better than the current headline of "Why I Ditched Docker for Podman (And You Should Too)": the "you should too" part is after all the main message of the article, not a side-point.
0_gravitas · 7h ago
I'm a fan of grammatical radicalism, the parens are appropriate if they're meant to convey a certain tone/voice in the writing, like a quick added-in-fragment after you're done saying the original title (as if you were giving a presentation).
leoc · 3h ago
It sounds as if what you want there is an em-dash.
0_gravitas · 1h ago
Also valid--but I think both work.
unethical_ban · 3h ago
The HN title was altered from the exact title to a modified title that changes the meaning. I don't understand why.
osigurdson · 7h ago
"You should too" is the part that annoys me. I use podman but if you are happy with docker, fill your boots.
sudoshred · 5h ago
Rancher Desktop was far easier to get up and running on macOS than podman for me. Drop in compatible with docker compose syntax too.
hatch_q · 8h ago
Don't give iXsystems (TrueNAS) ideas. 3 times was enough.
jayd16 · 6h ago
Any alternative for Windows containers?
coffeecoders · 6h ago
There isn't anything really.

The WSL backend is the pain point, which doesn't go away with Docker or Podman or anything else.

jayd16 · 5h ago
I actually mean windows kernel containers so I don't think wsl is involved.

Windows server can run them without docker but for local dev I'm not sure what the alternative is.

lisbbb · 6h ago
Orbstack if you have a Mac.
avereveard · 8h ago
same, not because compose, but because I wanted a software to run containers and docker only provides a solution
user94wjwuid · 4h ago
Man I’m tired
ac130kz · 6h ago
Podman compose isn't compatible with Docker compose, end of story.
mrits · 8h ago
My favorite part of the blog is how the author lets us know he is pretty young to reference vagrant as old.
the__alchemist · 7h ago
Vagrant was one of my first intros to programming (2 Scoops of Django tutorial). It and Chef were a nightmare that almost made me quit in frustration!
codesmash · 6h ago
Thank you - you've made my day! The last time I felt like this was when a lady asked me for ID at the liquor store :)
esseph · 8h ago
vagrant was released 15 years ago (2010) =)
mrits · 5h ago
About 20 years after the IRC crowd peaked which seems to be a large portion of this community
2OEH8eoCRo0 · 8h ago
I've been on Podman since 2019 since Docker didn't support Cgroups v2 for a very long time.
cramcgrab · 8h ago
Isn’t Podman IBM?
johannes1234321 · 8h ago
Podman is created by RedHat, thus IBM.
usrbinbash · 8h ago
Yeah, no, sorry.

Too many problems with things that worked out of the box with docker.

I don't have time to waste on troubleshooting yet another issue that can be solved by simply using the thing that just works.

rootless is not an argument for me, since the hosts are dedicated docker hosts anyway.

jbverschoor · 8h ago
Whenever I see port mappings I die a little inside. OrbStack makes so much more sense by default.
righthand · 6h ago
5 years ahead of you on that.

Interesting how slowly this advice has bubbled up. This is another software example where people didn’t care about resource usage and were “fine” with the Docker daemon.

xyst · 7h ago
I already use podman for local development. While docker can run in rootless mode [1] and alleviate the security concern, the concern about docker being resource intensive is still quite true (which is why I avoid using it in my self-hosted setup).

I'd rather _declaratively_ define configuration with nix: deploy NixOS to machines (rpi4/5, x86, arm) and VMs (proxmox) and manage them remotely with nixos-anywhere.

One of these days, I’ll get around to doing a write up.

[1] https://docs.docker.com/engine/security/rootless/

srid · 4h ago
Or if you are on macOS, there is https://github.com/juspay/services-flake which is based on process-compose. So you get an unified alternative to docker-compose but based on Nix and works on both platforms.
dingi · 7h ago
I use both Podman and Docker pretty regularly, and to be honest I don’t see a huge amount of differentiation or practical value in Podman for my day-to-day. It feels like another OCI runtime with some quirks compared to Docker.

One pain point for me is rootless mode: my Podman containers tend to stop randomly for no obvious reason. I tried the recommended “enable user lingering” fix and it didn’t help. I’ve never run into this with Docker.

I get the theoretical advantages, daemonless architecture, better systemd integration, rootless by default, podman generate kube, etc. But if you’re just using containers for development or straightforward deployments, Docker feels smoother and more reliable. Maybe if you’re in a security-sensitive environment or need tighter system integration Podman shines, but for my use cases I’m still not convinced.

rcarmo · 7h ago
Sorry, but I tried and just couldn’t get compose and networking to work the way I wanted - as well as permissions, volumes and a lot of other stuff…
goku12 · 4h ago
Which compose? Podman-compose [1] wasn't fully up to date with the latest compose-spec [2] the last time I checked it. However, the docker-compose v2 [3] (the one in Go, not Python [4]) is compatible with the Podman engine [5] and works like a charm for me.

I have also had no issues with networking, permissions or volumes while running as non-root user. Are you simply facing issues setting it up, or are you hitting some bugs or missing features?

[1] https://github.com/containers/podman-compose

[2] https://compose-spec.io/

[3] https://github.com/docker/compose

[4] https://github.com/docker/compose/tree/v1

[5] https://www.devopsroles.com/how-to-use-docker-compose-with-p...
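The usual way to wire the Go docker-compose to Podman is to expose Podman's Docker-compatible API socket and point `DOCKER_HOST` at it; roughly (the socket path is the common systemd user location and may differ on your distro):

```shell
# Enable Podman's Docker-compatible API socket for the current user
systemctl --user enable --now podman.socket
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
# docker-compose v2 now talks to Podman
docker compose up -d
```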

crinkly · 8h ago
Yeah I have done this as well.

I write programs that run on the target OS again. It's much easier, turnaround time is much quicker, it's faster. Even battery lasts longer on my laptop. What the hell have we done to ourselves with these numerous layers of abstraction?!?

arein3 · 7h ago
Reminder that docker still doesn't support nftables
tristor · 7h ago
Podman is really painful if you do anything interesting inside a container. It's great and simple if all you're doing is running nginx or a scripting-language runtime, but for folks who write actual software that gets compiled to target a system and uses syscalls, running in Podman is a pain in the ass unless you disable most of the "benefits". Docker, on the other hand, pretty much just works.
udev4096 · 3h ago
> just convert it to Kubernetes YAML

Either you haven't worked with k8s at scale or you're seriously suggesting an overly complex solution to the elegant docker-compose. Docker compose exists because of its simplicity and stability. I have also started using Swarm, and it doesn't get the recognition it deserves as the easiest orchestration to manage. Podman doesn't have such a thing. And yes, podman-compose is absolute garbage.

sirmike_ · 7h ago
ever try to dev with it on a Mac?

haha.

Nope. No thank you. Not sure if Windows has that issue.

mring33621 · 5h ago
I'm using 'podman' with 'k8s/kind' on my M1 macbook and have had very few issues

Maybe a couple quirks with TCP port access, but a quick convo with gemini helped me