Proxmox Virtual Environment 9.0 with Debian 13 released

171 points by speckx | 69 comments | 8/5/2025, 1:57:00 PM | proxmox.com

Comments (69)

redundantly · 15h ago
I like Proxmox a lot, but I wish it had an equivalent to VMware's VMFS. The last time I tried, there wasn't a way to use shared storage (i.e., iSCSI block devices) across multiple nodes and have a failover of VMs that use that storage. And by failover I mean moving a VM to another host and booting it there, not even keeping the VM running.
aaronius · 14h ago
That should have been possible for a while. Get the block storage to the node (FC, or configure iSCSI), configure multipathing in most situations, and then configure LVM (thick) on top and mark it as shared. One nice thing this release brings is the option to finally also have snapshots on such shared storage.
redundantly · 14h ago
I tried that, but had two problems:

When migrating a VM from one host to another it would require cloning the LVM volume, rather than just importing the group on the other node and starting the VM up.

I have existing VMware guests that I'd like to migrate over in bulk. This would be easy enough to do by converting the VMDK files, but using LVM means creating an LVM group for each VM and importing the contents of the VMDK into the LV.

aaronius · 14h ago
Hmm, staying with iSCSI: you should create one large LUN that is available on each node. Then it is important to mark the LVM storage as "shared". This way, PVE knows that all nodes access the same LVM, so copying the disk images is not necessary on a live migration.

With such a setup, PVE will create LVs on the same VG for each disk image. So no handling of multiple VGs or LUNs is necessary.

The multipathing page in the PVE wiki lays out the whole process: https://pve.proxmox.com/wiki/Multipath
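
Roughly, the process on that page boils down to the following (a hedged sketch; the portal address and all names are placeholders):

    # log in to the iSCSI target on every node
    iscsiadm -m discovery -t sendtargets -p 192.168.0.10
    iscsiadm -m node --login

    # with multipathing configured, create a thick-LVM volume group on the multipath device
    pvcreate /dev/mapper/mpatha
    vgcreate vg_shared /dev/mapper/mpatha

    # register it once as a shared LVM storage so PVE skips disk copies on live migration
    pvesm add lvm shared-lvm --vgname vg_shared --shared 1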

SlavikCA · 14h ago
Proxmox has built-in support for Ceph, which is promoted as a VMFS equivalent.

I don't have much experience with it, so I can't tell if it's really on the same level.

thyristan · 14h ago
Proxmox with Ceph can do failover when a node fails. You can configure a VM as high-availability to automatically make it boot on a leftover node after a crash: https://pve.proxmox.com/wiki/High_Availability . When you add ProxLB, you can also automatically load-balance those VMs.
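
For example, a resource can be added from the CLI with ha-manager (a minimal sketch; the VM ID and group name are placeholders):

    # mark VM 100 as an HA resource so it is restarted on a surviving node
    ha-manager add vm:100 --state started --group mygroup
    # check the cluster-wide HA state
    ha-manager status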

One advantage Ceph has over VMware is that you don't need specially approved hardware to run it. Just use any old disks/SSDs/controllers. No special extra expensive vSAN hardware.

But I cannot give you a full comparison, because I don't know all of VMware that well.

woleium · 14h ago
Yes, you can do this with Ceph on commodity hardware (or even on your compute nodes, if you are brave), or, if you have a bit of cash, with something like a NetApp doing NFS/iSCSI/NVMe-oF.

Use any of these with the built-in HA manager in Proxmox.

redundantly · 14h ago
As far as I understand it, Ceph allows you to create distributed storage by using the hardware across your hosts.

Can it be used to format a single shared block device that is accessed by multiple hosts like VMFS does? My understanding is this isn't possible.

nyrikki · 12h ago
Ceph RBD can technically be attached from multiple hosts, with its exclusive-lock feature handing write access between them, but that is not the same as true simultaneous multi-writer access.

You can set up a radosgw outside of Proxmox and use object storage.

But Ceph is fundamentally a distributed object store; shared LUNs with block-level multi-writer are fundamentally a tightly coupled solution.

If you have a legacy need such as OCFS or a quorum drive, the underlying tools that Proxmox abstracts over can sometimes be used directly, as these types of systems tend to be pets.

But if you were just using multi-writer because it was there, there are alternatives that are typically more robust under the shared-nothing model that Ceph uses.

But it is all tradeoffs and horses for courses.
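
For the curious, the RBD side of this can be inspected with the rbd CLI (a sketch; pool and image names are placeholders):

    # show which features (including exclusive-lock) an image has enabled
    rbd info rbd/vm-100-disk-0

    # exclusive-lock can be disabled to allow uncoordinated multi-attach, but its
    # dependent features (object-map, fast-diff) must go first, and without a
    # cluster filesystem on top this is an invitation to corruption
    rbd feature disable rbd/vm-100-disk-0 object-map fast-diff exclusive-lock
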

guerby · 6h ago
From what I've read about VMFS, yes: Ceph allows you to have shared block devices (RBD = RADOS block device) across a cluster, and Proxmox VE HA makes sure only one instance of the VM is active on the cluster, to avoid having multiple writers on the same disk image.
pdntspa · 13h ago
That, and configuring mount points for read-write access on the host is incredibly confusing and needlessly painful
BodyCulture · 9h ago
Seems like it still has no official support for any kind of disk encryption, so you are on your own if you fiddle that in somehow and things may break. Such a beautiful, peaceful world where disk encryption is not needed!
rcarmo · 4m ago
In the hypervisor? Because I have plenty of VMs with LUKS and BitLocker.
stormking · 5m ago
Proxmox supports ZFS and ZFS has disk encryption.
riedel · 16h ago
We are really happy with Proxmox for our 4-machine cluster in the group. We evaluated many things; they were either too light or too heavy for our users and/or our group of hobbyist admins. A while back we also set up a backup server. The forum is also a great resource. I just failed to contribute a pull request via their git email workflow and am now stuck with a non-upstreamed patch to the LDAP sync (btw, the code there is IMHO not the best part of PVE). In general, while the system works great as a monolith, extending it is IMHO really not easily possible. We have some kludges all over the place (mostly using the really good API) that could be better integrated, e.g. with the UI. At least I did not find a way to, e.g., add a new auth provider easily.
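
(For context, realm management is at least scriptable through the API, e.g. via pvesh; a hedged sketch with placeholder names follows. What's missing is a clean way to plug in a whole new provider type.)

    # create an LDAP realm through the API from the shell
    pvesh create /access/domains --realm corp-ldap --type ldap \
        --server1 ldap.example.com --base_dn "dc=example,dc=com" --user_attr uid

    # trigger a user/group sync for that realm
    pvesh create /access/domains/corp-ldap/sync --scope both
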
woleium · 14h ago
Can’t it use PAM? So many options for providers there.
riedel · 14h ago
It was mostly about syncing groups with Proxmox. It worked after patching the LDAP provider to support our schema; the comment was more about the extensibility problem when doing this. Actually, now that you say this, I wonder how PAM could work: I have only ever used it for providing shell access, and we typically do not have any local users on the machine. I have never used PAM in a way that doesn't grant local execution privileges (avoiding which is the whole point of a VM host).
BLKNSLVR · 7h ago
For my homelab I switched from ESXi to Proxmox a few years ago because the consumer-level hardware I mostly used didn't have Intel network cards and ESXi didn't support the Realtek network devices that were ubiquitous in consumer gear at the time.

Love Proxmox, it's done everything I needed of it.

I don't use it to anywhere near its potential, but I can vouch for the robustness of its backup process. I've had to restore more than a handful of VMs for various reasons, and they've all been rock-solid upon restoration. I would like to use its high-availability features, but haven't needed them and don't really have the time to tinker so much these days.

avtar · 13h ago
I would see Proxmox come up in so many homelab-type conversations, so I tried 8.* on a mini PC. The impression I got was that the project probably provides the most value in a clustered environment, or even on a single node if someone prefers using a web UI. What didn't seem very clear was an out-of-the-box way of declaring VM and container configurations [1] that could then be version controlled. Common approaches seemed to involve writing scripts (roughly like the sketch after the links) or reaching for other tools like Ansible, whereas something like LXD/Incus makes this easier [2] by default. Or maybe I'm missing some details?

[1] https://forum.proxmox.com/threads/default-settings-of-contai...

[2] https://linuxcontainers.org/incus/docs/main/howto/instances_...
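
A minimal sketch of that scripted route (the VM ID, image and storage names are placeholders):

    # VM definition kept as a script in version control
    qm create 9000 --name debian-tmpl --memory 2048 --cores 2 \
        --net0 virtio,bridge=vmbr0 --scsihw virtio-scsi-pci
    qm importdisk 9000 debian-12-generic-amd64.qcow2 local-lvm
    qm set 9000 --scsi0 local-lvm:vm-9000-disk-0 --boot order=scsi0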

rcarmo · 3m ago
cloud-init support is sorely missed.
m463 · 7h ago
I have similar feelings.

I really wish proxmox had nicer container support.

If a Proxmox container config could specify a Dockerfile as an option, I think Proxmox would be 1000% more useful (and successful).

Instead with LXC and their config files, I feel like i have to put on a sysadmin hat to get a container going. Seems like picking up an adding machine to do my taxes.

(also, lxc does have a way to specify a container, but it is not used)

Instead I have written scripts to automate some of this, which helps (the sketch below shows the flavor).

There is also cloud-init, but I found it sort of unfriendly and never went anywhere with it.
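
The LXC route being described boils down to something like this (a sketch; the template and storage names are placeholders):

    # fetch a container template, then create and start a container from it
    pveam update
    pveam download local debian-12-standard_12.7-1_amd64.tar.zst
    pct create 101 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
        --hostname demo --memory 512 --net0 name=eth0,bridge=vmbr0,ip=dhcp
    pct start 101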

cyberpunk · 13h ago
There are various terraform providers for proxmox.
whalesalad · 11h ago
Yeah, this is the way. You end up treating Proxmox like it is AWS and asserting your desired state against it.
cassianoleal · 7h ago
You don't. The existing providers only deal with spinning up VMs and containers.

Most of the operations and configurations that exist in Proxmox are not present in the providers, like SDN, firewall and the various storage options. The reason is that the API is all over the place and really badly documented.

Note that I still quite like the product. I have a 2- (soon to be 3-) node cluster at home and at this moment I have no plans to migrate away from it.

I looked into Incus but it's also far from mature, with an API and general data structures full of inconsistencies, and again the documentation is not quite there.

PeterStuer · 15h ago
I have only recently moved to Proxmox as the Hyper-V licensing became too oppressive for hobby/one-person project use.

Can someone tell me whether Proxmox upgrades are usually smooth sailing, or should I prepare for this being an endeavour?

guerby · 6h ago
Proxmox ships a tool that verifies whether everything is right for the upgrade (e.g. pve8to9), and the wiki documentation is extensive and kept up to date.

At work we started with 6.x a few years ago, upgraded to 7.x a bit after release, then did the same with 8.x, without issue.

We'll wait a reasonable while before upgrading to 9.x but I don't expect any issue.

Note: same with the integrated Ceph update; we did Reef to Squid a few weeks ago, no issue.
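
The pre-flight check is a single command per node (a sketch; assuming the tool mirrors the flags of its pve7to8 predecessor):

    # run the full upgrade checklist before touching the package sources
    pve8to9 --full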

thyristan · 14h ago
Never had a problem with them. Just put each node in maintenance, migrate the VMs to another node, update, move the VMs back. Repeat until all nodes are updated.
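
Per node, that loop looks roughly like this (a minimal sketch for routine updates; the node name is a placeholder):

    # drain the node: HA-managed guests are migrated away automatically
    ha-manager crm-command node-maintenance enable pve1

    apt update && apt dist-upgrade
    reboot

    # after the node is back up, allow guests to move back
    ha-manager crm-command node-maintenance disable pve1
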
zamadatix · 13h ago
The "update" step is a bit of a "draw the rest of the owl" in the case of major version updates like this 8.x -> 9.x release. It also depends how many features you're using in that cluster as to how complicated the owl is to draw.

That said, I just made it out alright in my home lab without too much hullabaloo.

pimeys · 13h ago
Did you do the backup and full reinstall, or just upgrade with apt?

I should do the same update this weekend.

woleium · 14h ago
If you are using hardware passthrough for, e.g., NVIDIA cards, you have to update your VMs as well, but other than that it's been pretty painless in my experience (over 15 years).
sschueller · 16h ago
The official release of Debian Trixie is not until the 9th...
piperswe · 16h ago
Trixie is under a heavy freeze right now; just about all that's changing between now and the 9th is critical bug fixes. Yeah, it's not ideal for Proxmox to release an OS based on Trixie this early, but nothing's really going to change in the next few days on the Debian side except for final release ISOs being uploaded.
zozbot234 · 15h ago
They might drop packages between now and the stable release. An official Debian release won't generally drop packages unless they've become totally unusable to begin with.
tlamponi · 9h ago
We can manage anything, including package builds, ourselves if the need should arise; we also monitor Debian and its release-critical bugs closely. We see no realistic potential for any Proxmox-relevant package to disappear, at least nothing more likely than the same thing happening after the 9th.

FWIW, we have staff members who are also directly involved with Debian, which makes things a bit easier.

piperswe · 15h ago
Given that Proxmox operates their own repos for their custom packages, and users don't typically install their own packages on top of Proxmox, if a package they need gets dropped due to RC bugs (etc.) they can upload it to their own repo.
guerby · 6h ago
Proxmox 8 was also released just before Debian 12 bookworm.
znpy · 16h ago
Debian repositories get frozen months in advance of a release, and pretty much only security patches are imported after that. Maybe some package gets rebuilt, or stuff like that. No breaking changes.

I wouldn't expect many changes, if any at all, between today (Aug 5th) and the expected release date (Aug 9th).

cowmix · 16h ago
Yeah, it’s wild how many projects—especially container-based ones—have already jumped to Debian Trixie as their “stable” base, even though it’s still technically in testing. I got burned when linuxserver.io’s docker-webtop suddenly switched to Trixie and broke a bunch of my builds that were based on Bookworm.

As you said, Debian 13 officially lands on August 9, so it’s close—but in my (admittedly limited) experience, the testing branch still feels pretty rough. I ran into way more dependency chaos—and a bunch of missing or deprecated packages—than I expected.

If you’re relying on container images that have already moved to Trixie, heads up: it’s not quite seamless yet. Might be safer to stick with Bookworm a bit longer, or at least test thoroughly before making the jump.

sgc · 15h ago
When did you run into your problems? Is there a chance they are largely resolved at this point?
Pet_Ant · 16h ago
Yeah, but what is the rush? I mean 1) what if something critical changes, and 2) I could easily see some setting somewhere being at "-rc" which causes a bug later.

Frankly, not waiting half a week is a bright orange flag to me.

tlamponi · 9h ago
The linked forum post has an FAQ entry; this was a carefully weighed decision with many factors playing a role, including having more staff available to manage any potential release fall-out on our side. And we're in general pretty much self-sufficient for any need that should arise, always have been, and we provide enterprise support offerings that back our official support guarantees if your org has the need for that.

Finally, we provide bug and security updates for the previous stable release for over a year, so no user is in any rush to upgrade now; they can safely choose any time between now and August 2026.

grmone · 15h ago
"Proxmox VE is using a newer Linux kernel 6.14.8-2 as stable default enhancing hardware compatibility and performance."

kernel.org doesn't even list version 6.14 anymore. Do they backport security patches on their own?

Arrowmaster · 14h ago
I don't know what they are currently doing, but historically Proxmox uses Debian as the OS base but with Ubuntu as the kernel source. So they rely on the Ubuntu security team backporting security patches for the kernel.
nativeit · 14h ago
I wouldn’t expect it to be, as kernel.org doesn't list distribution kernels.

(Ignore this if it’s irrelevant and I’m missing the point, which is always a distinct possibility)

> Many Linux distributions provide their own "longterm maintenance" kernels that may or may not be based on those maintained by kernel developers. These kernel releases are not hosted at kernel.org and kernel developers can provide no support for them.

> It is easy to tell if you are running a distribution kernel. Unless you downloaded, compiled and installed your own version of kernel from kernel.org, you are running a distribution kernel. To find out the version of your kernel, run uname -r:

throw0101c · 15h ago
Highlights of the release (release notes):

* https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_9.0

Takennickname · 16h ago
"It’s also possible to install Proxmox VE 9.0 on top of Debian."

Has that always been the case? I have a faint memory of trying once and not being able to with Proxmox 7.x

carlhjerpe · 16h ago
You can even install Proxmox on NixOS now (no official support ofc) though https://github.com/SaumonNet/proxmox-nixos

Which I think is really cool since it means their stuff is "truly open-source" :)

robeastham · 16h ago
I'm pretty sure it's been the case since at least 7.0, as I've done it a few times on hosts such as Scaleway that only offered a Debian base image for my machine.
m463 · 6h ago
I wonder about things like this.

It seems to me people who try things like this might also be ok with spaces in filenames, or replacing bash with csh...

It should work, but you might want to cross your fingers.

SirMaster · 16h ago
Yes it's always been the case. I installed Proxmox 3.4 (based on Debian 7) this way originally, and have been upgrading ever since with no issues.
throw0101c · 15h ago
> Has that always been the case? I have a faint memory of trying once and not being able to with Proxmox 7.x

We did it for 7.x [1] and it worked fine (since upgraded things in-place to 8.x).

[1] https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11...
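
The wiki route boils down to roughly the following (a sketch from the Bookworm-era instructions; adjust the suite names for newer releases):

    # add the Proxmox VE no-subscription repo and its signing key
    echo "deb [arch=amd64] http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
        > /etc/apt/sources.list.d/pve-install-repo.list
    wget https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg \
        -O /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg

    # pull in the PVE kernel and stack
    apt update && apt full-upgrade
    apt install proxmox-ve postfix open-iscsi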

zozbot234 · 15h ago
Somewhat annoyingly, Proxmox relies on a non-Debian kernel for at least some of its features. This definitely made a difference w/ Bookworm (which was on the 6.1 kernel release series), not sure about Trixie (which will be on 6.12).
ChocolateGod · 16h ago
Yeh. It's useful when trying to install onto partition setups the built-in installer doesn't support OOTB.

But, things like proxmox-boot-tool may not work.

rcarmo · 16h ago
I've done it a few times--8.x for sure, maybe earlier, but I've now been using it for too long to remember accurately.
oakwhiz · 16h ago
I do it this way every time.
xattt · 15h ago
Ugh, I’d love to make the leap, but I don’t want the headache of trying to get SR-IOV going again for my integrated Intel graphics.
zozbot234 · 15h ago
Why not run virtio-gpu in the guest?
xattt · 13h ago
Would Plex QuickSync transcoding work?
zamadatix · 13h ago
Windows.
yla92 · 15h ago
> The Proxmox VE mobile interface has been thoroughly reworked, using the new Proxmox widget toolkit powered by the Rust-based Yew framework.

First time hearing about Yew (yew.rs). Is it like writing frontend code in Rust that gets compiled to WASM? Is anyone using it (other than the Proxmox folks, of course)?

tlamponi · 9h ago
> Is it like writing frontend code in Rust and compiled to WASM ?

Exactly. It's actually quite lightweight and stable, plus mostly finished, so don't let the slower upstream releases discourage you from trying it more extensively.

We built a widget library around Yew and native web technologies, with our products as the main target; you can check out:

https://github.com/proxmox/proxmox-yew-widget-toolkit

And the example repo:

https://github.com/proxmox/proxmox-yew-widget-toolkit-exampl...

for code and a little bit more info. We definitely need to clean up a few documentation and resource things, but we tried to make it reusable by others without tying them to our API types or the like.

FWIW, the in-development Proxmox Datacenter Manager also uses our Rust/Yew-based UI; it's basically our first 100% Rust project (well, minus the Linux/Debian foundation, naturally, but it's getting there ;-)

dylanowen · 15h ago
I'm using it for a browser extension, just because I wanted to code more in rust. It's great at what it does and has all the same paradigms from React. The best use case though, would be if all your code is already rust. If you have a complex UI I'd probably use react and typescript.
throw0101c · 15h ago
I've heard good things about XCP-ng [1] as well: anyone use both that can lay out the pros/cons of each?

[1] https://en.wikipedia.org/wiki/XCP-ng

nick__m · 14h ago
We tried both at work and they were more or less equivalent, but Proxmox appears to have more momentum behind it. Also, distributed storage in Proxmox is based on Ceph, while XCP-ng uses the obscure XOSTOR.
nirav72 · 15h ago
I can't speak to the pros and cons of XCP-ng; I've been meaning to try it out. But it feels like there just isn't a large enough community around it yet. At least not like Proxmox, which has seen a surge in usage and popularity after the Broadcom fiasco.
unixhero · 10h ago
Maybe a surge of newcomers, but the community was strong before and irrespective of the fiasco.
nativeit · 14h ago
Still use/love Proxmox daily. Congrats to the team on the latest release!
krisknez · 12h ago
I would love it if Proxmox had a UI for port forwarding. I hate doing it through the terminal. I like how LXD has a UI for that.
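
For reference, the terminal route being complained about is typically a NAT rule like this (a sketch; the interface and addresses are placeholders):

    # forward host port 8080 to a guest web server at 192.168.1.50:80
    iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 8080 \
        -j DNAT --to-destination 192.168.1.50:80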