Proxmox Virtual Environment 9.0 with Debian 13 released

141 points by speckx | 56 comments | 8/5/2025, 1:57:00 PM | proxmox.com ↗

Comments (56)

redundantly · 5h ago
I like Proxmox a lot, but I wish it had an equivalent to VMware's VMFS. The last time I tried, there wasn't a way to use shared storage (i.e., iSCSI block devices) across multiple nodes and have a failover of VMs that use that storage. And by failover I mean moving a VM to another host and booting it there, not even keeping the VM running.
aaronius · 4h ago
That has been possible for a while. Get the block storage to the node (FC, or configure iSCSI), configure multipathing in most situations, and then configure LVM (thick) on top and mark it as shared. One nice thing this release brings is the option to finally also have snapshots for such shared storage.
redundantly · 4h ago
I tried that, but had two problems:

When migrating a VM from one host to another it would require cloning the LVM volume, rather than just importing the group on the other node and starting the VM up.

I have existing VMware guests that I'd like to migrate over in bulk. This would be easy enough to do by converting the VMDK files, but using LVM means creating an LVM group for each VM and importing the contents of the VMDK into the LV.
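For what it's worth, Proxmox can import a VMDK into an existing storage with `qm importdisk`, which avoids hand-creating an LV per guest; a rough sketch (the VM ID, file path, and storage name here are placeholders):

```shell
# Create the target VM shell first (ID 120 and options are placeholders)
qm create 120 --name migrated-guest --memory 4096 --net0 virtio,bridge=vmbr0

# Import the VMDK; Proxmox converts it and allocates the volume on the
# target storage ("shared-lvm" is a placeholder storage name)
qm importdisk 120 /mnt/vmware/guest.vmdk shared-lvm

# Attach the imported disk and make it bootable
qm set 120 --scsi0 shared-lvm:vm-120-disk-0 --boot order=scsi0
```

Scripting a loop over this for a bulk migration is straightforward, though a test run on one guest first is advisable.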

aaronius · 4h ago
Hmm, staying with iSCSI: you should create one large LUN that is available on each node. Then it is important to mark the LVM storage as "shared". This way, PVE knows that all nodes access the same LVM, so copying the disk images is not necessary on a live migration.

With such a setup, PVE will create LVs on the same VG for each disk image. So no handling of multiple VGs or LUNs is necessary.

The multipathing PVE wiki page lays out the whole process: https://pve.proxmox.com/wiki/Multipath
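Condensed, the wiki's process looks roughly like this (a sketch only; the portal IP, device name, and VG/storage names are placeholders, and the multipath configuration itself is elided):

```shell
# On every node: discover and log in to the iSCSI target
iscsiadm -m discovery -t sendtargets -p 192.0.2.10
iscsiadm -m node --login

# Configure multipathing (see the PVE Multipath wiki page), then create
# the PV and VG once, on a single node, on the multipath device
pvcreate /dev/mapper/mpatha
vgcreate vg_shared /dev/mapper/mpatha

# Register the VG cluster-wide as shared LVM storage
pvesm add lvm shared-lvm --vgname vg_shared --shared 1
```

The `--shared 1` flag is the key piece: it tells PVE every node sees the same VG, so live migration skips the disk copy.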

SlavikCA · 5h ago
Proxmox has built-in support for Ceph, which is promoted as a VMFS equivalent.

I don't have much experience with it, so I can't tell if it's really on the same level.

thyristan · 4h ago
Proxmox with Ceph can do failover when a node fails. You can configure a VM as high-availability to automatically make it boot on a leftover node after a crash: https://pve.proxmox.com/wiki/High_Availability . When you add ProxLB, you can also automatically load-balance those VMs.

One advantage Ceph has over VMware is that you don't need specially approved hardware to run it. Just use any old disks/SSDs/controllers. No special extra expensive vSAN hardware.

But I cannot give you a full comparison, because I don't know all of VMware that well.
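For a sense of scale, the Ceph-plus-HA setup described above boils down to a handful of commands on a PVE cluster (a sketch; the network range, pool name, and VM ID are placeholders):

```shell
# Install and initialize Ceph on the cluster (run on each node as documented)
pveceph install
pveceph init --network 10.0.0.0/24

# Create a replicated pool; --add_storages registers it as RBD storage in PVE
pveceph pool create vmpool --add_storages

# Mark a VM as highly available so it restarts on a surviving node
ha-manager add vm:100 --state started
```

Monitor and OSD creation steps are omitted here; the High Availability wiki page linked above covers the full procedure.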

woleium · 4h ago
Yes, you can do this with Ceph on commodity hardware (or even your compute nodes, if you are brave), or if you have a bit of cash, something like a NetApp to do NFS/iSCSI/NVMe-oF.

Use any of these with the built-in HA manager in Proxmox.

redundantly · 4h ago
As far as I understand it, Ceph allows you to create distributed storage by using the hardware across your hosts.

Can it be used to format a single shared block device that is accessed by multiple hosts like VMFS does? My understanding is this isn't possible.

nyrikki · 2h ago
Ceph RBD can technically support shared access through exclusive locking, but it won't be the same as true multi-writer.

You can set up a radosgw outside of Proxmox and use object storage.

But Ceph is fundamentally a distributed object store; shared LUNs with block-level multi-writer are a tightly coupled solution.

If you have a legacy need for OCFS or a quorum drive, the underlying tools that Proxmox abstracts over can sometimes be used directly, as these types of systems tend to be pets.

But if you were just using multi-writer because it was there, there are alternatives under the shared-nothing model Ceph uses that are typically more robust.

But it is all tradeoffs and horses for courses.

pdntspa · 3h ago
That, and configuring mount points for read-write access on the host is incredibly confusing and needlessly painful
riedel · 6h ago
We are really happy with Proxmox for our 4-machine cluster in the group. We evaluated many things; they were either too light or too heavy for our users and/or our group of hobbyist admins. A while back we also set up a backup server. The forum is also a great resource. I just failed to contribute a pull request via their git email workflow and am now stuck with a non-upstreamed patch to the LDAP sync (btw. the code there is IMHO not the best part of PVE). In general, while the system works great as a monolith, extending it is IMHO really not easily possible. We have some kludges all over the place (mostly using the really good API) that could be better integrated, e.g. with the UI. At least I did not find a way to e.g. add a new auth provider easily.
woleium · 4h ago
Can’t it use PAM? So many options for providers there.
riedel · 4h ago
It was mostly about syncing groups with Proxmox. Worked by patching the LDAP provider to support our schema. The comment was more about the extensibility problem when doing this. Actually, now that you say it, I wonder how PAM could work; I've only ever used it for providing shell access, and we typically do not have any local users on the machine. I've never used PAM in a way that doesn't provide local execution privileges (not having those is the whole point of a VM host).
avtar · 3h ago
I would see Proxmox come up in so many homelab-type conversations, so I tried 8.* on a mini PC. The impression I got was that the project probably provides the most value in a clustered environment, or even on a single node if someone prefers using a web UI. What didn't seem very clear was an out-of-the-box way of declaring VM and container configurations [1] that could then be version controlled. Common approaches seemed to involve writing scripts or reaching for other tools like Ansible, whereas something like LXD/Incus makes this easier [2] by default. Or maybe I'm missing some details?

[1] https://forum.proxmox.com/threads/default-settings-of-contai...

[2] https://linuxcontainers.org/incus/docs/main/howto/instances_...
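Absent a first-class declarative format, the common scripted approach amounts to keeping idempotent `pct`/`qm` invocations under version control; a minimal sketch (ID, template path, and all options are placeholders):

```shell
# A "declared" container, recreated from a version-controlled script
pct create 200 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
    --hostname web01 \
    --memory 1024 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp \
    --rootfs local-lvm:8
```

This works, but it is imperative at heart: the script creates state rather than asserting it, which is the gap tools like Terraform or Incus profiles fill.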

cyberpunk · 3h ago
There are various terraform providers for proxmox.
whalesalad · 1h ago
Yeah, this is the way. You end up treating Proxmox like it is AWS and asserting your desired state against it.
PeterStuer · 5h ago
I have only recently moved to Proxmox as the Hyper-V licensing became too oppressive for hobby/one-person project use.

Can someone tell me whether Proxmox upgrades are usually smooth sailing, or should I prepare for this being an endeavour?

thyristan · 4h ago
Never had a problem with them. Just put each node in maintenance, migrate the VMs to another node, update, move the VMs back. Repeat until all nodes are updated.
zamadatix · 3h ago
The "update" step is a bit of a "draw the rest of the owl" in the case of major version updates like this 8.x -> 9.x release. It also depends how many features you're using in that cluster as to how complicated the owl is to draw.

That said, I just made it out alright in my home lab without too much hullabaloo.
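For the 8.x to 9.x case specifically, the owl roughly gets drawn like this (a sketch based on the general in-place upgrade pattern; run it per node after migrating VMs away, and check the official upgrade guide first):

```shell
# Run the upgrade checklist tool shipped with up-to-date PVE 8 installs
pve8to9 --full

# Point APT at the Trixie-based repositories, then dist-upgrade
sed -i 's/bookworm/trixie/g' /etc/apt/sources.list /etc/apt/sources.list.d/*.list
apt update
apt dist-upgrade
```

The `pve8to9` checker flags most cluster-specific hazards (storage, Ceph, HA state) before you commit to the upgrade.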

pimeys · 3h ago
Did you do the backup and full reinstall or just with apt?

I should do the same update this weekend.

woleium · 4h ago
If you are using hardware passthrough for e.g. NVIDIA cards you have to update your VMs as well, but other than that it's pretty painless in my experience (over 15 years).
grmone · 5h ago
"Proxmox VE is using a newer Linux kernel 6.14.8-2 as stable default enhancing hardware compatibility and performance."

kernel.org doesn't even list version 6.14 anymore. Do they backport security patches on their own?

Arrowmaster · 4h ago
I don't know what they are currently doing, but historically Proxmox uses Debian as the OS base but with Ubuntu as the kernel source. So they rely on the Ubuntu security team backporting security patches for the kernel.
nativeit · 4h ago
I wouldn’t expect it to be, as kernel.org doesn't list distribution kernels.

(Ignore this if it’s irrelevant and I’m missing the point, which is always a distinct possibility)

> Many Linux distributions provide their own "longterm maintenance" kernels that may or may not be based on those maintained by kernel developers. These kernel releases are not hosted at kernel.org and kernel developers can provide no support for them.

> It is easy to tell if you are running a distribution kernel. Unless you downloaded, compiled and installed your own version of kernel from kernel.org, you are running a distribution kernel. To find out the version of your kernel, run uname -r:

sschueller · 6h ago
The official release of Debian Trixie is not until the 9th...
piperswe · 6h ago
Trixie is under a hard freeze right now; just about all that's changing between now and the 9th are critical bug fixes. Yeah, it's not ideal for Proxmox to release an OS based on Trixie this early, but nothing's really going to change in the next few days on the Debian side except for final release ISOs being uploaded.
zozbot234 · 5h ago
They might drop packages between now and the stable release. An official Debian release won't generally drop packages unless they've become totally unusable to begin with.
tlamponi · 1m ago
We can manage anything, including package builds, ourselves if the need arises; we also monitor Debian and release-critical bugs closely. We see no realistic potential for any Proxmox-relevant package to disappear, at least nothing more likely than that happening after the 9th.

FWIW, we have staff members who are also directly involved with Debian, which makes things a bit easier.

piperswe · 5h ago
Given that Proxmox operates their own repos for their custom packages, and users don't typically install their own packages on top of Proxmox, if a package they need gets dropped due to RC bugs (etc.) they can upload it to their own repo.
znpy · 6h ago
Debian repositories get frozen months in advance of a release, and pretty much only security patches are imported after that. Maybe some package gets rebuilt, or stuff like that. No breaking changes.

I wouldn't expect many changes, if any at all, between today (Aug 5th) and the expected release date (Aug 9th).

cowmix · 6h ago
Yeah, it’s wild how many projects—especially container-based ones—have already jumped to Debian Trixie as their “stable” base, even though it’s still technically in testing. I got burned when linuxserver.io’s docker-webtop suddenly switched to Trixie and broke a bunch of my builds that were based on Bookworm.

As you said, Debian 13 officially lands on August 9, so it’s close—but in my (admittedly limited) experience, the testing branch still feels pretty rough. I ran into way more dependency chaos—and a bunch of missing or deprecated packages—than I expected.

If you’re relying on container images that have already moved to Trixie, heads up: it’s not quite seamless yet. Might be safer to stick with Bookworm a bit longer, or at least test thoroughly before making the jump.

sgc · 5h ago
When did you run into your problems? Is there a chance they are largely resolved at this point?
Pet_Ant · 6h ago
Yeah, but what is the rush? I mean, 1) what if something critical changes, and 2) I could easily see some setting somewhere being left at "-rc", which causes a bug later.

Frankly, not waiting half a week is a bright orange flag to me.

xattt · 5h ago
Ugh, I’d love to make the leap, but I don’t want the headache of trying to get SR-IOV going again for my integrated Intel graphics.
zozbot234 · 5h ago
Why not run virtio-gpu in the guest?
xattt · 3h ago
Would Plex QuickSync transcoding work?
zamadatix · 3h ago
Windows.
Takennickname · 6h ago
"It’s also possible to install Proxmox VE 9.0 on top of Debian."

Has that always been the case? I have a faint memory of trying once and not being able to with Proxmox 7.x

carlhjerpe · 6h ago
You can even install Proxmox on NixOS now (no official support ofc) though https://github.com/SaumonNet/proxmox-nixos

Which I think is really cool since it means their stuff is "truly open-source" :)

robeastham · 6h ago
I'm pretty sure it's been the case since at least 7.0, as I've done it a few times on hosts such as Scaleway that only offered a Debian base image for my machine.
throw0101c · 5h ago
> Has that always been the case? I have a faint memory of trying once and not being able to with Proxmox 7.x

We did it for 7.x [1] and it worked fine (since upgraded things in-place to 8.x).

[1] https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11...
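Per the wiki page linked above, the on-top-of-Debian install is only a few steps; here is the general shape as done on Debian 12 / PVE 8 (codenames and repo lines change per release, so always follow the wiki for the version you're installing):

```shell
# Add the Proxmox VE no-subscription repository (Debian 12 "bookworm" shown)
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
    > /etc/apt/sources.list.d/pve-install-repo.list

# Fetch the Proxmox release signing key
wget https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg \
    -O /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg

# Upgrade the base system, then pull in Proxmox VE itself
apt update && apt full-upgrade
apt install proxmox-ve postfix open-iscsi
```

After a reboot into the Proxmox kernel, the node behaves like one installed from the official ISO.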

zozbot234 · 5h ago
Somewhat annoyingly, Proxmox relies on a non-Debian kernel for at least some of its features. This definitely made a difference w/ Bookworm (which was on the 6.1 kernel release series), not sure about Trixie (which will be on 6.12).
SirMaster · 6h ago
Yes it's always been the case. I installed Proxmox 3.4 (based on Debian 7) this way originally, and have been upgrading ever since with no issues.
ChocolateGod · 6h ago
Yeah. It's useful when trying to install onto partition setups the built-in installer doesn't support OOTB.

But, things like proxmox-boot-tool may not work.

rcarmo · 6h ago
I've done it a few times--8.x for sure, maybe earlier, but I've now been using it for too long to remember accurately.
oakwhiz · 6h ago
I do it this way every time.
yla92 · 5h ago
> The Proxmox VE mobile interface has been thoroughly reworked, using the new Proxmox widget toolkit powered by the Rust-based Yew framework.

First time hearing about Yew (yew.rs). Is it like writing frontend code in Rust and compiling it to WASM? Is anyone using it (other than the Proxmox folks, of course)?

dylanowen · 5h ago
I'm using it for a browser extension, just because I wanted to code more in Rust. It's great at what it does and has all the same paradigms as React. The best use case, though, would be if all your code is already Rust. If you have a complex UI I'd probably use React and TypeScript.
nativeit · 4h ago
Still use/love Proxmox daily. Congrats to the team on the latest release!
throw0101c · 5h ago
I've heard good things about XCP-ng [1] as well: anyone use both that can lay out the pros/cons of each?

[1] https://en.wikipedia.org/wiki/XCP-ng

nirav72 · 5h ago
I can't speak to the pros and cons of XCP-ng; I've been meaning to try it out. But it feels like there just isn't a large enough community around it yet. At least not like Proxmox, which has seen a surge in usage and popularity after the Broadcom fiasco.
unixhero · 4m ago
Maybe a surge of newcomers, but the community was strong before and irrespective of the fiasco.
nick__m · 4h ago
We tried both at work and they were more or less equivalent, but Proxmox appears to have more momentum behind it. Also, distributed storage in Proxmox is based on Ceph, while XCP-ng uses the more obscure XOSTOR.
throw0101c · 5h ago
Highlights of the release (release notes):

* https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_9.0

krisknez · 2h ago
I would love it if Proxmox had a UI for port forwarding. I hate doing it through the terminal. I like how LXD has a UI for that.
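For anyone stuck doing it by hand in the meantime, the usual NAT-bridge approach is an iptables DNAT rule on the host; a minimal sketch (interface, addresses, and ports are placeholders, and a NAT-configured vmbr0 is assumed):

```shell
# Forward host port 8080 to port 80 inside a guest at 10.0.0.50
iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 8080 \
    -j DNAT --to-destination 10.0.0.50:80

# To persist across reboots, put the rule in a post-up hook
# in /etc/network/interfaces for the bridge
```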