Rocky Linux 10 Will Support RISC-V

131 fork-bomber 65 5/21/2025, 8:40:58 PM rockylinux.org ↗

Comments (65)

audidude · 9h ago
Red Hat announced RISC-V support yesterday with RHEL 10, so this seems rather expected.

https://www.redhat.com/en/blog/red-hat-partners-with-sifive-...


gerdesj · 7h ago
I understand why people use RH and Rocky and even Oracle: the rpm wranglers. However, it's not for me.

My earliest mainstream distro was RH when they did it just for fun (pre IBM) and then I slid slightly sideways towards Mandrake. I started off with Yggdrasil.

I have to do jobs involving RH and co and it's just a bit of a pain dealing with elderly stuff. Tomcat ... OK, you can have one from 1863. There is a really good security backport effort, but why on earth start off with a kernel that is using a walking stick?

Perhaps I am being unkind but for me the RH efforts are (probably) very stable and a bit old.

It's not the distro itself either. The users seem to have snags with updating it.

I (very generally) find that RH shops are the worst at [redacted]

thebeardisred · 3h ago
Hi! I'm sorry this has been your experience. I'm one of the Red Hatters who's been working behind the scenes to get this over the finish line.

My genuine thanks for your earnest expression. The version and ABI guarantee is not for everyone. At the same time, some folks around these parts know that I'm "not an apologist for running an out-of-date kernel". I can assure you that everything shipped in the forthcoming P550 image is fresh: GCC 15, LLVM 19, etc. It's intended for development, to get more software over the finish line for RISC-V.

Conflict of interest statement: I work for Red Hat (Formerly CoreOS), and I'm also the working group lead for the distro integration group within RISE (RISC-V Software Ecosystem).

jabl · 2h ago
> The version and ABI guarantee is not for everyone.

As an aside, that kABI guarantee only goes so far. I work in HPC/AI, and the out-of-tree drivers we use, like the MOFED and Lustre drivers, would break with EVERY SINGLE RHEL minor update (RHEL X.Y -> X.(Y+1)). Using the past tense here because I haven't been using RHEL for this purpose for the past ~5 years, so maybe it has changed since, although I doubt it.

pabs3 · 1h ago
Is anyone trying to get those drivers upstreamed?
bigstrat2003 · 6h ago
> Tomcat ... OK, you can have one from 1863. There is a really good security backport effort, but why on earth start off with a kernel that is using a walking stick?

Because old software is battle-tested and reliable. Moreover, upgrading software is always a pain, so it's best to minimize how often you have to do it. With a support policy of 10 years, you just can't beat RHEL (and derivatives) for stability.

danieldk · 43m ago
Having had to use those kinds of machines often as a user, it is a total pain. For some reason, these enterprise distributions end up being used a lot on scientific and machine learning clusters. You have to deal with 5-10 year old bugs that are solved in every other distribution already and you have to jump through hoops to make modern software run.

For me it always felt like the system administrators externalizing the cost onto the users and developers (which are the same people in many cases).

Despite my dislike of enterprise Linux, Red Hat is doing a lot of awesome work all around the stack. IMO Fedora and the immutable distros are the real showcase of all the things they do.

znpy · 34m ago
You can run whatever you want in containers. You don't even need root permissions. Red Hat's podman can launch containers without the need for root privileges.
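As a sketch of what that looks like (assuming podman is installed and subuid/subgid ranges are configured for your user):

```shell
# Rootless podman: runs as an ordinary user, no daemon, no sudo.
# User namespaces map root inside the container to your own UID on the host.
podman run --rm docker.io/library/alpine:latest id -u
# Inside the container this reports uid 0, while on the host the
# process runs under your unprivileged UID.
```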

> Despite my dislike of enterprise Linux, Red Hat is doing a lot of awesome work all around the stack. IMO Fedora and the immutable distros are the real showcase of all the things they do.

Fedora today is what RHEL will be tomorrow. They quite literally freeze a Fedora release to use as the base for RHEL's next release. If you like Fedora today, you're gonna like RHEL tomorrow.

homebrewer · 43m ago
RHEL's kernel — the actual base of the operating system with the largest effect on stability — is not old. It might have a version number from the middle of the last century, but there are so many massive backports in there that a few years after release it gets closer to the latest mainline than to its original version. Don't read too much into the version number.
tanelpoder · 6h ago
Yep, when you have thousands of different production apps, installed and running directly on Linux - not talking about containers or microservices here - you’ll have very little appetite to upgrade all of them to the latest and shiniest technologies every couple of years. Stability & compatibility with existing skillsets is more important.
rubitxxx10 · 5h ago
I’m old. I used one of the original boxed RH distros. It was cool then. That was almost 30 years ago.

I know they give back to Linux, and I’m thankful for the enterprises that pay for it because of that.

It’s not a bad company, though it’s strange that you could be a great developer and lose your position there if your project gets cut, unless another team picks you up, from what I hear.

But when Linus created Linux, he didn’t do it for money, and RH just seems corporate to me like the Microsoft of Linux, way back before Microsoft had their own Linux. I want my Linux free-range not cooped.

They don’t do anything wrong. They just don’t give the vibe. Anyone asking for money for it doesn’t “get it” to me.

danieldk · 39m ago
> I’m old. I used one of the original boxed RH distros. It was cool then. That was almost 30 years ago.

Does anyone remember glint (the graphical UI for RPM) that was part of Red Hat? Must have been Red Hat 4.x or thereabouts.

znpy · 32m ago
> But when Linus created Linux, he didn’t do it for money, and RH just seems corporate to me like the Microsoft of Linux, way back before Microsoft had their own Linux. I want my Linux free-range not cooped.

You seem to forget that Red Hat has funded a lot of the development of the Linux ecosystem. There would be essentially no modern Linux environment without Red Hat.

copperx · 6h ago
I have to confess that my early experiences with RedHat as a teenager, dealing with the nightmarish RPM dependencies, soured me on the distribution. I went to Debian and then its many descendants and never looked back; APT seemed magical in comparison.

I assume they have a package manager that resolves dependencies well now? Is that what an RPM wrangler is?

homebrewer · 39m ago
This is a very outdated view. dnf runs circles around apt. Try it out, or at least find the man pages on the ole 'net and see what it can do.

Probably the thing I like the most is transactional installation (or upgrades/downgrades/removals) of packages with proper structured history of all package operations (not just a bunch of log records which you have to parse yourself), and the ability to revert any of those transactions with a single command.
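A sketch of that workflow on a Fedora/RHEL-family box (the transaction ID 42 is a hypothetical example):

```shell
# List all package transactions: ID, command line, date, and action summary.
dnf history list
# Inspect exactly which packages a given transaction touched.
dnf history info 42
# Revert that transaction (whether it was an install, upgrade, or removal)
# in a single command.
sudo dnf history undo 42
```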

anticodon · 9m ago
I had the same experience as the OP at the beginning of the century. I built a lot of RPM packages back then, and it was clear that the system of dependencies built into the RPM format itself (not apt or dnf; this is the dpkg level in Debian terms) was poorly thought out and clearly insufficient for any complex system.

I've also migrated to Debian and it felt like a huge step forward.

I'm on Arch now, BTW.

speakspokespok · 2h ago
First impressions really matter. This is also why I went Debian. You shouldn't be getting marked down for saying it.

Many of us were running on 28.8 dial-up. Internet search was not even close to a solved problem. Compiling a new kernel was an overnight or even weekend process. Finding and manually downloading rpm dependencies was slow and hard. You didn't download an ISO; you bought a CD, or soon a DVD, that you could boot off of.

Compare that to Debian's apt-get or Suse's yast/yast2 of the time; both just handled all that for you.

Debian and Suse were fun and fit perfectly into the Web 1.0 world; RedHat was corporate. systemd was pushed by RedHat.

danieldk · 37m ago
> Compiling a new kernel was an overnight or even weekend process

One friend and I had a competition over who could make the smallest kernel configuration still functional on their hardware. I remember that at some point we could build it in ten minutes or so. This was somewhere in the nineties; I was envious of his DX2-50.

> Compare that to Debian's apt-get or Suse's yast/yast2 of the time, both just handled all that for you.

One of the really huge benefits of S.u.S.E. in Europe in the nineties was that you could buy it in nearly every book shop, and it came with an installation/administration book and multiple CD-ROMs with pretty much all the packages. Since many people did not have internet at all, or at most dial-up, it gave you everything you needed for a complete system.

bigfatkitten · 5h ago
rpm dependencies have been a solved problem with yum (and now dnf) for about two decades.
speakspokespok · 53m ago
Yum was borrowed from Yellow Dog Linux.
danieldk · 28m ago
To be pedantic, yum was not from Yellow Dog; it is the Yellowdog Updater, Modified, after all. It was a rewrite of the Yellow Dog Updater by people at Duke University. (Yellow Dog Linux was based on Red Hat.)

There was a lot of competition around package managers back then. For RPM, there were also urpmi, apt-rpm, etc.

znpy · 30m ago
Which in turn was based on RHEL/CentOS: https://en.wikipedia.org/wiki/Yellow_Dog_Linux
znpy · 36m ago
> I have to do jobs involving RH and co and it's just a bit of a pain dealing with elderly stuff. Tomcat ... OK, you can have one from 1863.

It's 2025, you can run whatever version you need in containers.

worthless-trash · 2h ago
Disclaimer: I'm very involved in the kernel part of this for $company.

The RHEL kernels themselves do see many improvements over time, the code that you'll see when the product goes end of life is considerably updated compared to the original version string that you see in the package name / uname -a. There are customer and partner feature requests, cve fixes and general bug fixes that go in almost every day.

The first problem of 'running old kernels' is exacerbated by the kernel version string not matching code reality.

The second problem is that many companies don't start moving to newer RHELs when they're out; they often stick to current -1, which is a bit of a problem because by the time they roll out a release, n-1 is likely entering its first stage of "maintenance", so fixes are more difficult to include. If you can think of a solution to this, I'm all ears.

The original reason for not continually shipping newer kernel versions is to ensure stability by providing a stable, whitelisted kABI that third-party vendors can build on top of. This is not something that upstream and many OS vendors support, but with the "promise" of not breaking kABI, updates should happen smoothly without third parties needing to update their drivers.

The kabi maintenance happens behind the scenes while ensuring that CVE fixes and new features are delivered during the relevant stage of the product lifecycle.

The kernel version is usually very close to the previous release; in the case of RHEL 10 it's 6.13, and already, with zero-day fixes, it has parts of newer code backported, tested, etc. in the first errata release.

The security landscape is changing; maybe at some point the Red Hat business unit will wake up and decide to ship a rolling, better-tested kernel (Red Hat DOES have an internal/tested https://cki-project.gitlab.io/kernel-ark/ which is functionally this). Shipping it has the downside that third-party vendors would not have the same kABI stability guarantees that RHEL currently provides, which would muddy the waters of RHEL's value and confuse people about which kernel they should be running.

I believe there are two customer types: ones who would love to see this and get the newest features for their full lifecycle, and ones who would hate it, because the churn and change would be too much, introducing risk and problems for them down the line.

It's hard, and likely impossible, to keep everyone happy.

jabl · 2h ago
As I mentioned in another comment on this thread:

> As an aside, that kABI guarantee only goes so far. I work in HPC/AI, and the out-of-tree drivers we use, like the MOFED and Lustre drivers, would break with EVERY SINGLE RHEL minor update (RHEL X.Y -> X.(Y+1)). Using the past tense here because I haven't been using RHEL for this purpose for the past ~5 years, so maybe it has changed since, although I doubt it.

I'm not sure what the underlying problem here is: is the kABI guarantee generally worthless, or is it just that the MOFED and Lustre drivers need features not covered by the "kABI stability guarantee"?

znpy · 11m ago
What if, more than a rolling kernel, we get a new kernel every two years or so?

Or maybe one in the middle of the (expected) lifetime of the major release ?

Just thinking out loud, but I acknowledge that maintaining a kernel version is no small task (probably takes a lot of engineering time)

pabs3 · 1h ago
How much of the RHEL kernels is stuff that isn't in Linux mainline or LTS?
dgfitz · 7h ago
I’d rather use Red Hat than Ubuntu. I was handed a machine the other week with Ubuntu 23.10 on it, the OS supplied by a vendor with extensive customization. Apt was dead. Fuck that. At least RH doesn’t kill their repos.
gerdesj · 7h ago
I've got Ubuntu 22.04 machines lying around that still update because they are LTS. Ubuntu has a well-publicised policy for releases, and you will obviously have read it.

Try do-release-upgrade.

You also mention "OS supplied from a vendor with extensive customization. Apt was dead."

How on earth is that Ubuntu's problem?

hshdhdhj4444 · 5h ago
Isn’t Ubuntu basically killing apt?

My Ubuntu became unusable because it kept insisting on installing a snap version of Firefox breaking a whole bunch of workflows.

I do want to try a RH based OS (maybe Fedora) so they don’t keep changing things on me, but just where I am in life right now I don’t have the time/energy to do so, so for now I’m relying on my Mac.

Hopefully I can try a new Linux distro in a few months, because I can’t figure it out yet, but something about macOS simply doesn’t work for me from a getting work done perspective.

nine_k · 5h ago
I've heard many good things about Pop OS. It's like Ubuntu done right, and it does have an apt package for Firefox.

(I run Void myself, and stay merrily away from all these complications.)

tgmatt · 5h ago
I can highly recommend it. Have been using it for a couple years or so now, haven't had any serious issues.
ChocolateGod · 2h ago
> It's like Ubuntu done right

But it is Ubuntu?

mulmen · 5h ago
I have been using Fedora Sway as my desktop operating system for a couple years now and I am very happy. It’s definitely worth a try. I have access to flatpak when I need it for apps like steam but the system is still managed by rpm/dnf. There’s of course some SELinux pain but that’s typically my fault for holding it wrong. Overall very impressed.
dgfitz · 4h ago
I cannot update the OS per the contract.

It’s Ubuntu’s problem because they decide they’re smarter than their users and nuke their repos.

Fuck all of that.

viraptor · 1h ago
The vendor should provide you with updates to the new version, or use LTS. There's absolutely nothing bad here on Ubuntu's part.

Your contract is with the vendor if you have one. Unless you have a contract with Canonical and then you can ask them for support.

dismalaf · 4h ago
It's well publicized that they don't maintain support for old, non-LTS distros. They literally delivered what they promised. Could have been avoided by using an LTS distro.

Fedora does the same. No corporate vendor supports 6 month cycle distros for more than a year. RHEL releases come super slowly, for example.

dgfitz · 4h ago
I didn’t have a say in the matter of OS choice, and it doesn’t matter how well-publicized Ubuntu’s stance is; it’s wrong. I don’t care if it’s not an LTS: keep the fucking repos open and advertise that you’re using an insecure OS. Let me, the user, make that choice. Don’t pretend I’m stupid and need some kind of benevolent dictator to make choices for me, or handicap me because they’re smarter than me. They’re not.
eddythompson80 · 1h ago
That’s exactly how it works. If you want to use an unsupported, insecure OS, you just have to opt into it.

You opt into it by changing your repositories to the https://old-releases.ubuntu.com archive mirror. You can install and use Ubuntu 6.10 if you want.
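Concretely, the opt-in is just rewriting the mirror hostnames in apt's source list. A sketch, shown here on a demo copy (on a real system you would edit /etc/apt/sources.list with sudo and run `apt-get update` afterwards):

```shell
# Sample sources.list for an EOL release (Ubuntu 23.10 "mantic").
cat > sources.list.demo <<'EOF'
deb http://archive.ubuntu.com/ubuntu mantic main restricted universe
deb http://security.ubuntu.com/ubuntu mantic-security main restricted universe
EOF

# Repoint every ubuntu.com mirror at the old-releases archive.
sed -i 's|http://[a-z.]*\.ubuntu\.com/ubuntu|http://old-releases.ubuntu.com/ubuntu|' sources.list.demo
cat sources.list.demo
```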

dismalaf · 3h ago
"Keeping the repos open" has a cost on their part. Servers aren't free. If you think you're smart then mirror your own repos.
dgfitz · 3h ago
Really? How much more can it cost to keep their LTS and non-LTS repos open at the same time?

C’mon, that’s such a weak argument I think you know it.

dismalaf · 2h ago
If there's no cost in time, effort or equipment then mirror it yourself. It's easy, right?

Or just use an LTS distro like literally every single other organization that depends on Ubuntu for their business SMH. Like, it's absurd to even think about...

unmole · 2h ago
Sounds made up.
publicmail · 5h ago
Maybe a dumb question, but how do non-x86 boards normally boot Linux images in a generic way? When I was in the embedded space, our boards all relied on very specific device tree blobs. Is the same strategy used for these, or does it use ACPI or something?
Arnavion · 3h ago
All RISC-V consumer boards running Linux also use DT. RISC-V is also working on getting ACPI but primarily for the sake of servers, just like with ARM where ACPI is primarily used for servers (ARM SBBR / ServerReady).

ARM Windows laptops only use ACPI because Windows has no interest in DTs, but under Linux these devices are still booted using DT. I don't know for sure, but the usual explanation is that these ACPI implementations are hacked up by the manufacturer to be just good enough to work with Windows, so supporting them on Linux requires more effort than just writing up the DT.

ChocolateGod · 2h ago
> so supporting them on Linux requires more effort than just writing up the DT.

More effort than producing unique images for every board?

jabl · 2h ago
The x86 platform uses a plethora of platform services under different names, like UEFI/ACPI/PCI/(ISA plug-and-play back in the day)/APIC (the programmable interrupt controller and evolved variants thereof)/etc., that allow the generic kernel to discover what's available when it boots and load the correct drivers.

ARM servers do the same with SBSA (a spec that mandates things like UEFI and ACPI support). I think there's some effort in RISC-V land to do the same, also using UEFI and ACPI.

skywal_l · 1h ago
Maybe I'm wrong, but isn't that what SBI[0] is for?

[0] Supervisor Binary Interface

beeflet · 4h ago
I think windows ARM laptops use UEFI?
Arnavion · 48m ago
publicmail was asking about ACPI vs DT, not UEFI. Using UEFI and ACPI/DT are orthogonal; DT-using devices can also boot from UEFI if the firmware provides it. See https://github.com/TravMurav/dtbloader for example.
ChocolateGod · 2h ago
They do; Windows Phone even used UEFI (not sure it was completely compliant) back in the day.
arminiusreturns · 8h ago
I'm so looking forward to a RISC future!
agarren · 7h ago
Ditto! I haven’t found any hardware that’s daily-driver ready, but I keep looking.

https://store.deepcomputing.io/products/dc-roma-ai-pc-risc-v...

I especially like the idea of getting a Framework version, in case I want to swap in a different mainboard. By their own admission, the RISC-V board is targeting developers and not ready for prime time. Also, coming from the US, not sure how the tariff thing will work out…

0x000xca0xfe · 5h ago
The RISC-V software ecosystem is really good already. It feels like everybody is just waiting for high-performance CPU cores now. Sadly, silicon cannot be built and released within seconds like software...

Better to buy an SBC for now (I can recommend the OrangePi RV2 - it's fantastic!) and wait until actual desktop/laptop-class hardware is ready :)

bobmcnamara · 4h ago
I miss my RISC past.
mrbluecoat · 8h ago
Better title: Rocky Linux 10 Will Support Two RISC-V Boards
NewJazz · 8h ago
For a distro, just building packages for an architecture is notable support-wise. Those with custom firmware and kernels can pair them with the Rocky 10 userspace.
rjsw · 8h ago
They could easily support the Pine64 Star64 board as well, the VisionFive2 build of u-boot works on the Star64 too.
nine_k · 8h ago
Even to support one board, they'd need the whole build/testing infrastructure for RISC-V. Likely adding more boards is going to be easy now, and any architecture-specific regressions easier to spot and fix in a timely manner.
rob_c · 7h ago
Even better title: Rocky will take the RHEL work, rebrand it, sell the boards at a discount from China, claim a win, and claim they're being attacked by IBM.
felbane · 6h ago
Man some of y'all really have beef with Rocky...
dismalaf · 4h ago
Because their model is the absolute laziest possible one.