"Unfortunately for John, the branches made a pact with Satan and quantum mechanics [...] In exchange for their last remaining bits of entropy, the branches cast evil spells on future generations of processors. Those evil spells had names like “scaling-induced voltage leaks” and “increasing levels of waste heat” [...] the branches, those vanquished foes from long ago, would have the last laugh."
> The Mossad is not intimidated by the fact that you
employ https://. If the Mossad wants your data, they’re going to
use a drone to replace your cellphone with a piece of uranium
that’s shaped like a cellphone, and when you die of tumors filled
with tumors, […] they’re going to buy all of your stuff
at your estate sale so that they can directly look at the photos
of your vacation instead of reading your insipid emails about
them.
wood_spirit · 3h ago
So this is where they got the pager and walkie talkie ideas from
bee_rider · 3h ago
The bit about vast matrices shows some silver lining though; it turns out John’s little brother figured out how to teach those matrices to talk like a person.
yvdriess · 1h ago
Yes but those transistors moved to greener pastures.
Paper: https://comsec.ethz.ch/wp-content/files/bprc_sec25.pdf
> [...] the contents of the entire memory to be read over time, explains Rüegge. “We can trigger the error repeatedly and achieve a readout speed of over 5000 bytes per second.” In the event of an attack, therefore, it is only a matter of time before the information in the entire CPU memory falls into the wrong hands.
formerly_proven · 5h ago
Prepare for another dive maneuver in the benchmarks department I guess.
cenamus · 5h ago
And if not, why did they introduce severe bugs for a tiny performance improvement?
umanwizard · 1h ago
A modern processor pipeline is dozens of cycles deep. Without branch prediction, we would need to know the next instruction at all times before beginning to fetch it. So we couldn’t begin fetching anything until the current instruction is decoded and we know it’s not a branch or jump. Even more seriously, if it is a branch, we would need to stall the pipeline and not do anything until the instruction finishes executing and we know whether it’s taken or not (possibly dozens of cycles later, or hundreds if it depends on a memory access). Stalling for so many cycles on every branch is totally incompatible with any kind of modern performance. If you want a processor that works this way, buy a microcontroller.
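A toy way to feel that cost (a hedged sketch, not from the article: assumes x86-64 with gcc or clang, POSIX clock_gettime, and note the optimizer may turn the branch into a conditional move and hide the effect):

    /* branch_cost.c - same loop, same work, only the predictability of one
       branch changes. Build: cc -O2 branch_cost.c && ./a.out */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1 << 24)

    static double run(const int *v)
    {
        struct timespec a, b;
        long long sum = 0;
        clock_gettime(CLOCK_MONOTONIC, &a);
        for (int i = 0; i < N; i++)
            if (v[i] >= 128)          /* the branch under test */
                sum += v[i];
        clock_gettime(CLOCK_MONOTONIC, &b);
        printf("(sum=%lld) ", sum);   /* keep the compiler from deleting the loop */
        return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
    }

    int main(void)
    {
        int *v = malloc(N * sizeof *v);
        for (int i = 0; i < N; i++) v[i] = rand() % 256;           /* unpredictable */
        printf("random data:      %.3fs\n", run(v));

        for (int i = 0; i < N; i++) v[i] = (i < N / 2) ? 0 : 255;  /* predictable */
        printf("partitioned data: %.3fs\n", run(v));
        free(v);
        return 0;
    }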
bloppe · 5h ago
It's not tiny. Speculative execution usually makes code run 10-50% faster, depending on how many branches there are
bee_rider · 5h ago
Yeah… folks who think this is just some easy to avoid thing should go look around and find the processor without branch prediction that they want to use.
On the bright side, they will get to enjoy a much better music scene, because they’ll be visiting the 90’s.
titzer · 1h ago
That's a vast underestimate. Putting in lfence before every branch is on the order of 10X slowdown.
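For anyone wondering what "lfence before every branch" looks like in source, here is a hedged sketch using the _mm_lfence() intrinsic (table and read_checked are made-up names; the placement shown is the usual Spectre-v1 style, between the bounds check and the dependent load, and sprinkling this around every branch is where the ~10x comes from):

    #include <immintrin.h>   /* _mm_lfence */
    #include <stddef.h>

    int table[256];

    int read_checked(size_t idx, size_t len)
    {
        if (idx < len) {
            _mm_lfence();    /* speculation barrier: the load below won't even
                                start until the bounds check has actually resolved */
            return table[idx];
        }
        return -1;
    }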
trebligdivad · 5h ago
Thanks! It would be great if someone could update the title URL to that blog post; the press release is worse than useless.
- Predictor updates may be deferred until sometime after a branch retires. Makes sense, otherwise I guess you'd expect that branches would take longer to retire!
- Dispatch-serializing instructions don't stall the pipeline for pending updates to predictor state. Also makes sense, considering you've already made a distinction between "committing the branch instruction" and "committing the result of the prediction".
- Privilege-changing instructions don't stall the pipeline for pending updates either. Also makes sense, but only if you can guarantee that the privilege level is consistent between making/committing a prediction. Otherwise, you might be creating a situation where predictions generated by code in one privilege level may be committed to state used in a different one?
Maybe this is hard because "current privilege level" is not a single unambiguous thing in the pipeline?
mettamage · 5h ago
Good to see Kaveh Razavi, he used to teach at my uni, the Vrije Universiteit in Amsterdam :) The course Hardware Security was crazy cool and delved into stuff like this.
markus_zhang · 5h ago
I checked out this course (and another one from Vrije about malware) a couple of years ago, back then there was very little public info about the courses.
Do you know if there is any official recording or notes online?
Thanks in advance.
thijsr · 2h ago
As far as I am aware, the course material is not public. Practical assignments are an integral part of the courses given by the VUSEC group, and unfortunately those are difficult to do remotely without the course infrastructure.
The Binary and Malware Analysis course that you mentioned builds on top of the book "Practical Binary Analysis" by Dennis Andriesse, so you could grab a copy of that if you are interested.
mettamage · 2h ago
Ah yea, he gave a guest lecture on how he hacked a botnet! More info here: https://krebsonsecurity.com/2014/06/operation-tovar-targets-... (it's been a while back :)
No, but last time I checked you can be a contracted student for 1200 euros.
If I knew what I was getting into at the time, I'd do it. I did pay extra, but in my case it was the low Dutch rate, so for me it was 400 euros to follow Hardware Security, since I had already graduated.
But I can give a rough outline of what they taught. It has been years ago but here you go.
Hardware security:
* Flush/Reload (a rough C sketch of this one follows at the end of this comment)
* Cache eviction
* Spectre
* Rowhammer
* Implement research paper
* Read all kinds of research papers of our choosing (just use VUSEC as your seed and you'll be good to go)
Binary & Malware Analysis:
* Using IDA Pro to find the exact assembly line where the unpacker software we had to analyze unpacked its software fully into memory. Also we had to disable GDB debug protections. Something to do with ptrace and nopping some instructions out, if I recall correctly (look, I only low level programmed in my security courses and it was years ago - I'm a bit flabbergasted I remember the rough course outlines relatively well).
* Being able to dump the unpacked binary program from memory onto disk. Understanding page alignment was rough. Because even if you got it, there were a few gotcha's. I've looked at so many hexdumps it was insane.
* Taint analysis: watching user input "taint" other variables
* Instrumenting a binary with Intel PIN
* Cracking some program with Triton. I think Triton helped to instrument your binary with the help of Intel PIN by putting certain things (like xor's) into an SMT equation or something, and you had this SMT/Z3 solver thingy and then you cracked it. I don't remember exactly; I got a 6 out of 10 for this assignment, had a hard time cracking the real thing.
Computer & Network Security:
* Web security: think XSS, CSRF, SQLi and reflected SQLi
* Application security: see binary and malware analysis
* Network security: we had to create our own packet sniffer and we enacted a Kevin Mitnick attack (it's an old school one) where we had to spoof our IP addresses and figure out the algorithm to create TCP packet numbers - all in the blind without feedback. Kevin in '97 I believe attacked the San Diego supercomputer (might be wrong about the details here). He noticed that the supercomputer S trusted a specific computer T. So the assignment was to spoof the address of T and pretend we were sending packets from that location. I think... writing this packet sniffer was my first C program. My prof thought I was crazy that this was my first time writing C. I was, but I also had 80 hours of time and motivation per week. So that helped.
* Finding vulnerabilities in C programs. I remember: stack overflows, heap overflows and format strings bugs.
-----
For binary & malware analysis + computer & network security I highly recommend hackthebox.eu
For hardware security, I haven't seen an alternative. To be fair, I'm not looking. I like to dive deep into security for a few months out of the year and then I can't stand it for a while.
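Since Flush+Reload heads that hardware security list, here is a rough, self-contained sketch of the primitive (hypothetical file name, x86-64 only; it only times a cached vs. flushed access within one process, so it shows the timing signal rather than an actual cross-process attack):

    /* flush_reload.c - Build: cc -O2 flush_reload.c && ./a.out */
    #include <stdio.h>
    #include <stdint.h>
    #include <x86intrin.h>   /* _mm_clflush, _mm_mfence, __rdtscp */

    static uint8_t probe[4096];

    static uint64_t time_access(volatile uint8_t *p)
    {
        unsigned aux;
        _mm_mfence();
        uint64_t t0 = __rdtscp(&aux);
        (void)*p;                         /* the access being timed */
        uint64_t t1 = __rdtscp(&aux);
        _mm_mfence();
        return t1 - t0;
    }

    int main(void)
    {
        probe[0] = 1;                     /* warm the cache line */
        printf("cached access:  %llu cycles\n",
               (unsigned long long)time_access(&probe[0]));

        _mm_clflush(&probe[0]);           /* FLUSH */
        _mm_mfence();
        printf("flushed access: %llu cycles\n",   /* RELOAD: slow = nobody touched it */
               (unsigned long long)time_access(&probe[0]));
        return 0;
    }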
If the CPU branch predictor had bits of information readily available to check buffer boundaries and the privilege level of the code, all of this would be much easier to prevent. But apparently that will only happen when we pry the void* out of the cold C programmers' hands and start enriching our pointers with vital information.
ActorNightly · 3h ago
Or people could just understand the scope of the issue better, and realize that just because something has a vulnerability doesn't mean there is a direct line to an attack.
In the case of speculative execution, you need an insane amount of prep to use that exploit to actually do something. The only real way this could ever be used is if you have direct access to the computer where you can run low level code. It's not like you can write JS code with this that runs on browsers that lets you leak arbitrary secrets.
And in the case of systems that are valuable enough to exploit with a risk of a dedicated private or state funded group doing the necessary research and targeting, there should be a system that doesn't allow unauthorized arbitrary code to run in the first place.
I personally disable all the mitigations because performance boost is actually noticeable.
vlovich123 · 2h ago
> Its not like you can write JS code with this that runs on browsers that lets you leak arbitrary secrets
That's precisely what Spectre and Meltdown were though. It's unclear whether this attack would work in modern browsers but they did reenable SharedArrayBuffer & it's unclear if the existing mitigations for Spectre/Meltdown stymie this attack.
> I personally disable all the mitigations because performance boost is actually noticeable.
Congratulations, you are probably susceptible to JS code reading crypto keys on your machine.
nine_k · 1h ago
Disabling some mitigations makes sense for an internal box that does not run arbitrary code from the internet, like a build server, or a load balancer, or maybe even a stateless API-serving box, as long as it's not a VM on a physical machine shared with other tenants.
anyfoo · 40m ago
You run "arbitrary code from the internet" as soon as you use a web browser with JS enabled.
dwattttt · 3m ago
Or with JS disabled. HTML isn't as expressive, but it's still "arbitrary code from the internet"
baobun · 10m ago
Which you wouldn't do on an internal load balancer or database server, right?
anyfoo · 37m ago
> Or people could just understand the scope of the issue better
Do you understand the scope of the issue? Do you know that this couldn't personally affect you in a dragnet (so, not targeted, but spread out, think opportunistic ransomware) attack?
Because this statement of yours:
> Its not like you can write JS code with this that runs on browsers that lets you leak arbitrary secrets.
was not true for Spectre. The original spectre paper notoriously mentions JS as an attack vector.
If you truly disable all mitigations (assuming CPU and OS allow you to do so), you will reopen that hole.
So:
> The only real way this could ever be used is if you have direct access to the computer where you can run low level code.
I'm a low level kernel engineer, and I don't know this to be true in the general case. JITs, e.g. the JavaScript ones, also generate "low level code". How do you know that this isn't sufficient?
quotemstr · 4h ago
You want CHERI.
ajross · 5h ago
I don't see how you think that will help? It's not about software abstraction, it's about hardware. Changing the "pointer" does nothing to the transistors.
Doing what you want would essentially require a hardware architecture where every load/store has to go through some kind of "augmented address" that stores boundary information.
Which is to say, you're asking for 80286 segmentation. We had that, and it didn't do what you want. The reason is that those segment descriptors need to be loaded by software that doesn't mess things up, and software does mess things up: a descriptor is "just a pointer" as far as software is concerned, amenable to the same mistakes.
rini17 · 3h ago
286 far pointers were used sparingly, to save precious memory. Now we don't have any such problem, and there are still unused bits in pointers even on the largest 64-bit systems, bits that might perhaps be repurposed. With virtual memory, there are already all kinds of hardware-supported address mappings and translations, and an IOMMU, so adding more transistors isn't an issue. The issue is purely cultural, as you have just shown: people can't imagine it.
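A hedged sketch of what "repurposing unused pointer bits" can look like today in plain software (tag_ptr/untag_ptr are invented names; on current x86-64/Linux, user-space addresses fit in the low bits, so the top byte is free as long as it is stripped before every dereference, which is exactly the step that CHERI- or MTE-style hardware would enforce for you):

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define TAG_SHIFT 56
    #define ADDR_MASK ((1ULL << TAG_SHIFT) - 1)

    static void *tag_ptr(void *p, uint8_t tag)        /* stash metadata up top */
    {
        uint64_t bits = ((uint64_t)(uintptr_t)p & ADDR_MASK)
                        | ((uint64_t)tag << TAG_SHIFT);
        return (void *)(uintptr_t)bits;
    }

    static void *untag_ptr(void *p)                   /* must run before deref */
    {
        return (void *)(uintptr_t)((uint64_t)(uintptr_t)p & ADDR_MASK);
    }

    static uint8_t ptr_tag(void *p)
    {
        return (uint8_t)((uint64_t)(uintptr_t)p >> TAG_SHIFT);
    }

    int main(void)
    {
        int *buf = malloc(16 * sizeof *buf);
        int *tagged = tag_ptr(buf, 0x5A);   /* 0x5A: a made-up "bounds class" id */
        assert(ptr_tag(tagged) == 0x5A);

        int *usable = untag_ptr(tagged);    /* the CPU itself checks nothing here */
        usable[0] = 42;
        printf("%d\n", usable[0]);
        free(buf);
        return 0;
    }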
ajross · 2h ago
That's misunderstanding the hardware. All memory access on a 286 was through a segment descriptor, every access done in protected mode was checked against the segment limit. Every single one.
A "far pointer" was, again, a *software* concept where you could tell the compiler that this particular pointer needed to use a different descriptor than the one the toolchain assumed (by convention!) was loaded in DS or SS.
nine_k · 3h ago
Why stop at 80286, consider going back to the ideas of iAPX432, but with modern silicon tech and the ability to spend a few million transistors here and there.
(CHERI already exists on ARM and RISC-V though.)
nottorp · 5h ago
I suppose a CPU that only runs Rust p-code is what the OP is dreaming about...
ajross · 5h ago
Generated rust "p-code" would presumably be isomorphic to LLVM IR, which doesn't have this behavior either and would be subject to the same exploits.
Again, it's just not a software problem. In the real world we have hardware that exposes "memory" to running instructions as a linear array of numbers with sequential addresses. As long as that's how it works, you can demand an out of bounds address (because the "bounds" are a semantic thing and not a hardware thing).
It is possible to change that basic design principle (again, x86 segmentation being a good example), but it's a whole lot more involved than just "Rust Will Fix All The Things".
nottorp · 4h ago
Holy... I need to stop making fun of Rust (*). I keep getting misinterpreted.
(*) ... although I don't think I can abstain ...
smartmic · 4h ago
> Closing these sorts of gaps requires a special update to the processor’s microcode. This can be done via a BIOS or operating system update and should therefore be installed on our PCs in one of the latest cumulative updates from Windows.
Why mention only Windows, what about Linux users?
matja · 4h ago
The Linux kernel has had microcode loading support (`CONFIG_MICROCODE` / `CONFIG_MICROCODE_INTEL`) for many years, but it does require that Intel release the microcode files so that distribution maintainers can update their packages; then it should be included in a system update.
Not expert enough to know what to look for to see if these particular mitigations are present yet.
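One thing you can look at on Linux is the kernel's own mitigation summary under /sys/devices/system/cpu/vulnerabilities/ (equivalent to running grep over those files). Whether this particular issue gets its own entry there is up to future kernels, so treat this as a general status check, sketched here in C:

    #include <dirent.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *dirpath = "/sys/devices/system/cpu/vulnerabilities";
        DIR *d = opendir(dirpath);
        if (!d) { perror(dirpath); return 1; }

        struct dirent *e;
        while ((e = readdir(d)) != NULL) {
            if (e->d_name[0] == '.')
                continue;
            char path[512], line[256] = "";
            snprintf(path, sizeof path, "%s/%s", dirpath, e->d_name);
            FILE *f = fopen(path, "r");
            if (f) {
                if (fgets(line, sizeof line, f))
                    line[strcspn(line, "\n")] = '\0';
                fclose(f);
            }
            printf("%-28s %s\n", e->d_name, line);   /* e.g. "spectre_v2  Mitigation: ..." */
        }
        closedir(d);
        return 0;
    }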
Alcatros552 · 1h ago
It seems a lot of people are not aware that this is a newer generation of branch predictor issue. You can see that Intel's eIBRS doesn't mitigate the problem, which leaves those CPUs susceptible to attack. To limit the damage, the issue was only disclosed after Intel had been informed and most systems had been patched in the meantime.
rtkwe · 5h ago
I wonder if there are similar gaps in AMD hardware? It seems like speculative execution is simply an extremely hard-to-patch vulnerability in a shared processor space, so I wonder how AMD has avoided it.
tmoertel · 5h ago
According to the authors' blog post:
> Does Branch Privilege Injection affect non-Intel CPUs?
> No. Our analysis has not found any issues on the evaluated AMD and ARM systems.
Source: https://comsec.ethz.ch/research/microarch/branch-privilege-i...
The short of it is that AMD haven’t “avoided it”. Speculative execution side channels aren’t one vulnerability but rather a whole family of vulnerabilities. This particular one is (apparently) Intel-only, same as Meltdown was, but AMD was also vulnerable to the original Spectre.
bee_rider · 5h ago
Pedantically, speculative execution isn’t the vulnerability, it is a necessary mechanism for every high-performance CPU nowadays (where “nowadays” started, like, around the turn of the century). However, bugs and vulnerabilities in speculative execution engines are very widespread because they are complicated.
There are probably similar bugs in AMD and ARM, I mean how long did these bugs sit undiscovered in Intel, right?
Unfortunately the only real fix is to recognize that you can’t isolate code running on a modern system, which would be devastating to some really rich companies’ business models.
quotemstr · 4h ago
The solution to this particular vulnerability is intuitive to me: snapshot the current privilege level when we enqueue a branch predictor update and carry that snapshot along with the update itself as it flows through the processor's internal buffers. Same problem you might have in software and the same solution, yes?
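A software analogy of that proposal, as a hedged sketch (struct and function names are invented; this is a thought experiment in C, not a claim about how any real pipeline is built): each queued predictor update carries the privilege level that was current when it was generated, and the update is dropped if the level no longer matches when it is applied.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    enum priv { PRIV_KERNEL = 0, PRIV_USER = 3 };

    struct predictor_update {
        uint64_t  branch_pc;
        bool      taken;
        enum priv priv_at_predict;    /* snapshot taken when the update was queued */
    };

    static enum priv current_priv = PRIV_USER;

    static void apply_update(const struct predictor_update *u)
    {
        if (u->priv_at_predict != current_priv) {
            /* the prediction crossed a privilege switch while in flight:
               don't let user-mode history train kernel-mode state (or vice versa) */
            printf("drop update for pc=%#llx (privilege changed)\n",
                   (unsigned long long)u->branch_pc);
            return;
        }
        printf("train predictor: pc=%#llx taken=%d\n",
               (unsigned long long)u->branch_pc, u->taken);
    }

    int main(void)
    {
        struct predictor_update u = { 0x401000, true, current_priv };
        current_priv = PRIV_KERNEL;   /* a syscall happens before the update lands */
        apply_update(&u);             /* gets dropped */
        return 0;
    }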
wbl · 1h ago
That actually doesn't work. The evaluation of the branch condition may be at some point far away from where the privilege update is recognized and executed. There is no current state to update, it's only recognized in retrospect what the state was. And carrying along data is pricey in a CPU: the instruction pointer isn't even available because of this.
You could say we only update the predictor at retirement to solve this. But that can get a little dicey also: the retirement queue would have to track this locally, and retirement frees up registers, so you'd better be sure it's not the one your jump needs to read. Doable but slightly harder than you might think.
Just to make sure I got this right, at this point in time there are patches out for all major operating systems that can mitigate this/apply relevant microcode to mitigate it?
The28thDuck · 2h ago
Haven’t we been here before? It seems like it’s very similar to the branch prediction exploits of the late 2010s. Is there something particularly novel about this class of exploits?
mettamage · 1h ago
Probably, I haven't had time to delve into the article yet. But ever since I first learned about them, I got the hunch that they'd never fully go away.
Then people say "no that's not possible, we got security in place."
So then the researchers showcase a new demo where they use their existing knowledge with the same issue (i.e. scaling-induced voltage leaks).
I suspect this will go on and on for decades to come.
margorczynski · 5h ago
I wonder if there's any way to recover for Intel. They don't have anything worthwhile on the market, R&D takes a lot of time and their foundries are a constant source of losses as they're inferior compared to the competition.
On top of that x86 seems to be pushed out more and more by ARM hardware and now increasingly RISC-V from China. But of course there's the US chip angle - will the US, especially after the problems during Covid, let a key manufacturer like Intel bite the dust?
chneu · 5h ago
Intel really isn't in as much trouble as tech blogs like to act.
It's not great but lol the sensationalism is hilarious.
Remember, gamers only make up a few percent of the users of what Intel makes. But that's what you hear about the most. One or two data center orders are larger than all the gaming CPUs Intel will sell in a year. And Intel is still doing fine in the data center market.
Add in that Intel still dominates the business laptop market which is, again, larger than the gamer market by a pretty wide margin.
WaxProlix · 4h ago
You're right about gamers, but other verticals are looking bad for Intel, too.
The two areas you mention (data center, integrated OEM/mobile) are the two that are most supply chain and business-lead dependent. They center around reliable deliveries of capable products at scale, hardware certifications, IT department training, and organizational bureaucracy that Intel has had captured for a long time.
But!
Data center specifically is getting hit hard from AMD in the x86 world and ARM on the other side. AWS's move to Graviton alone represents a massive dip in Intel market share, and it's not the only game in town.
Apple is continuing to succeed in the professional workspace, and AMD's share of laptop and OEM contracts just keeps going up. Once an IT department or their chosen vendor has retooled to support non-Intel, that toothpaste is not going back into the tube - not fully, at least.
For both of these, AMD's improvement in reliability and delivery at scale will be bearing fruit for the next decade (at Intel's expense), and the mindshare, which gamers and tech sensationalism are indicators for, has already shifted the market away from an Intel-dominated world to a much more competitive one. Intel will have to truly compete in that market. Intel has stayed competitive in a price-to-performance sense by undermining their own bottom line, but that lever only has so far it can be pulled.
So I'm not super bullish on Intel, sensationalism aside. They have a ton of momentum, but will need to make use of it ASAP, and they haven't shown an ability to do that so far.
layer8 · 5h ago
Intel still has well over 70% x86 market share. They have a long runway. Arm had only 15% datacenter market share last year, and still hasn’t made much headway in the Windows market.
freeone3000 · 4h ago
Arm is making huge gains though — five years ago they had less than 5%. The future of x86 is not bright.
baq · 4h ago
x86 vs arm doesn’t matter. Hardware matters. Intel needs to make the best cpu again. It can be x86, it can be arm, it can be risc-v.
adgjlsfhk1 · 3h ago
Arm vs x86 matters a lot for Intel since they don't make Arm CPUs. x86 used to be a massive moat for Intel/AMD. The rise of ARM market share means that that moat is draining. 10 years ago, AMD and IBM were the only competition (and they were both in rough shape). Now Intel is competing against AMD, NVidia, Qualcomm, Amazon, and Arm. Even if Intel can make the best CPU again, they no longer can charge monopoly prices for it. If you have a 10% faster CPU, that only lets you charge a small premium over everyone else.
emkoemko · 5h ago
Didn't I read something about Apple, Nvidia and other companies looking to use their foundries? Why would they do that if it's inferior, or was that something else?
greenavocado · 5h ago
Because there's nothing else in America
porridgeraisin · 5h ago
I guess it depends on your expectations. Will they be fine as a company? I think yes. Will they be as prominent as they were at different points in their history? I think not.
Product aside, from a shareholder/business point of view (I like to think of this separately these days as financial performance is becoming less and less reflective of the end product) I think they are too big to fail.
tannhaeuser · 5h ago
> All intel processors since the 9th generation (Coffee Lake Refresh) are affected by Branch Privilege Injection. However, we have observed predictions bypassing the Indirect Branch Prediction Barrier (IBPB) on processors as far back as 7th generation (Kaby Lake).
From that piece of text on the blog, I don't quite understand whether Kaby Lake CPUs are affected or not.
chrisweekly · 4h ago
I interpret it as including Kaby Lake.
fwip · 4h ago
At least some Kaby Lake CPUs are affected, but they can't say for sure that all of them are.
lostmsu · 1h ago
No, I think they are saying that they can only demonstrate exploit on Coffee Lake Refresh and later, but the issue that let them create exploit exists all the way back to Kaby Lake. So they are also probably exploitable, but this specific exploit does not target them.
201984 · 4h ago
mitigations=off
Don't care.
matja · 4h ago
"Don't mind me running this piece of WASM in a webworker to collect all the useful encryption keys and cookies in your RAM..."
201984 · 2h ago
Has even a single web exploit ever been found in the wild? Until then, I'm not going to worry and probably not even then.
dzaima · 32m ago
As long as most people run with mitigations on, you're technically probably indeed safe. But you should still care that things get fixed with mitigations=on, otherwise you wouldn't have the shield of "almost everyone has mitigations enabled for this, so no one has reason to bother exploiting it"!
johnnyjeans · 3h ago
Uncaught ReferenceError: WebAssembly is not defined
vlovich123 · 2h ago
You don't need WASM to deploy Spectre/Meltdown. Vanilla JS works just fine which is what was demonstrated in the original paper.
brobinson · 29m ago
Didn't all the major browsers alter their timing APIs to make this impossible/difficult?
anyfoo · 16m ago
I'm not an expert, but I think you can only make this harder by intentionally making timers less precise (even adding some random fuzz). Someone may correct me if I'm wrong, but I think statistically, a less precise timer means you will just need a longer runtime.
Suppose you want to measure the distribution of the delay between recurring events (which is basically what's at the heart of those vulnerabilities). Suppose the delays are all sub-millisecond, and that your timer, to pick something ridiculous, only has a 2 second granularity.
You may at first think that you cannot measure the sub-millisecond distribution with such a coarse timer. But consider that events and timers are not synchronized to each other, so with enough patience, you will still catch some events barely on the left or on the right side of your 2 second timer tick. Do this over a long enough time, and you can reconstruct the original distribution. Even adding some randomness to the timer tick just means you need more samples to suss the statistic out.
Again, I am not an expert, and I don't know if this actually works, but that's what I came up with intuitively, and it matches with what I heard from some trustworthy people on the subject, namely that non-precision timers are not a panacea.
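A toy simulation of that argument (purely synthetic numbers, no real hardware timing, constants are arbitrary): with the window's start unsynchronized to the clock, the chance that a coarse tick lands inside the window is proportional to the window's length, so counting ticks over many trials recovers the fine-grained delay.

    #include <stdio.h>
    #include <stdlib.h>

    #define TICK   1000000.0   /* coarse timer granularity, arbitrary units */
    #define TRIALS 2000000L

    /* probability that a tick boundary falls inside a window of length d whose
       start is uniformly random relative to the tick: d / TICK */
    static double estimate(double true_delay)
    {
        long hits = 0;
        for (long i = 0; i < TRIALS; i++) {
            double start = ((double)rand() / RAND_MAX) * TICK;  /* random phase */
            if (start + true_delay >= TICK)   /* crossed a tick boundary? */
                hits++;
        }
        return (double)hits / TRIALS * TICK;  /* invert: hit rate * TICK ~ delay */
    }

    int main(void)
    {
        printf("true delay  60 -> estimated %.1f\n", estimate(60.0));
        printf("true delay 300 -> estimated %.1f\n", estimate(300.0));
        return 0;
    }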
bee_rider · 3h ago
Yeah, he should really turn mitigations on, so that when running arbitrary code from the internet he can be subject to 9999 vulnerabilities, instead of 10,000.
darkmighty · 3h ago
There are many kinds of vulnerabilities. Most are pretty mundane afaict. Breaking sandboxes and reading out your entire RAM is basically game over, existential vulnerability (second only to arbitrary code execution, though it can give you SSH keys I guess).
The mitigating factor is actually that you don't go to malicious websites all the time, hopefully. But it happens, including with injected code in ads and other stuff that may be enabled by secondary vulnerabilities.
anyfoo · 29m ago
I challenge you to name another readily available "read arbitrary RAM from userspace"[1] vulnerability.
[1] Not even including "potentially exploitable from JavaScript", which Spectre was. It's sufficient if you name one where an ordinary userspace program can do it.
gitroom · 3h ago
yeah this just makes me wanna see real world numbers on the slowdown, cuz honestly all these microcode fixes feel like trading off years of speed for maybe a little more peace of mind - you ever think we'll actually move off this cycle or is it just here to stay?
dzdt · 4h ago
The end-user processor slowdowns from Spectre and Meltdown mitigations were fairly substantial. Has anyone seen an estimate of how much the microcode updates for this new speculative vulnerability are going to cost in terms of slowdown?
leonidasv · 4h ago
> Our performance evaluation shows up to 2.7% overhead for the microcode mitigation on Alder Lake. We have also evaluated several potential alternative mitigation strategies in software with overheads between 1.6% (Coffee Lake Refresh) and 8.3% (Rocket lake)
Source: https://comsec.ethz.ch/research/microarch/branch-privilege-i...
Thanks, missed that! I remember seeing benchmarks showing like a 15% slowdown from the Spectre/Meltdown mitigations, so this is not as bad as that, but it is on top of the others too, I guess...
j45 · 5h ago
Since the cloud is someone else's computer, and someone else's shared CPU, is cloud hosting (including vps) potentially impacted?
Look forward to learning how this can be meaningfully mitigated.
matja · 4h ago
For reads across different VMs on the same CPU, theoretically TME-MK could mitigate the usefulness of the memory reads by having each VM access memory using a different memory encryption key, but I don't know of any hypervisors that implement this.
AMD has had SEV support in QEMU for a long time, which some cloud hosting providers use already, that would mitigate any such issue if it occurred on AMD EPYC processors.
andrewla · 5h ago
Intel claims [1] that they already have microcode mitigation. Like Spectre and Meltdown this is likely to have performance implications.
Spectre and Meltdown had some pretty big performance hits in the beginning. Wonder how much it will differ here in real world, third party (and independent) testing.
[1] https://www.intel.com/content/www/us/en/security-center/advi...
whatever1 · 5h ago
It’s dead, can you please stop stabbing it?
anonymars · 5h ago
I thought I understood these words, yet I don't understand what you mean.
arghwhat · 5h ago
> On an up to date Ubuntu 24.04
So not very up to date, but I suppose mitigations haven't changed significantly upstream since then.
necubi · 5h ago
24.04 is the most recent LTS (long term support) release; it's what users are meant to be running for anything important
arghwhat · 2h ago
My point is that it is not representative of the current state of the kernel.
The kernel has nothing to do with Ubuntu, its release schedule, and its LTSes. Distro LTS releases also often mean custom kernels, backports, hardware enablement, whatnot, which makes them a fork, so unless we're analyzing Ubuntu security rather than Linux security, mainline should be used.
42lux · 47m ago
Microcode updates have nothing to do with the kernel?
thomasdziedzic · 5h ago
That version is significant because it is the latest LTS release. Most servers use LTS releases.
blueflow · 5h ago
Ubuntu 24.04 is the current LTS release. Or are you intending to say that Ubuntu, regardless of version, is not up to date?
Edit: "LTS" added due to popular demand
pdpi · 5h ago
You need a qualifier there — the latest Ubuntu release is 25.04, but 24.04 is the current LTS release.
razemio · 4h ago
It is up to date, with security patches and fixes. That is obviously what is relevant here. That is why the parent comment got downvoted: it is up to date in the context of a security vulnerability.
It should be even more secure, since new software versions might introduce unknown attack vectors.
arghwhat · 2h ago
I am saying that any version of Ubuntu is not representative of the mainline kernel, which is what is relevant when it comes to analyzing current mitigations.
Distro LTS releases often mean custom kernels, backports, hardware enablement, whatnot, which makes it effectively a fork.
Unless we're interested in discovering kernel variation discrepancies, it's more interesting to analyze mainline.
7bit · 5h ago
There is a difference between an up2date Ubuntu 24.04 and an up2date Ubuntu.
And as security updates are back ported to all supported versions - and 24.04 being an LTS release, it is as up2date as it gets.
If you're being pedantic, be the right kind of pedantic ;)
arghwhat · 2h ago
The problem is that it's downstream backports and hardware enablement - you're running an old forked artisanal kernel maintained by Canonical, you will only get bugfixes known to be severe enough to be flagged, and all this patching deviates it from mainline and can itself introduce new security vulnerabilities not present in mainline.
This differs from an actual later release which is closer to mainline and includes all newer fixes, including ones that are important but weren't flagged, and with less risk of having new downstream bugs.
If you're going to fight pedantry by being pedantic, better be the right kind of pedantic. ;)
fwip · 5h ago
24.04 is an LTS (long term support) release, so it receives updates, including security updates, for much longer than a regular release. I believe it's a 5-year support window, and longer if you shell out for paid support.
arghwhat · 2h ago
These updates mean that you are no longer running a mainline kernel, but an Ubuntu fork with whatever backports and hardware enablement (and new bugs!) this might introduce. This is also true for other software.
LTS does not mean you get all updates, it only means you get to drag your feet for longer with random bugfixes. Only the latest release has updates.
anyfoo · 12m ago
This only matters if the mainline kernel since then somehow experienced changes which would affect this hardware vulnerability (fixed through microcode), which I see no indication of?
"Unfortunately for John, the branches made a pact with Satan and quantum mechanics [...] In exchange for their last remaining bits of entropy, the branches cast evil spells on future generations of processors. Those evil spells had names like “scaling-induced voltage leaks” and “increasing levels of waste heat” [...] the branches, those vanquished foes from long ago, would have the last laugh."
https://www.usenix.org/system/files/1401_08-12_mickens.pdf
> The Mossad is not intimidated by the fact that you employ https://. If the Mossad wants your data, they’re going to use a drone to replace your cellphone with a piece of uranium that’s shaped like a cellphone, and when you die of tumors filled with tumors, […] they’re going to buy all of your stuff at your estate sale so that they can directly look at the photos of your vacation instead of reading your insipid emails about them.
Paper: https://comsec.ethz.ch/wp-content/files/bprc_sec25.pdf
> [...] the contents of the entire memory to be read over time, explains Rüegge. “We can trigger the error repeatedly and achieve a readout speed of over 5000 bytes per second.” In the event of an attack, therefore, it is only a matter of time before the information in the entire CPU memory falls into the wrong hands.
On the bright side, they will get to enjoy a much better music scene, because they’ll be visiting the 90’s.
- Predictor updates may be deferred until sometime after a branch retires. Makes sense, otherwise I guess you'd expect that branches would take longer to retire!
- Dispatch-serializing instructions don't stall the pipeline for pending updates to predictor state. Also makes sense, considering you've already made a distinction between "committing the branch instruction" and "committing the result of the prediction".
- Privilege-changing instructions don't stall the pipeline for pending updates either. Also makes sense, but only if you can guarantee that the privilege level is consistent between making/committing a prediction. Otherwise, you might be creating a situation where predictions generated by code in one privilege level may be committed to state used in a different one?
Maybe this is hard because "current privilege level" is not a single unambiguous thing in the pipeline?
Do you know if there is any official recording or notes online?
Thanks in advance.
The Binary and Malware Analysis course that you mentioned builds on top of the book "Practical Binary Analysis" by Dennis Andriesse, so you could grab a copy of that if you are interested.
More info here: https://krebsonsecurity.com/2014/06/operation-tovar-targets-...
it's been a while back :)
If I knew what I was getting into at the time, I'd do it. I did pay for extra, but in my case it was the low Dutch rate, so for me it was 400 euro's to follow hardware security, since I already graduated.
But I can give a rough outline of what they taught. It has been years ago but here you go.
Hardware security:
* Flush/Reload
* Cache eviction
* Spectre
* Rowhammer
* Implement research paper
* Read all kinds of research papers of our choosing (just use VUSEC as your seed and you'll be good to go)
Binary & Malware Analysis:
* Using IDA Pro to find the exact assembly line where the unpacker software we had to analyze unpacked its software fully into memory. Also we had to disable GDB debug protections. Something to do with ptrace and nopping some instructions out, if I recall correctly (look, I only low level programmed in my security courses and it was years ago - I'm a bit flabbergasted I remember the rough course outlines relatively well).
* Being able to dump the unpacked binary program from memory onto disk. Understanding page alignment was rough. Because even if you got it, there were a few gotcha's. I've looked at so many hexdumps it was insane.
* Taint analysis: watching user input "taint" other variables
* Instrumenting a binary with Intel PIN
* Cracking some program with Triton. I think Triton helped to instrument your binary with the help of Intel PIN by putting certain things (like xor's) into an SMT equation or something and you had this SMT/Z3 solver thingy and then you cracked it. I don't remember got a 6 out of 10 for this assignment, had a hard time cracking the real thing.
Computer & Network Security:
* Web securtiy: think XSS, CSRF, SQLi and reflected SQLi
* Application security: see binary and malware analysis
* Network security: we had to create our own packet sniffer and we enacted a Kevin Mitnick attack (it's an old school one) where we had to spoof our IP addresses, figure out the algorithm to create TCP packet numbers - all in the blind without feedback. Kevin in '97 I believe attacked the San Diego super computer (might be wrong about the details here). He noticed that the super computer S trusted a specific computer T. So the assignment was to spoof the address of T and pretend we were sending packets from that location. I think... writing this packet sniffer was my first C program. My prof. thought I was crazy that this was my first time writing C. I was, I also had 80 hours of time and motivation per week. So that helped.
* Finding vulnerabilities in C programs. I remember: stack overflows, heap overflows and format strings bugs.
-----
For binary & malware analsys + computer & network security I highly recommend hackthebox.eu
For hardware security, I haven't seen an alternative. To be fair, I'm not looking. I like to dive deep into security for a few months out of the year and then I can't stand it for a while.
In the case of speculative execution, you need an insane amount of prep to use that exploit to actually do something. The only real way this could ever be used is if you have direct access to the computer where you can run low level code. Its not like you can write JS code with this that runs on browsers that lets you leak arbitrary secrets.
And in the case of systems that are valuable enough to exploit with a risk of a dedicated private or state funded group doing the necessary research and targeting, there should be a system that doesn't allow unauthorized arbitrary code to run in the first place.
I personally disable all the mitigations because performance boost is actually noticeable.
That's precisely what Spectre and Meltdown were though. It's unclear whether this attack would work in modern browsers but they did reenable SharedArrayBuffer & it's unclear if the existing mitigations for Spectre/Meltdown stimy this attack.
> I personally disable all the mitigations because performance boost is actually noticeable.
Congratulations, you are probably susceptible to JS code reading crypto keys on your machine.
Do you understand the scope of the issue? Do you know that this couldn't personally affect you in a dragnet (so, not targeted, but spread out, think opportunistic ransomware) attack?
Because this statement of yours:
> Its not like you can write JS code with this that runs on browsers that lets you leak arbitrary secrets.
was not true for Spectre. The original spectre paper notoriously mentions JS as an attack vector.
If you truly disable all mitigations (assuming CPU and OS allow you to do so), you will reopen that hole.
So:
> The only real way this could ever be used is if you have direct access to the computer where you can run low level code.
I'm a low level kernel engineer, and I don't know this to be true in the general case. JITs, i.e. the JavaScript ones, also generate "low level code". How do you know of this not being sufficient?
Doing what you want would essentially require a hardware architecture where every load/store has to go through some kind of "augmented address" that stores boundary information.
Which is to say, you're asking for 80286 segmentation. We had that, it didn't do what you wanted. And the reason is that those segment descriptors need to be loaded by software that doesn't mess things up. And it doesn't, it's "just a pointer" to software and amenable to the same mistakes.
A "far pointer" was, again, a *software* concept where you could tell the compiler that this particular pointer needed to use a different descriptor than the one the toolchain assumed (by convention!) was loaded in DS or SS.
(CHERI already exists on ARM and RISC-V though.)
Again, it's just not a software problem. In the real world we have hardware that exposes "memory" to running instructions as a linear array of numbers with sequential addresses. As long as that's how it works, you can demand an out of bounds address (because the "bounds" are a semantic thing and not a hardware thing).
It is possible to change that basic design principle (again, x86 segmentation being a good example), but it's a whole lot more involved than just "Rust Will Fix All The Things".
(*) ... although I don't think I can abstain ...
Why mention only Windows, what about Linux users?
Not expert enough to know what to look for to see if these particular mitigations are present yet.
> Does Branch Privilege Injection affect non-Intel CPUs?
> No. Our analysis has not found any issues on the evaluated AMD and ARM systems.
Source: https://comsec.ethz.ch/research/microarch/branch-privilege-i...
There are probably similar bugs in AMD and ARM, I mean how long did these bugs sit undiscovered in Intel, right?
Unfortunately the only real fix is to recognize that you can’t isolate code running on a modern system, which would be devastating to some really rich companies’ business models.
You could say we only update the predictor at retirement to solve this. But that can get a little dicy also: the retirement queue would have to track this locally and retirement frees up registers, better be sure it's not the one your jump needs to read. Doable but slightly harder than you might think.
Then people say "no that's not possible, we got security in place."
So then the researchers showcase a new demo where they use their existing knowledge with the same issue (i.e. scaling-induced voltage leaks).
I suspect this will go on and on for decades to come.
On top of that x86 seems to be pushed out more and more by ARM hardware and now increasingly RISC-V from China. But of course there's the US chip angle - will the US, especially after the problems during Covid, let a key manufacturer like Intel bite the dust?
It's not great but lol the sensationalism is hilarious.
Remember, gamers only make up a few percentage of users for what Intel makes. But that's what you hear about the most. One or two data center orders are larger than all the gaming cpus Intel will sell in a year. And Intel is still doing fine in the data center market.
Add in that Intel still dominates the business laptop market which is, again, larger than the gamer market by a pretty wide margin.
The two areas you mention (data center, integrated OEM/mobile) are the two that are most supply chain and business-lead dependent. They center around reliable deliveries of capable products at scale, hardware certifications, IT department training, and organizational bureaucracy that Intel has had captured for a long time.
But!
Data center specifically is getting hit hard from AMD in the x86 world and ARM on the other side. AWS's move to Graviton alone represents a massive dip in Intel market share, and it's not the only game in town.
Apple is continuing to succeed in the professional workspace, and AMD's share of laptop and OEM contracts just keeps going up. Once an IT department or their chosen vendor has retooled to support non-Intel, that toothpaste is not going back into the tube - not fully, at least.
For both of these, AMD's improvement in reliability and delivery at scale will be bearing fruit for the next decade (at Intel's expense), and the mindshare, which gamers and tech sensationalism are indicators for, has already shifted the market away from an Intel-dominated world to a much more competitive one. Intel will have to truly compete in that market. Intel has stayed competitive in a price-to-performance sense by undermining their own bottom line, but that lever only has so far it can be pulled.
So I'm not super bullish on Intel, sensationalism aside. They have a ton of momentum, but will need to make use of it ASAP, and they haven't shown an ability to do that so far.
Product aside, from a shareholder/business point of view (I like to think of this separately these days as financial performance is becoming less and less reflective of the end product) I think they are too big to fail.
From that piece of text on the blog, I don‘t quite unterstand if Kaby Lake CPUs are affected or not.
Suppose you want to measure the distribution of the delay between recurring events (which is basically what's at the heart of those vulnerabilities). Suppose the delays are all sub-milliseconds, and that your timer, to pick something ridiculous, only has a 2 second granularity.
You may at first think that you cannot measure the sub-millisecond distribution with such a corse timer. But consider that event and timers are not synchronized to each other, so with enough patience, you will still catch some events barely on the left or on the right side of your 2 second timer tick. Do this over a long enough time, and you can reconstruct the original distribution. Even adding some randomness to the timer tick just means you need more samples to suss the statistic out.
Again, I am not an expert, and I don't know if this actually works, but that's what I came up with intuitively, and it matches with what I heard from some trustworthy people on the subject, namely that non-precision timers are not a panacea.
The mitigating factor is actually that you don't go to malicious websites all the time, hopefully. But it happens, including with injected code on ads and stuff that may enabled by secondary vulnerabilities.
[1] Not even including "potentially exploitable from JavaScript", which Spectre was. It's sufficient if you name one where an ordinary userspace program can do it.
https://comsec.ethz.ch/research/microarch/branch-privilege-i...
Look forward to learning how this can be meaningfully mitigated.
AMD has had SEV support in QEMU for a long time, which some cloud hosting providers use already, that would mitigate any such issue if it occurred on AMD EPYC processors.
[1] https://www.intel.com/content/www/us/en/security-center/advi...
So not very up to date, but I suppose mitigations haven't changed significantly upstream since then.
The kernel has nothing to do with Ubuntu, its release schedule and LTS's. Distro LTS releases also often mean custom kernels, backports, hardware enablement, whatnot, which makes it a fork, so unless were analyzing Ubuntu security rather than Linux security, mainline should be used.
Edit: "LTS" added due to popular demand
Distro LTS releases often mean custom kernels, backports, hardware enablement, whatnot, which makes it effectively a fork.
Unless were interested in discovering kernel variation discrepancies, its more interesting to analyze mainline.
And as security updates are back ported to all supported versions - and 24.04 being an LTS release, it is as up2date as it gets.
If you're being pedantic, be the right kind of pedantic ;)
This differs from an actual later release which is closer to mainline and includes all newer fixes, including ones that are important but weren't flagged, and with less risk of having new downstream bugs.
If you're going to fight pedantism by being pedantic, better be the right kind of pedantic. ;)
LTS does not mean you get all updates, it only means you get to drag your feet for longer with random bugfixes. Only the latest release has updates.