Someone correct me if I am wrong, but self-mutating code is not as uncommon as the author portrays it.
I thought the whole idea of hotspot optimization in a JIT compiler is essentially self-mutating code.
Also, I spent a moderately successful internship at Microsoft working on dynamic assemblies. I never got deep enough into that to fully understand when and how customers were actually using it.
https://learn.microsoft.com/en-us/dotnet/fundamentals/reflec...
A program that can generate, compile, and execute new code is nothing special in the Common Lisp world. One can build lambda expressions, invoke the compile function on them, and call the resulting compiled functions. One can even assign these functions to the symbol-function slot of symbols, allowing them to be called from pre-existing code that had been making calls to that function named by that symbol.
BenjiWiebe · 31d ago
I know that no other language can match Lisp, but many languages can generate and execute new code, if they're interpreted. Compile, too, if they're JITted. They all require quite a bit of runtime support though.
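In C the nearest equivalent leans on the toolchain itself: generate source, shell out to the system compiler, and dlopen the result. A minimal sketch (POSIX; the file paths and the addone function are made up for illustration, error handling is mostly omitted, and older glibc also wants -ldl):

#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>

/* The "runtime support" in C is the whole toolchain: emit source,
   invoke the system compiler, then load the resulting library. */
int main(void) {
    FILE *f = fopen("/tmp/gen.c", "w");
    fprintf(f, "int addone(int x) { return x + 1; }\n");
    fclose(f);

    if (system("cc -shared -fPIC -o /tmp/gen.so /tmp/gen.c") != 0)
        return 1;

    void *lib = dlopen("/tmp/gen.so", RTLD_NOW);
    int (*addone)(int) = (int (*)(int))dlsym(lib, "addone");
    printf("%d\n", addone(41));  /* 42 */
    return 0;
}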
alcover · 32d ago
I often think this might allow fantastic runtime optimisations. I realise it would be hard to debug, but still...
barchar · 32d ago
It sometimes can, but you then have to balance the time spent optimizing against the time spent actually doing whatever you were optimizing.
Also on modern chips you must wait quite a number of cycles before executing modified code or endure a catastrophic performance hit. This is ok for loops and stuff, but makes a lot of the really clever stuff pointless.
A debugger's software breakpoints _are_ self-modifying code :)
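Concretely, on x86 Linux a software breakpoint is a one-byte patch: the debugger saves the byte at the target address and writes 0xCC (the INT3 opcode) over it through ptrace. A sketch, assuming child is already being traced and addr came from symbol lookup elsewhere:

#include <stdint.h>
#include <sys/ptrace.h>
#include <sys/types.h>

/* Plant an INT3 breakpoint in a traced child process.
   Returns the original word so the caller can restore it later. */
long set_breakpoint(pid_t child, uintptr_t addr) {
    /* Read the word containing the target instruction. */
    long orig = ptrace(PTRACE_PEEKTEXT, child, (void *)addr, NULL);
    /* Replace its low byte with 0xCC. */
    long patched = (orig & ~0xFFL) | 0xCC;
    ptrace(PTRACE_POKETEXT, child, (void *)addr, (void *)patched);
    return orig;
}

/* To resume past the breakpoint, a debugger restores the original
   byte, rewinds RIP by one, single-steps, then re-plants 0xCC. */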
vbezhenar · 32d ago
I used the GNU lightning library once for such an optimisation. I think it was an ICFPC 2006 task: I had to write an interpreter for a virtual machine. The naive approach worked but was slow, so I decided to speed it up a bit using JIT. It wasn't a 100% JIT, I think I just implemented it for loops, but it was enough to speed it up tremendously.
userbinator · 32d ago
Programs from the 80s-90s are likely to have such tricks. I have done something similar to "hardcode" semi-constants like frame sizes and quantisers in critical loops related to audio and video decompression, and the performance gain is indeed measurable.
econ · 31d ago
The 80's:
Say you set a value for some reason, and later you have to check IF it is set. If the condition needs to be checked many times, you patch in the code itself rather than storing a value somewhere and testing it. And if you need to repeatedly check whether something is still true, you overwrite the condition check with no-ops once it no longer holds.
Also funny are insanely large loop unrolls with hard-coded values. You could make a kind of rainbow table of those.
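A minimal x86-64 Linux sketch of that guard-patching trick, with the routine hand-assembled so the byte offsets are known. While the guard jump is live, calls take the "not ready" path; once the condition is settled for good, the two-byte jump is overwritten with NOPs and the fast path runs unconditionally:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    unsigned char code[] = {
        0xEB, 0x06,                      /* jmp +6: the guard        */
        0xB8, 0x2A, 0x00, 0x00, 0x00,    /* mov eax, 42 (fast path)  */
        0xC3,                            /* ret                      */
        0xB8, 0x00, 0x00, 0x00, 0x00,    /* mov eax, 0 ("not ready") */
        0xC3                             /* ret                      */
    };
    void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    memcpy(page, code, sizeof code);
    mprotect(page, 4096, PROT_READ | PROT_EXEC);
    int (*fn)(void) = (int (*)(void))page;
    printf("guarded:   %d\n", fn());     /* 0  */

    /* Condition settled: replace the guard jump with two NOPs. */
    mprotect(page, 4096, PROT_READ | PROT_WRITE);
    memset(page, 0x90, 2);
    mprotect(page, 4096, PROT_READ | PROT_EXEC);
    printf("unguarded: %d\n", fn());     /* 42 */
    return 0;
}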
alcover · 32d ago
> "hardcode" semi-constants
You mean you somehow avoided a load. But what if the constant was already placed in a register? Also, how could you pinpoint the reference to your constant in the machine code? I'm quite a layman about all this.
ronsor · 32d ago
> Also how could you pinpoint the reference to your constant in the machine code?
Not OP, but often one uses an easily identifiable dummy pattern like 0xC0DECA57 or 0xDEADBEEF which can be substituted without also messing up the machine code.
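A minimal sketch of that idea (x86-64 Linux; it assumes the compiler emits the magic as a literal imm32, which is worth verifying in a disassembler, and that the OS lets you mprotect the text segment):

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define MAGIC 0xC0DECA57u   /* recognisable dummy, unlikely to collide */

__attribute__((noinline))
static unsigned scale(unsigned x) {
    return x * MAGIC;       /* MAGIC should appear as an imm32 here */
}

int main(void) {
    /* Make the page(s) holding scale() writable as well. */
    long pagesz = sysconf(_SC_PAGESIZE);
    uintptr_t base = (uintptr_t)scale & ~(uintptr_t)(pagesz - 1);
    mprotect((void *)base, 2 * pagesz, PROT_READ | PROT_WRITE | PROT_EXEC);

    /* Scan the start of scale() for the little-endian magic bytes. */
    unsigned char *p = (unsigned char *)scale;
    uint32_t magic = MAGIC, real = 10;  /* the "semi-constant" */
    for (int i = 0; i < 64; i++) {
        if (memcmp(p + i, &magic, sizeof magic) == 0) {
            memcpy(p + i, &real, sizeof real);
            break;
        }
    }
    printf("%u\n", scale(3));  /* 30, if the patch landed */
    return 0;
}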
mananaysiempre · 32d ago
If you’re willing to parse object files (a much easier proposition for ELF than for just about anything else), another option is to have the source code mention the constants as addresses of external symbols, then parse the relocations in the compiled object. Unfortunately, I’ve been unable to figure out a reliable recipe to get a C compiler to emit absolute relocations in position-independent code, even after restricting myself to GCC and Clang for x86 Linux; in some configurations it works and in others you (rather pointlessly) get a PC-relative one followed by an add.
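The source side of that trick looks roughly like this (hypothetical symbol name; the object file is the product here, and the interesting part, parsing its relocations and rewriting the slot, happens in the build tooling, so linking it unmodified would fail on the undefined symbol):

#include <stdint.h>

/* Deliberately never defined: its "address" is the patchable
   constant. Every use produces a relocation against frame_size in
   the object file, which a post-processing step can locate and
   rewrite. */
extern char frame_size;

uint32_t bytes_for_frames(uint32_t n) {
    return n * (uint32_t)(uintptr_t)&frame_size;
}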
userbinator · 32d ago
All the registers were already taken.
You use a label.
Retr0id · 32d ago
It already does, in the form of JIT compilation.
alcover · 32d ago
OK but I meant in already native code, like in a C program - no bytecode.
lmm · 32d ago
If you are generating or modifying code at runtime then how is that different from bytecode? Standardised bytecodes and JITs are just an organised way of doing the same thing.
connicpu · 32d ago
LuaJIT has a wonderful dynamic code generation system in the form of the DynASM[1] library. You can use it separately from LuaJIT to generate machine code at runtime, optimized for a particular problem.
[1]: https://luajit.org/dynasm.html
The Linux kernel had the same idea, and now it has "static keys". It's both impressive and terrifying.
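For reference, the kernel-side API looks roughly like this (a sketch after include/linux/jump_label.h, so not compilable standalone; handle_packet and trace_packet are made-up names). The disabled branch is compiled to a NOP that gets patched in place to a jump when the key is flipped:

#include <linux/jump_label.h>

void trace_packet(void);   /* hypothetical slow path */

static DEFINE_STATIC_KEY_FALSE(tracing_enabled);

void handle_packet(void)
{
	/* Compiles to a NOP here; enabling the key rewrites it in
	   place to a jump, so the disabled case costs ~nothing. */
	if (static_branch_unlikely(&tracing_enabled))
		trace_packet();
	/* ... fast path ... */
}

/* In slow-path setup/teardown code: */
void tracing_on(void)  { static_branch_enable(&tracing_enabled); }
void tracing_off(void) { static_branch_disable(&tracing_enabled); }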
xixixao · 31d ago
I’ve been thinking a lot about this topic lately, even studying how executables look on arm macOS. My motivation was exploring truly fast incremental compilation for native code.
The only way to do this now on macOS is remapping whole pages as JIT. This makes it quite a challenge but still it might work…
oxcabe · 32d ago
It's impressive how well laid out the content in this article is. The spacing, tables, and code segments all look pristine to me, which is especially helpful given how dense and technical the content is.
f1shy · 31d ago
I have the suspicion that there is a high correlation between how organized the content is, and how organized and clear the mind of the writer is.
AStonesThrow · 32d ago
It was designed by Elves on Christmas Island where Dwarves run the servers and Hobbits operate the power plant
belter · 32d ago
I guess this would not work on OpenBSD because of W^X?
mananaysiempre · 32d ago
Not as is, but I think OpenBSD permits you to map the same memory twice, once as W and once as X (which would be a reasonable hoop to jump through for JITs etc., except there’s no portable way to do it). ARM64 macOS doesn’t even permit that, and you need to use OS-specific incantations[1] that essentially prohibit two JITs coexisting in the same process.
[1] https://developer.apple.com/documentation/apple-silicon/port...
No, the protection is per-thread. You can run the JITs in different threads
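The Apple Silicon incantation goes roughly like this (a sketch: per-thread write-protect toggle plus an explicit icache flush; it needs the JIT entitlement under the hardened runtime, and error handling is omitted):

#include <libkern/OSCacheControl.h>
#include <pthread.h>
#include <string.h>
#include <sys/mman.h>

/* arm64: mov w0, #42 ; ret */
static const unsigned char kCode[] = {
    0x40, 0x05, 0x80, 0x52,   /* mov w0, #42 */
    0xC0, 0x03, 0x5F, 0xD6    /* ret         */
};

int main(void) {
    void *page = mmap(NULL, 16384, PROT_READ | PROT_WRITE | PROT_EXEC,
                      MAP_PRIVATE | MAP_ANONYMOUS | MAP_JIT, -1, 0);

    pthread_jit_write_protect_np(0);           /* this thread: writable   */
    memcpy(page, kCode, sizeof kCode);
    pthread_jit_write_protect_np(1);           /* this thread: executable */
    sys_icache_invalidate(page, sizeof kCode); /* ARM needs the flush     */

    int (*fn)(void) = (int (*)(void))page;
    return fn() == 42 ? 0 : 1;
}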
rkeene2 · 32d ago
On Linux it also needs mprotect() to change the permissions on the page so it can write to it. The OpenBSD man page[0] indicates that it supports this as well, though it notes that not all implementations are guaranteed to allow it; my guess is it would generally work.
[0] https://man.openbsd.org/mprotect.2
It's not required on Linux, if the ELF headers are set up such that the page is mapped rwx to begin with (but rwx mappings are generally frowned upon from a security perspective).
akdas · 32d ago
I was thinking the same thing. Usually, you'd want to write the new code to a page that you mark as read and write, then switch that page to read and execute. This becomes tricky if the code that's doing the modifying is in the same page as the code being modified.
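That flow in miniature (x86-64 Linux sketch, with a hand-assembled "mov eax, 42; ret" standing in for real generated code):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    /* mov eax, 42 ; ret */
    static const unsigned char code[] =
        { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

    /* Stage 1: the page is writable but not executable. */
    void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    memcpy(page, code, sizeof code);

    /* Stage 2: trade W for X, then call. Never both at once. */
    mprotect(page, 4096, PROT_READ | PROT_EXEC);
    int (*fn)(void) = (int (*)(void))page;
    printf("%d\n", fn());  /* 42 */
    return 0;
}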
timewizard · 32d ago
The way it's coded it wouldn't; however, you can map the same shared memory twice: once with R|W and a second time with R|X. Then you can write into one region and execute out of its mirrored mapping.
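On Linux, one way to get that double mapping is memfd_create (shm_open works similarly on other unixes). A sketch:

#define _GNU_SOURCE
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    /* One shared buffer, two views of it. */
    int fd = memfd_create("jit", 0);
    ftruncate(fd, 4096);

    unsigned char *w = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, 0);   /* R|W view */
    void *x = mmap(NULL, 4096, PROT_READ | PROT_EXEC,
                   MAP_SHARED, fd, 0);            /* R|X view */

    /* x86-64: mov eax, 42 ; ret -- written via w, run via x. */
    static const unsigned char code[] =
        { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };
    memcpy(w, code, sizeof code);

    int (*fn)(void) = (int (*)(void))x;
    return fn() == 42 ? 0 : 1;
}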
Someone · 31d ago
Fun article, but the resulting code is extremely brittle:
- assumes x86_64
- makes the invalid assumption that functions get compiled into a contiguous range of bytes (I’m not aware of any compiler that violates that, but especially with profile-guided optimization or compilers that try to minimize program size, that may not be true, and there is nothing in the standard that guarantees it)
- assumes (as the article acknowledges) that “to determine the length of foo(), we added an empty function, bar(), that immediately follows foo(). By subtracting the address of bar() from foo() we can determine the length in bytes of foo().”. Even simple “all functions align at cache lines” slightly violates that, and I can see a compiler or a linker move the otherwise unused bar away from foo for various reasons.
- makes assumptions about the OS it is running on.
- makes assumptions about the instructions that its source code gets compiled into. For example, in the original example, a sufficiently smart compiler could compile
void foo(void) {
    int i = 0;
    i++;
    printf("i: %d\n", i);
}
as
void foo(void) {
    printf("1\n");
}
or maybe even
void foo(void) {
    puts("1");
}
Changing compiler flags can already break this program.
Also, why does this example work without flushing the instruction cache after modifying the code?
nekitamo · 31d ago
For the mainstream OSes (Windows, macOS, Linux, Android), you don't need to flush the instruction cache on most x86 CPUs after modifying the code segment dynamically, since x86 keeps the instruction cache coherent with writes to memory, but you do on ARM and MIPS.
This has burned me before while writing a binary packer for Android.
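The portable-ish fix on those architectures is the compiler builtin that GCC and Clang both provide; something like:

#include <stddef.h>

/* Publish freshly written machine code at buf[0..len) to the
   instruction stream. Essentially free on x86; on ARM and MIPS it
   emits the required cache-maintenance instructions. */
static void publish_code(void *buf, size_t len) {
    __builtin___clear_cache((char *)buf, (char *)buf + len);
}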
znpy · 31d ago
The author clearly explained that the whole article is more a demonstration for illustrative purposes than anything else.
> Changing compiler flags can already break this program.
That's not the point of the article.
saagarjha · 31d ago
They check all those assumptions by disassembling the code.
Cloudef · 31d ago
> self-modifying code
> brittle
I mean that is to be very much expected, unless someone comes up with a programming language that fully embraces the concept.