Reading this thread leaves me with the impression that most posters advocating learning assembly language have never had to use it in a production environment. It sucks!
For the overwhelming majority of programmers, assembly offers absolutely no benefit. I learned (MC6809) assembly after learning BASIC. I went on to become an embedded systems programmer in an era when compilers were still pretty expensive, and I worked for a cheapskate. I wrote an untold amount of assembly for various microcontrollers over the first 10 years of my career. I honestly can't say I got any more benefit out of it than programming in C; it just made everything take so much longer.
I once, for a side gig, had to write a 16-bit long-division routine on a processor with only one 8-bit accumulator. That was the point at which I declared that I'd never write another assembly program. Luckily, by then gcc supported some smaller processors, so I could switch to the Atmel AVR series.
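For anyone curious, that kind of routine is essentially the schoolbook shift-and-subtract loop. A minimal C sketch of unsigned 16-by-16 restoring division (not the original assembly, and glossing over the pain of doing it with a single 8-bit accumulator):

    #include <stdint.h>

    /* Unsigned 16/16 -> 16 restoring division: shift in one dividend bit at a
       time, try to subtract the divisor, set a quotient bit when it fits.
       Divisor must be non-zero. */
    uint16_t udiv16(uint16_t dividend, uint16_t divisor, uint16_t *remainder)
    {
        uint16_t quotient = 0;
        uint32_t rem = 0;   /* wider than 16 bits so the shift can't overflow */

        for (int i = 15; i >= 0; i--) {
            rem = (rem << 1) | ((dividend >> i) & 1u);  /* bring down the next bit */
            if (rem >= divisor) {                       /* trial subtraction fits */
                rem -= divisor;
                quotient |= (uint16_t)(1u << i);
            }
        }
        if (remainder)
            *remainder = (uint16_t)rem;
        return quotient;
    }

On the actual part you would keep the 16-bit remainder in a register pair and use the carry flag for the extra bit, which is where the fun with a single 8-bit accumulator comes in.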
a_cardboard_box · 1d ago
> I once, for a side gig, had to write a 16-bit long-division routine on a processor with only one 8-bit accumulator. That was the point at which I declared that I'd never write another assembly program.
This is exactly the kind of job I'd enjoy! A perfectly doable technical challenge with clear requirements. Some people like solving Sudoku puzzles, I like solving programming puzzles.
I guess I'm just not "the overwhelming majority of programmers".
HeyLaughingBoy · 1d ago
> doable technical challenge with clear requirements
That's a Project Management issue, not an implementation concern.
In my case, there was no requirement that said "use 16-bit long division." However, we had committed to a particular processor family (MC68HC05), and the calculation precision required 16-bit math. IIRC, there was a compiler available, but it cost more than the rest of the project and the code it produced wouldn't have fit into the variant of the processor that I was using anyway.
The actual requirement would have looked more like "detect a 0.1% decrease in signal that persists for 10 seconds, then do X."
Animats · 1d ago
Oh, yes, that era. I had to program a MC68HC11 in Forth because the C compiler was so expensive.
markus_zhang · 1d ago
Man what a blast!
favorited · 1d ago
I think the majority of programmers would enjoy it, but most would first need to pick an ISA (something older is probably going to be more approachable for beginners), learn enough about it to understand basic arithmetic instructions, learn enough about the dev tools to be able to assemble, link, and execute their code, etc.
For most folks, that's going to be a couple days of prep work before they can get to the fun part of solving the puzzle.
markus_zhang · 1d ago
I totally agree. I read and commented the source code of Woz's SWEET16 and it was a blast to fully understand it.
But of course, it might not be that rosy under tight time constraints.
commandlinefan · 1d ago
> the kind of job I'd enjoy
I feel the same way, but I also can't help but imagine the boss jumping up and down and throwing chairs and screaming "how can you not be done yet? You're a programmer and this is a program and it's been three _hours_ already".
1vuio0pswjnm7 · 1d ago
"I guess I'm just not the "overwhelming majority of programmers"."
The "overwhelming" majority of programmers may be underwhelming
Some readers may be unimpressed by programmers who complain about and criticise assembly language, e.g., claiming it offers "no benefit" to others, especially when no one is forcing these programmers to use it
WalterBright · 1d ago
I did a lot of assembler programming before discovering C. I learned C in maybe an hour because of that.
Not knowing assembler means programmers have a bit of a blind spot towards what are expensive things to do in C vs what generates the best code.
For example, debugging a program sometimes requires looking at the generated assembler. Recently I was wondering why my deliberate null pointer dereference wasn't generating an exception. Looking at the assembler, there were no instructions generated for it. It turns out that since a null pointer dereference was undefined behavior, the compiler didn't need to generate any instructions for it.
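A hedged illustration of the same class of surprise (not the exact code in question): because a null dereference is undefined behavior, the optimizer is allowed to assume it never happens and delete code around it.

    #include <stdio.h>

    /* Because dereferencing a null pointer is undefined behavior, an optimizing
       compiler may reason "p was dereferenced, therefore p cannot be NULL" and
       fold the check below away as dead code, so the expected diagnostic (or
       crash) quietly disappears. */
    int read_through(int *p)
    {
        int x = *p;              /* undefined behavior if p == NULL */
        if (p == NULL) {         /* may be deleted as unreachable */
            puts("p was NULL");
            return -1;
        }
        return x;
    }

    int main(void)
    {
        return read_through(NULL);   /* UB: no guarantee of a fault here */
    }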
ozim · 1d ago
I build web applications that run on top of databases, web servers and frameworks.
I do need to understand how indexes in the db engine work, I need to understand that there might be socket exhaustion in the system, and I do need to understand how my framework allocates data on the heap vs the stack.
Having to drop down to instructions is for the web server, db, and framework developers, not for me. I do have a clue how the low level works, but there is no need for me to go there.
That is the part where the parent poster is correct: there are better ways for developers to spend their time. Trust your database, web servers and framework, and learn how those work in depth; you can skip assembler, because all of those will take a lot of time anyway, and most likely those are the ones you should/can tweak to fix performance, not assembler.
marssaxman · 1d ago
> since a null pointer dereference was undefined behavior, the compiler didn't need to generate any instructions for it.
I deeply hate this attitude in modern compiler design.
WalterBright · 1d ago
Me too. My compilers don't do that.
marssaxman · 1d ago
I'm glad to hear it. Thank you for caring.
cogman10 · 1d ago
The issue is it's a moving target. What was expensive yesterday could be fast today based on compiler optimizations (and potentially vice versa).
Further, changes in the ISA can open up gains in performance that weren't available in yesteryear. An example of this would be SIMD instruction usage.
It's not a bad idea to know enough assembly language to understand why code is slow. However, the general default should be to avoid writing it. Your time would be better spent getting a newer version of your compiler and potentially enabling things like PGO.
zahlman · 1d ago
> most posters advocating learning assembly language have never had to use it in a production environment... For the overwhelming majority of programmers, assembly offers absolutely no benefit.
I don't follow. Why should assembly have to be useful or pleasant in a production environment, for learning it to be useful?
I was taught a couple different flavours of assembly in university, and I found it quite useful for appreciating what the machine actually does. Certainly more so than C. Abstractions do ultimately have to be rooted in something.
atoav · 1d ago
You and the post you commented on both make valid points. If we're talking about using assembly as a broad general-purpose programming environment, that would be a mess (which is precisely why it has no broad adoption). When we talk about assembly as a niche special-purpose solution, we come to a different conclusion; coincidentally, this is where assembly is still used today: environments where we need highly optimized code.
Your point about education is orthogonal to the point made. I agree with you that learning assembly can be a good way to teach people how computers work at a low level, but that has nothing to do with whether it is useful as a skill to learn.
As someone teaching similar things at the university level to a non-tech audience, I always have to carefully weigh how many "practically useless" lessons a typical art student can stomach. And which kind of lesson will just deter them, potentially forever.
zahlman · 1d ago
> I agree with you that learning assembly can be a good way to teach people how computers work on a low level, but that has nothing to do with whether it is useful as a skill to learn.
I don't understand the distinction you're trying to make. The post I was replying to specifically discussed "learning assembly language". My entire point is that "learning assembly language" has purposes other than creating practical real-world programs in assembly.
bongodongobob · 1d ago
Is it useful to learn bagpipes? I guess learning for its own sake is good, but if you want to join a band, guitar or keyboards are going to be a better bet and learning bagpipes first isn't going to do much for you.
barrkel · 1d ago
Do bagpipes explain the mystery of sand performing calculations and taking actions? Do they give you an intuition for connecting how CPUs and memory accesses and cache hierarchies work with high level code, in such a way that you can start to understand why one version of code might be faster or slower than another?
If you can't see through field accesses and function calls to memory indirections, anything you might read about how TLBs and caches and branch prediction work doesn't connect to much.
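A small C illustration of what "seeing through field accesses to memory indirections" buys you: the list walk below is a chain of dependent loads, so every cache miss stalls the next step, while the array walk is sequential, prefetch-friendly traffic. (Names are made up; this is just a sketch.)

    #include <stddef.h>

    struct node { int value; struct node *next; };

    /* Each iteration must finish loading n->next before the next iteration can
       start: a chain of dependent loads, where a cache or TLB miss stalls the
       whole chain. */
    int sum_list(const struct node *n)
    {
        int s = 0;
        for (; n != NULL; n = n->next)
            s += n->value;
        return s;
    }

    /* Contiguous accesses that the hardware prefetcher can stream ahead of. */
    int sum_array(const int *a, size_t len)
    {
        int s = 0;
        for (size_t i = 0; i < len; i++)
            s += a[i];
        return s;
    }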
bongodongobob · 1d ago
Guess what, almost no one knows how to program in assembly and yet everything is working out pretty good.
barrkel · 1d ago
I can say the performance of Windows Explorer lately, compared to how it was in Windows NT, does not impress me.
strken · 1d ago
If a guitar was an abstraction layer that was implemented by low-level bagpipes then a) that would be awesome and b) guitar players would find their guitar playing to benefit from bagpipe lessons. At the very least they'd be able to understand and maintain their guitar better.
bongodongobob · 1d ago
The connection is that they both play notes. You can play the same songs on both but no one wants to hear bagpipes.
SAI_Peregrinus · 1d ago
You can't play most of the same songs on both. Bagpipes (well, most forms of bagpipe, there are dozens, but when people say "bagpipes" without qualification they tend to mean the Scottish "Great Highland Bagpipe") are a diatonic instrument playing a just-intonation scale tuned not to cause discordant notes with their own drones, while guitars are a chromatic instrument fretted to play an equal-tempered intonation. The GHB plays in something rather close to the modern Mixolydian A mode with an augmented 4th, not any of the major or minor keys of modern Western music. The GHB and the guitar are entirely incompatible instruments, unless you're talking about a classical guitar with tied-on gut frets that could be replaced to allow playing the GHB scale.
HeyLaughingBoy · 19h ago
I have no idea what any of that means, but I love the deep knowledge that it expresses :-)
zahlman · 8h ago
I'll try to simplify: you can't readily adjust the tuning of a bagpipe; it's tuned in a way that would make it sound horribly dissonant against other instruments (for important music-theory reasons); and it doesn't even play all the notes, so you can't play in all the major and minor keys of Western (== European from c. 1580 onward + American) music tradition - you're stuck with scales that only make sense for the genre of music that's specifically written for the instrument.
strken · 21h ago
What you're trying to say is that assembly is like the bagpipes and impractical. What I'm trying to say is that's a terrible metaphor because the main reason to learn assembly is understanding what your non-assembly code is actually executed as.
spc476 · 1d ago
Learning the accordion didn't hurt Weird Al's career, nor did using the flute hurt Ian Anderson (lead vocalist and flutist of Jethro Tull).
bongodongobob · 1d ago
These are edge cases. Way to miss the point.
mabster · 1d ago
I started my career in assembly and my use of it has reduced over time. Towards the end of the gamedev work I was still reading a lot of assembly but no longer writing it (using intrinsics instead). It was definitely a lot slower to write.
But there are a number of things we did that are either unavailable or difficult in C:
- Guaranteed tail calls
- Returning multiple values without touching memory
- Using the stack pointer as a general purpose pointer for writing to memory
- Changing the stack pointer to support co-routines
- Using our own register / calling convention (e.g. setting aside a register to be available to all routines)
- Unpicking the stack to reduce register setup for commonly used routines or fast longjmps
- VM "jump tables" without requiring an indirection to know where to jump to
ferguess_k · 1d ago
Most of us do not have the chance to use it in production. I think that's where the fancy came from.
We are also getting burned out by the modern Agile web/data/whatever development scene and would like to drill really deep into one specific area without stakeholders breathing down our necks every few hours, which assembly programming conveniently provides.
I also consider the grit (forced or voluntary) to be a baptism of fire that significantly improves two important things: the programmer's understanding of the system, and the programmer's ability to run low-level code in their head. Is it suffering? Definitely, but it is the kind of suffering that brings technical prowess.
Most of us do not have the privilege to suffer properly. Would you prefer to suffer from incomplete documentation, very low-level code, and banging your head against a wall on tough technical problems, or to suffer from incomplete documentation, layers and layers of abstraction, stakeholders changing requirements every day, and actually knowing very little about the technical stuff? I think it is an easy choice, at least for me. If there is an assembly language / C job that is willing to take me in, I'll do it for half the salary I'm earning.
flohofwoe · 1d ago
On 8-bit home computer CPUs like the 6502 or Z80, high level programming languages like C simply were not an option, you left too much performance on the table (not to mention BASIC which was easily 100x slower than handwritten assembly).
Forth was quite acceptable performance wise, but that's barely above a good macro assembler.
And after the 8-bitters, assembly coding on the Amiga was pure pleasure - also for large programs, since apart from the great 68k ISA the entire Amiga hardware and operating system was written to make assembly coding convenient (and even though C was much better on the 68k, most serious programs used a mix of C and assembly).
(also, while writing assembly code today isn't all that important, reading assembly code definitely is when looking at compiler output and trying to figure out why and how the compiler butchered my high level code).
jamesfinlayson · 1d ago
> (also, while writing assembly code today isn't all that important, reading assembly code definitely is when looking at compiler output and trying to figure out why and how the compiler butchered my high level code).
Agreed - I wouldn't be able to write any x86 assembly without a bit of help, but having done some game reverse engineering I've learned enough to make sense of compiler generated code.
pjmlp · 1d ago
To add to that, there is a reason why all modern JITs also have ways to look into the generated code.
Anyone curious how their JVM, CLR, V8, ART, Julia, ... gets massaged into machine code only needs to learn about the related tools in the ecosystem.
Some of them are available on online playgrounds like Compiler Explorer, SharpLab, ...
acegopher · 1d ago
> the entire Amiga hardware and operating system was written to make assembly coding convenient
I am curious what specific examples do you have of the HW and OS being made/written to make ASM convenient?
flohofwoe · 1d ago
The hardware could be controlled via memory mapped 16-bit registers, e.g. checking whether the left mouse button is down is a single instruction:
btst #6, $bfe001
The OS used a simple assembly-friendly calling convention, parameters were passed in registers instead of the stack (and the API documentation mentioned which parameters are expected in which registers), and the reference manuals usually had both C and assembly examples, etc... basically lots of little things to make the lives of assembly coders easier.
Also, using a 68000 instead of a shitty Intel processor was a huge boon to assembly programming. Ultimately Intel won in the market and eventually even shipped processors that weren't profoundly unpleasant at the assembly level, but the 68000 is still a much more pleasant architecture for the assembly programmer. ARM is nicer still, but this was before ARM's existence as a separate company from Acorn.
craftkiller · 1d ago
I never used it in production and yet learning it absolutely provided me with benefits. I didn't understand pointers until I spent a weekend learning assembly.
Const-me · 1d ago
I think writing assembly indeed offers no benefit for most developers. However, being able to read and understand assembly is generally useful.
It enables debugging binaries and crash dumps without complete source code, like the DLLs shipped with Windows or third-party DLLs. It also lets you understand what compilers (both traditional and JIT) did to your source code, which is useful when doing performance optimization.
anta40 · 1d ago
I write mobile apps for a living (mostly Java/Kotlin, a little bit of Flutter/RN), so yeah, I agree assembly is practically useless for professional work.
But for tinkering (e.g writing GBA/NES games), hell why not? It's fun.
grishka · 1d ago
I've also heard the opinion that modern compilers are better at generating optimized code than someone writing assembly by hand. Not sure how true it is, but considering the unfathomable complexity of modern CPUs, it does feel believable.
mabster · 1d ago
As a low level performance guy I trust the compiler nowadays, especially with deep instruction pipelines. The compiler is beatable - a lot of the decisions are heuristic - but it takes a lot of work to beat it.
GianFabien · 1d ago
Last time I looked intel CPUs had like 1700 instructions. Every generation comes with an even more expanded ISA. I doubt that compilers use even a fraction of the ISA. Especially considering that binaries are often expected to run on a wide range of older CPUs. I know that there are intrinsic functions which provide access to some of the powerful, yet special purpose instructions. It is unrealistic to expect the compiler to make effective use of all the fancy instructions you paid for with your latest hardware upgrade.
genewitch · 1d ago
> It is unrealistic to expect the compiler to make effective use of all the fancy instructions you paid for with your latest hardware upgrade.
I'd add "yet" - we runinafed that the reason new machines with similar shapes (quad core to quad core of a newer generation) doesn't immediately seem like a large a jump as it ought, in performance, is because it takes time for people other than intel to update their compilers to effectively make use of the new instructions. icc is obviously going to more quickly (in the sense of how long after the CPU is released, not `time`) generate faster executing code on new Intel hardware. But gcc will take longer to catch up.
There's a sweet spot from about 1-4 years after initial release where hardware speeds up, but toward the end of that run programs bloat and wipe all the benefits of the new instructions; leading to needing a new CPU, that isn't that much faster than the one you replaced.
Yet.
Which reminds me, I need to benchmark a Linux kernel compile to see if my supposition above is correct. I have the timings from when I first bought it, as compared to a 10-year-old HP 40-core machine (the Ryzen 5950 is 5% faster but used a quarter of the wall power).
grishka · 21h ago
These kinds of SIMD instructions are usually used by things like media codecs and DSPs. They would include several versions of the performance-critical number-crunching code and would pick the best one at runtime depending on which SIMD instructions your CPU supports.
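A hedged sketch of that dispatch pattern in C, using GCC/Clang's __builtin_cpu_supports() on x86; the function names and the trivial kernels are made up for illustration:

    #include <immintrin.h>
    #include <stddef.h>

    /* Portable fallback. */
    static void add_scalar(const float *a, const float *b, float *out, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            out[i] = a[i] + b[i];
    }

    /* AVX2 path; the target attribute lets us use the intrinsics without
       building the whole file with -mavx2. */
    __attribute__((target("avx2")))
    static void add_avx2(const float *a, const float *b, float *out, size_t n)
    {
        size_t i = 0;
        for (; i + 8 <= n; i += 8) {
            __m256 va = _mm256_loadu_ps(a + i);
            __m256 vb = _mm256_loadu_ps(b + i);
            _mm256_storeu_ps(out + i, _mm256_add_ps(va, vb));
        }
        for (; i < n; i++)          /* scalar tail */
            out[i] = a[i] + b[i];
    }

    /* Pick the best implementation at runtime, once the CPU is known. */
    void add_arrays(const float *a, const float *b, float *out, size_t n)
    {
        if (__builtin_cpu_supports("avx2"))
            add_avx2(a, b, out, n);
        else
            add_scalar(a, b, out, n);
    }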
mystified5016 · 14h ago
If and only if someone has taken time to write specific optimizations for your specific CPU.
In embedded land, if your microcontroller is unpopular, you don't get much in the way of optimization. The assembly GCC generates is frankly hot steaming trash and an intern with an hour of assembly experience can do better. This is not in any way an exaggeration.
I've run into several situations where hand-optimized assembly is tens of times faster than optimized C mangled by GCC.
I do not trust compilers anymore unless it's specifically for x86_64, and only for CPUs made this decade
WalterBright · 1d ago
Sometimes you don't really want to write in assembler. Like loading a constant into a register on AArch64. The instructions to do it are pretty wacky, and it's hard to see if you wrote the correct combination to load the value. Best to let the compiler do it for you (or use godbolt.org to get the right mix!). The same for floating point constants.
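As an illustration, an arbitrary 64-bit constant written in C, where the compiler picks the instruction combination (a sketch, not the code generator mentioned below):

    #include <stdint.h>

    /* On AArch64 this typically compiles to one movz plus up to three movk
       instructions (one per non-zero 16-bit chunk), or a literal-pool load;
       exactly the sequence that is fiddly to assemble by hand. */
    uint64_t magic_constant(void)
    {
        return 0x123456789ABCDEF0u;
    }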
Once I got the code sequences for this right on my AArch64 code generator, I don't have to ever figure it out again!
I could have used that during the hours I spent figuring it out!
vardump · 1d ago
I expect a sufficiently good macro assembler should be able to do it as well.
WalterBright · 1d ago
Then you might as well use a HLL.
pjmlp · 1d ago
As someone that has spent some time in the 8 and 16 bit demoscene, I used my share of Z80, 80x86 and 68000.
It is all a matter of having high quality macro assemblers, and people that actually care to write structured documented code in Assembly.
When they don't care, usually not even writing in C and C++ will save the kind of code they write.
m463 · 1d ago
I've used "machine code" plenty of times in production as inline assembly encoded in other languages.
(I vaguely recall Ada inline assembly looking like function calls, with arguments that sometimes referenced high-level-language variables.)
Unrelated to that, I distinguish between machine code, which is binary/hex, and assembly as symbolic assembler or macro assembler, which can actually have high-level macros and other niceties.
And one thing I can say for sure. I took assembly language as my second computer course, and it definitely added a lifelong context as to how machines worked, what everything translated to and whether it was fast or efficient.
HeyLaughingBoy · 1d ago
LOL. As a poor college student, I couldn't even afford an assembler. Programming my CoCo (Radio Shack Color Computer) had to be done either with the built-in BASIC, or by POKEing in machine codes for programs that I hand assembled. One of the nice things about Motorola (CoCo was based on MC6809E) assembly languages is that the processors were very regular and it was easy to remember the opcodes and operation structures.
A friend of mine who also had a CoCo wrote an assembler as a term project.
kragen · 1d ago
Programming in assembly is slow. It takes a long time to make things that way; as Julia Ecklar sings, it's kind of like construction work with a toothpick for a tool. (https://www.youtube.com/watch?v=WZCs4Eyalxc) But that's also true of knitting (https://journal.stuffwithstuff.com/2025/05/30/consider-knitt...), crochet, plasterwork, childrearing, calligraphy, gardening, carving marble, hand-soldering electronics, watching sunrises, and solving crossword puzzles.
If you have a six-day deadline, probably it would be better to use a high-level language instead.
But, when you have time for them, all of these things are intrinsically rewarding. Not all the time! And not for everyone! But for some of us, some of the time, they can all be very enjoyable. And sometimes that slow effort can achieve a result that you can't get any other way.
I haven't written that much assembly, myself. Much less than you have. If I had to write everything in assembly for years, maybe I wouldn't enjoy it anymore. I've written a web server, a Tetris game, some bytecode interpreters, a threading library, a 64-byte VGA graphics demo, a sort of skeletal music synthesizer, and an interpreter for an object-oriented language with pattern-matching and multiple dispatch, as well as a couple of compilers in high-level languages targeting assembly or machine code. All of these were either 8086, ARM, RISC-V, i386, or amd64; I never had to suffer through 6809 or various microcontrollers.
Maybe most important, I've never written assembly code that someone else depended on working. Those programs I've mostly written in Python, which I regret now. It's much faster that way. However, I've found knowing assembly useful in practice for debugging C and C++ programs.
I think that a farmer who says, "For the vast majority of consumers, gardening offers absolutely no benefit," is missing the point. It's not about easier access to parsley and chives. Similarly for an author who says, "For the vast majority of readers, solving crossword puzzles offers absolutely no benefit."
So I don't think assembly sucks.
alcover · 1d ago
> as Julia Ecklar sings, it's kind of like construction work with a toothpick for a tool.
I was taught assembler
in my second year of school.
It's kinda like construction work —
with a toothpick for a tool.
So when I made my senior year,
I threw my code away,
And learned the way to program
that I still prefer today.
Now, some folks on the Internet
put their faith in C++.
They swear that it's so powerful,
it's what God used for us.
And maybe it lets mortals dredge
their objects from the C.
But I think that explains
why only God can make a tree.
For God wrote in Lisp code
When he filled the leaves with green.
The fractal flowers and recursive roots:
The most lovely hack I've seen.
And when I ponder snowflakes,
never finding two the same,
I know God likes a language
with its own four-letter name.
Now, I've used a SUN under Unix,
so I've seen what C can hold.
I've surfed for Perls, found what Fortran's for,
Got that Java stuff down cold.
Though the chance that I'd write COBOL code
is a SNOBOL's chance in Hell.
And I basically hate hieroglyphs,
so I won't use APL.
Now, God must know all these languages,
and a few I haven't named.
But the Lord made sure, when each sparrow falls,
that its flesh will be reclaimed.
And the Lord could not count grains of sand
with a 32-bit word.
Who knows where we would go to
if Lisp weren't what he preferred?
And God wrote in Lisp code
Every creature great and small.
Don't search the disk drive for man.c,
When the listing's on the wall.
And when I watch the lightning burn
Unbelievers to a crisp,
I know God had six days to work,
So he wrote it all in Lisp.
Yes, God had a deadline.
So he wrote it all in Lisp.
drob518 · 1d ago
True. I love assembly language, but I write my code in Clojure.
chubot · 1d ago
That's kind of how I feel about C. C is fun, because you get to see "everything"
But C is slow to create -- it is like using a toothpick
Writing from scratch is slow, and using C libraries also sucks. Certainly libc sucks, e.g. returning pointers to static buffers, global vars for Unicode, etc.
So yeah I have never written Assembly that anybody needs to work, but I think of it as "next level slow"
---
Probably the main case where C is nice is where you are working for a company that has developed high quality infrastructure over decades. And I doubt there is any such company in existence for Assembly
kragen · 1d ago
Yeah, from the point of view of Python or the Bourne shell, assembly is C². Like E/m.
za3k · 1d ago
Good joke. E/m looks almost exactly like the programming language Elm though so took me a bit.
wat10000 · 1d ago
I've written assembly in a production environment. I love it and wish I could do more.
But the context where I'm doing it is very different from the context where you had to write a division routine from scratch! We never use assembly where a higher-level language would be good enough. It's only used for things that can't be written in C at all, either because it needs an instruction that the C compiler won't emit, or it involves some special calling convention that C can't express.
However, I read assembly in production all the time. Literally every day on the job. It's absolutely essential for crashes that won't reproduce locally, issues caused by compiler bugs, or extremely sensitive performance work. Now, lots of programmers very rarely have to deal with those sorts of things, but when it comes up, they'll be asking for help from the ones who know this stuff.
xxpor · 1d ago
+1 on reading.
I hardly consider myself an expert ARM ASM programmer (or even an amateur...), but a baseline ability to read it, even if you have to look up a bunch of instructions every time, can be super useful for performance work, especially if you have the abstract computer-engineering know-how to back it up.
For example, it turns out that gcc 7.3 (for arm64) doesn't optimize
foo() ? bar() : baz();
the same as
if (foo()) {
    bar();
} else {
    baz();
}
!
The former was compiled into a branchless set of instructions, while the latter had a branch!
ferguess_k · 1d ago
Just curious, what kind of work do you do? Sounds like an SRE at a FAANG dealing with production systems. I work as a DE and the lowest-level thing I have to read is a JVM dump from Spark. Man, I envy you.
wat10000 · 1d ago
Close, OS development at a FAANG.
ajross · 1d ago
> Reading this thread leaves me with the impression that most posters advocating learning assembly language have never had to use it in a production environment. It sucks!
It absolutely sucks. But it's not scary. In my world (Zephyr RTOS kernel) I routinely see people go through all kinds of contortions to build abstractions around what are actually pretty straightforward hardware interfaces. Like, here's an interrupt controller with four registers. And here's a C API in the HAL with 28 functions and a bunch of function pointer typedefs and handler mechanics you're supposed to use to talk to it. Stuff like that.
It's really common to see giant abstractions built around things that could literally be two-instruction inline assembly blocks. And the reason is that everyone is scared of "__asm__".
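For the curious, a hedged sketch of the kind of two-instruction block being described, using GCC extended asm on an ARM Cortex-M (the function names are made up, not Zephyr's API):

    #include <stdint.h>

    /* Save the current PRIMASK state and disable interrupts. */
    static inline uint32_t cpu_irq_save(void)
    {
        uint32_t key;
        __asm__ volatile("mrs %0, PRIMASK\n\t"   /* read current interrupt mask */
                         "cpsid i"               /* disable interrupts */
                         : "=r"(key)
                         :
                         : "memory");
        return key;
    }

    /* Restore the previously saved interrupt state. */
    static inline void cpu_irq_restore(uint32_t key)
    {
        __asm__ volatile("msr PRIMASK, %0"       /* write the saved mask back */
                         :
                         : "r"(key)
                         : "memory");
    }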
K0balt · 1d ago
This exactly. If you are doing embedded programming, direct register access and manipulation is often a far, far superior option, and you don’t have to be some kind of “assembly sensei” to do it if you just have a very basic idea of how things work. It doesn’t mean you write programs in assembly… it means when you are needing to do something that the hardware is going to do for you anyway, you know how to ask for it directly without having to load a 2Kloc library. This is especially true when using python or JS bytecode on the MCU. Actually, using python with assembly is really the best of both worlds in many cases.
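A minimal sketch of what "direct register access" looks like from C; the address and bit position below are invented for illustration, the real ones come from the part's datasheet or vendor header:

    #include <stdint.h>

    /* Hypothetical memory-mapped GPIO output register; address and bit are
       made up for the example. */
    #define GPIO_OUT  (*(volatile uint32_t *)0x40010C0Cu)
    #define LED_BIT   (1u << 5)

    static inline void led_on(void)     { GPIO_OUT |=  LED_BIT; }
    static inline void led_off(void)    { GPIO_OUT &= ~LED_BIT; }
    static inline void led_toggle(void) { GPIO_OUT ^=  LED_BIT; }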
HeyLaughingBoy · 1d ago
Yeah, but doesn't the Zephyr device tree abstraction actually expect you to do that? I mean, I appreciate the elegance and the desire for portability, but all I could think of as I read those docs was "here's a couple months of work for something that should take ten minutes."
ajross · 1d ago
DTS[1] is there to parametrize things (like MMIO addresses and IRQ assignments) that need to be parametrized. The discussion here is about needless abstraction at the level of C code.
In, say, the interrupt controller case: there's a lot of value in having a generic way for boards to declare what the IRQ for a device actually is. But the driver should be stuffing that into the hardware or masking interrupt priorities or whatever using appropriately constructed assembly where needed, and not a needless layer of C code that just wraps the asm blocks anyway.
[1] And to be clear I'm not that much of a devicetree booster and have lots of complaints within the space, both about the technology itself and the framework with which it's applied. But none that would impact the point here.
mystified5016 · 1d ago
If you have a less-popular CPU, compilers today can be utter trash. GCC doesn't understand the TinyAVR core and emits insane assembly. Like iterating an array, instead of putting the array pointer in the Z register and using the atomic load-and-increment instructions, it will add to the pointer, read, subtract from the pointer, loop. It also uses the slower load instruction. Overall, looping over an array in C is 4 times slower than assembly, and consumes three times as much program space. Try examining the assembly from your next program, you'll probably be quite surprised at how awful it is.
I had to implement Morton ordering on this platform. The canonical C for this blows up to over 300 instructions per iteration. I unrolled the loop, used special CPU hardware and got the entire thing in under 100 instructions.
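To make that concrete, a hedged guess at the shape of the "canonical C" in question: a straightforward bit-interleaving loop for a 2D Morton code, full of 32-bit shifts that an 8-bit AVR has to synthesize a byte at a time.

    #include <stdint.h>

    /* Naive 2D Morton (Z-order) encoding: interleave the bits of x and y.
       On an 8-bit AVR every one of these 32-bit shifts and ORs expands into
       many instructions, which is why a hand-tuned version wins so easily. */
    uint32_t morton2d(uint16_t x, uint16_t y)
    {
        uint32_t z = 0;
        for (int i = 0; i < 16; i++) {
            z |= (uint32_t)((x >> i) & 1u) << (2 * i);      /* even bit positions */
            z |= (uint32_t)((y >> i) & 1u) << (2 * i + 1);  /* odd bit positions */
        }
        return z;
    }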
Compilers, even modern ones, are not magic and only understand CPUs popular enough to receive specific attention from compiler devs. If your CPU is unpopular, you're doing optimizations yourself.
Assembly doesn't matter to arduino script kiddies, but it's still quite important if you care at all about execution speed, binary size, resource usage.
commandlinefan · 1d ago
yeah, it's not _scary_. It's just tedious.
Lerc · 1d ago
I have tried to convince people that ASM is reasonable as a first-stage teaching language. Its reputation as a nearly mystical art practiced by a few doesn't help. The thing is, instructions are simple. Getting them to do things is not hard; the difficulty comes from tasks exceeding a scale where you can think about things at their most basic level.
It quickly becomes tedious to do large programs, not really hard, just unmanageable, which is precisely why it should be taught as a first language. You learn how to do simple things and you learn why programming languages are used. You teach the problem that is being solved before teaching the more advanced programming concepts that solve the problem.
tsimionescu · 1d ago
The biggest problem with using ASM as a first language to teach beginners is that it is extremely tedious, error prone, and sensitive to details. It is also unstructured, it uses entirely different control flow primitives than any language they will learn in the future, meaning they will not be well prepared for learning a real language that does scale to programs more complex than a few additions and calling an OS output routine.
So why teach someone a language that doesn't have if, while, (local) variables, scopes, types, nor even real function calls?
It's a very nice exercise for understanding how a computer functions, and it has a clear role in education - I'm not arguing people shouldn't learn it at all. But I think it's a terrible first language to learn.
MobiusHorizons · 1d ago
Because these are the primitives that are in use when programming in any language, and there is a benefit to learning the primitives before learning higher-level abstractions. For instance, we teach arithmetic before calculus.
I see lots of people become pretty helpless when their framework isn't working as expected or an abstraction becomes leaky. Most people don't really need to know assembly in order to get past this, but the general intuition of "there is something underneath the abstraction that I could understand" is very useful.
jcranmer · 1d ago
The primitives of control flow in programming languages are sequencing, if, while, for, switch, return, and "early return" (goto restricted to exit a containing block). We might compile these into a form that represents everything using conditional jumps, unconditional jumps, and jump tables, but that's not how people think about it, definitely not at the level of programming languages (and even in the compiler IR phase, we're often mentally retranslating the conditional jump/unconditional jump model back into the high-level control flows).
And I could go on with other topics. High-level languages, even something like C, are just a completely different model of looking at the world from machine language, and truly understanding how machines work is actually quite an alien model. There's a reason that people try to pretend that C is portable assembler rather than actually trying to work with a true portable assembler language.
The relationship you're looking for is not arithmetic to calculus, but set theory to arithmetic. Yes, you can build the axioms of arithmetic on top of set theory as a purer basis. But people don't think about arithmetic in terms of set theory, and we certainly don't try to teach set theory before arithmetic.
jader201 · 1d ago
> For instance we teach arithmetic before calculus.
I don’t think that’s a fitting analogy.
Nearly everyone on the planet uses (basic) arithmetic. Very few use calculus.
By contrast, very few (programmers) use ASM, but nearly all of them use higher level languages.
nightski · 1d ago
I'd say every programmer uses the constructs in assembly. Just because we have layers and layers of abstraction on top of that doesn't mean it's not valuable to understand the far simpler world that it all sits upon (granted, I understand it sits upon machine code, not assembly, but assembly is probably the closest thing to machine code that is human-interpretable without significant effort).
My first language was BASIC on a V-tech. It's not quite the same but it still was such a fantastic starting point.
I've tried luring people into programming with Python for example and see them get frustrated by the amount of abstractions and indirection going on. I am really starting to like this idea of starting with assembly.
jader201 · 1d ago
Yeah, my point wasn't that learning ASM isn't valuable, or that we don't use the constructs in higher level languages.
My point is that the analogy with arithmetic vs. calculus doesn't hold.
Nearly everyone uses basic arithmetic in everyday life, and a tiny fraction of those use calculus.
No programmer needs to learn ASM to be able to know how to use higher level languages. And a tiny fraction of them are using actual ASM in their everyday jobs.
Also, I think you can still learn the basic constructs of how languages work at a lower level without ever learning actual ASM. There's no way you can learn calculus without an understanding of arithmetic.
wat10000 · 1d ago
Most people don't use basic arithmetic in everyday life anymore. They use machines which use arithmetic. Just like most programmers don't use assembly, they use programs which use assembly. In both cases, understanding what's going on is very useful even if you aren't directly touching that layer yourself.
tsimionescu · 1d ago
By this token, everyone who counts apples in a market is using the axioms of Peano arithmetic every day.
The fact that our high level languages compile down to assembly doesn't mean we use assembly in any meaningful sense. My C code will be correct or not based on whether it conforms to the semantics of the C abstract machine, regardless of whether those semantics match the semantics of the assembly language that it happens to compile down to. Even worse, code that is perfectly valid in assembler may be invalid C, even if the C code compiles down to that same assembler code. The clearest example is adding 1 to an int variable that happens to have the value MAX_INT. This code will often compile down to "add ax, 1" and set the variable to MIN_INT, but it is nevertheless invalid C code and the compiler will assume this case can never happen.
This relationship between a programming language and assembler is even more tenuous for languages designed to run on heavy runtimes, like Java or JavaScript.
tsimionescu · 1d ago
I think comparing assembly with arithmetic is dead wrong. Arithmetic is something that you use constantly in virtually any mathematical activity you will ever do, at least at the under-graduate level. There is literally 0 calculus, statistics, or algebra you could understand if you didn't know arithmetic.
In contrast, you can have a very successful, very advanced career in computer science or in programming without once in your life touching a line of assembler code. It's not very likely, and you'll be all the poorer for it, but it's certainly possible.
Assembly language is much more like learning the foundations of mathematics, like Hilbert's program (except, of course, historically that came a few millennia after).
wavemode · 1d ago
> extremely tedious, error prone, and sensitive to details
I've taught people Python as their first language, and this was their exact opinion of it.
When you're an experienced programmer you tend to have a poor gauge of how newcomers internalize things. For people who are brand new it is basically all noise. We're just trying to gradually get them used to the noise. Getting used to the noise while also trying to figure out the difference between strings, numbers, booleans, lists, etc. is more difficult for newcomers than many people realize. Even the concept of scoping can sometimes be too high-level for a beginner, IME.
I like asm from the perspective that, its semantics are extremely simple to explain. And JMP (GOTO) maps cleanly from the flowchart model of programming that most people intuit first.
cameldrv · 1d ago
IMO Python used to be a great first language, but it's gotten much more complicated over the years. When I'm teaching programming, I want an absolute minimum number of things where I have to say "don't worry about that, it's just boilerplate, you'll learn what it means later."
In particular, Python having generators and making range() be a generator means that in order to fully explain a simple for loop that's supposed to do something X times, I have to explain generators, which are conceptually complicated. When range() just returned a list, it was much easier to explain that it was iterating over a list that I could actually see.
ziml77 · 1d ago
It's probably best to act like more complex things are just syntax at the start. Leave the fact that something like range is just a normal function that returns a generator for later on.
Like if range was used like this:
for i in range 1 to 100:
pass
No one is going to ask how that works internally, so I don't think it's necessary to treat range(1, 100) any differently. For this usage it makes no difference if it's a generator, a list (excepting performance on large ranges), or if the local variable is incremented directly like a C-style for loop.
pjmlp · 1d ago
Python has made it into my toolbox around version 1.6.
It was already much more powerful than most people writing simple shell script replacements were aware of.
Thing is, very few bother to read the reference manuals cover to cover.
tsimionescu · 1d ago
I do realize how difficult this all is, I still have some recollection from how I started to program and how alien it all seemed. And note that I first started with 4 years of C in high-school
However, I don't agree at all that having strings and numbers as different things was ever a problem. On the contrary, explaining that the same data can be interpreted as both 40 and "0" is mystifying and very hard to grok, in my experience. And don't get me started on how hard it is to conceptualize pointers. Or working with the (implicit) stack in assembly instead of being able to use named variables.
wat10000 · 1d ago
My kid is just finishing up a high school intro CS class. A full school year in, and they still have trouble with the fact that their variable and type names must have the exact same capitalization everywhere they're used.
yetihehe · 1d ago
> So why teach someone a language that doesn't have if, while, (local) variables, scopes, types, nor even real function calls?
You can teach them how to implement function calls, variables and loops using assembly, to show them how they work under the hood and how they should be thankful for having simple if in their high level languages like C.
tsimionescu · 1d ago
That often leaves people with very bad mental models of how programs actually compile in modern optimizing compilers and in modern operating systems (e.g. people end up believing that variables always live on the stack, that function parameters are passed on the stack, that loops are executed in the same way regardless of how you write them, etc).
dmwilcox · 1d ago
Think about how far they've come if you get them to have these "misconceived" ideas!
They would understand code and data are in the same place, that all flow control effectively boils down to a jump, and they have a _more_ accurate picture of the inside of a machine than anyone starting out with Python or JavaScript could hope for.
Having spent 25 years to get to assembler, I wish I'd started it sooner. It's truly a lovely way to look at the machine. I'll definitely be teaching my kids how to program in assembly first (probably x86-16 using DOS as a program launcher)
horsawlarway · 1d ago
They have to want to understand any of those things first.
Be very careful that you're not going to just kill enthusiasm for programming as an activity entirely with this approach.
I see this happen a lot (I did a lot of robotics/programming mentoring), and then adults wonder why their kids don't like any of the stuff they like - and the reason is that the adult was really a dick about making them learn the things the adult liked, and ignored most of the fun aspects of the activity, or the wishes of the kid.
zahlman · 1d ago
> and then adults wonder why their kids don't like any of the stuff they like - and the reason is that the adult was really a dick about making them learn the things the adult liked
This can be done with any programming language.
The point of teaching assembly isn't for someone to memorize all the details of any particular instruction set. It's about conceiving of the decomposition of problems on that level. It's about understanding what data is, so that when the student later learns a higher-level programming language, it sets expectations for what happens when you open a file, what kind of processing has to be done, etc. It's the basis for understanding abstractions that are built upon all those 1s and 0s, about the way that a program implicitly assigns semantics to them.
(This is best done with a toy assembly language, not one that comes anywhere near reflecting the complexity of modern CPUs. Anything to do with the practical considerations of modern optimizing compilers is also missing the point by a mile.)
horsawlarway · 1d ago
> It's about conceiving of the decomposition of problems on that level. It's about understanding what data is, so that when the student later learns a higher-level programming language, it sets expectations for what happens when you open a file, what kind of processing has to be done, etc. It's the basis for understanding abstractions that are built upon all those 1s and 0s, about the way that a program implicitly assigns semantics to them.
These are all things that are your goals, as the adult and teacher.
The student who wants to engage with programming and software likely has other goals in mind.
Skip all the crap you just mentioned, focus on helping them achieve their goals. I think you'll find those are usually more in the realm of "I want to make a game" or "I want to show my stuff to friends on a website" or "I want to make the computer play music" or [insert other high level objective that's not "learn about bits and bytes"].
Will that involve the stuff you mentioned? Sure will, and a student who feels like they're achieving the thing they want by learning that stuff is engaged.
But a student who gets to just sit there and listen to you drone on and on about "abstractions" and "instructions sets" and "data is code" and "semantics" all to end up with a complicated file that functionally just adds two numbers together? That student is usually bored and disengaged.
zahlman · 1d ago
> The student who wants to engage with programming and software likely has other goals in mind.
And the student who doesn't learn these concepts will inevitably run into a roadblock soon thereafter.
> But a student who gets to just sit there and listen to you drone on and on about "abstractions" and "instructions sets" and "data is code" and "semantics"
You don't "drone on" about these things. You introduce them as it makes sense, by pointing things out about the first programs as they are developed. You don't talk about abstracting things and assigning semantics; you do it, and then point out what you've done.
horsawlarway · 1d ago
> You introduce them as it makes sense
So we agree that maybe dragging them straight into assembly as their first language (because it's good at teaching those things) isn't the best strategy?
At no point will I argue against learning it. Knowing how machines work is great, and I think going "down" the stack benefits a lot of developers ONCE they're developers and have an understanding that programming and computers are things they like and want to do.
But first you have to foster enthusiasm and nurture interest. You don't do that by declaring that you're going to teach your kids assembly... you do that by listening to your kids interests in the space and helping them achieve their goals.
Pet_Ant · 1d ago
They will then run into a roadblock when they’ve already done something fun and are invested in the project.
Reading is easier when you know your latin roots, but we don’t make kids speak latin before See Spot Run even if it would help.
zahlman · 1d ago
> and they have a _more_ accurate picture of the inside of a machine than anyone starting out with Python or JavaScript could hope for.
Frankly, a more accurate picture than those starting in C have, too.
t-3 · 1d ago
After learning asm, teach compilers and have them think about how to generate code stupidly, then think about how to generate efficient code. If you don't want people thinking about the stack, just teach them RISC rather than x86.
tsimionescu · 1d ago
So you think people should start their programming journey by writing a compiler in assembly? What exactly should it compile, if they haven't learned any other language?
t-3 · 1d ago
It's relatively common in university CS courses to build a compiler after the basic intro and architecture courses. It's one of the simpler projects (yes, really, compilers are rather simple; optimization is the hard part) that involves a lot of high-level concepts and exposes a lot of the thought behind things that are otherwise obscure. A compiler for a simple 4-function calculator is enough to start with, then higher-level constructs can be added easily while introducing them.
tsimionescu · 1d ago
In my university, compilers were a third-year course. And they're anything but simple - even the most well solved part of them, parsing, used to be a research-level problem until fairly recently. To build even a simple non-optimizing compiler you have to understand a whole lot of other fundamentals, such as various data structures, that are much, much harder to understand in assembler than in any higher level language, even C.
bitwize · 1d ago
But if they know assembly, they can look at actual compiler output and form the correct mental models...
diggan · 1d ago
> it is extremely tedious, error prone, and sensitive to details.
That sounds like the perfect beginner language! If they survive that experience, they'll do very well in almost any type of programming, as it's mostly the same just a tiny bit less tedious. A bit like "hardening" but for programmers.
HeyLaughingBoy · 1d ago
Perhaps if you want to gatekeep for the most stubborn individuals, but you'll lose a lot of talent that way.
diggan · 1d ago
Isn't programming already mostly for the most stubborn individuals? I don't know many non-programmers who would willingly bang their head against the same problem for days, especially when in front of a computer.
I guess it's as much "gatekeeping" as being required to formulate plans and balance tradeoffs is "gatekeeping".
horsawlarway · 1d ago
So much this.
This is like learning to read by first being asked to memorize all the rules of grammar and being quizzed on them, or being forced to learn all the ins and outs of book binding and ink production.
It's tedious, unproductive, miserable.
There's very little reward for a lot of complexity, and the complexity isn't the "stimulating" complexity of thinking through a problem; it's complexity in the sense of "I put the wrong bit in the wrong spot and everything is broken with very little guidance on why, and I don't have the mental model to even understand".
There's a perfectly fine time to learn assembly and machine instructions, and they're useful skills to have - but they really don't need to be present at the beginning of the learning process.
---
My suggestion is to go even farther the other way. Start at the "I can make a real thing happen in the real world with code" stage as soon as possible.
Kids & adults both light up when they realize they can make motor turn, or an LED blink with code.
It's similarly "low level" in that there isn't much going on and they'll end up learning more about computers as machines, but much more satisfying and rewarding.
skydhash · 1d ago
The best way to go about that is to use a simulator for an old cpu like EdSim51[0]. Can do a lot of things with just a few lines of code.
> it's complexity in the sense of "I put the wrong bit in the wrong spot and everything is broken with very little guidance on why, and I don't have the mental model to even understand
That's the nice thing about assembly: it always works, but the result may not be as expected. But instead of having a whole lot of magic between what is happening and how you model it, it's easy to reason about the program. You don't have to deal with stack traces, types, garbage collection and null pointer exceptions. Execution and programming share the same mental model: linear unless you say otherwise.
You can start with assembly and then switch to C or Python and tell them: For bigger project, assembly is tedious and this is what we invented instead.
I vote for microcontrollers. I learned assembly on Atmel's AVR and it was easy and straightforward because there's very little abstraction underneath: there's no OS, no heap, no multiprocessing and no context switching, no syscalls and no fat libraries, and you get direct access to the hardware. You also receive actual physical feedback: doing a tiny bit of bit fiddling gets you a blinking LED or whatever.
AVR's assembly is quite mediocre, with 120+ something instructions, with lots of duplication among them (IIRC — it's been... many years already), and some people swore by PIC which only had 35 instructions to remember. But it was still easier than lobotomizing oneself by trying to write a Win32 application in x86 assembly (which came later... and went to the trash bin quickly while microcontrollers stuck for much longer).
tsimionescu · 1d ago
No, assembly doesn't "always work". It almost always does something, true, which is the worst thing about it: instead of getting some error, you get to figure out why the value at the end of your program is not the value you expected, and which of the hundred instructions before that caused it to be wrong.
mousethatroared · 1d ago
"is that it is extremely tedious, error prone, and sensitive to details. It is also unstructured,"
That's why it's such an important first language! Pedagogically it's the foundation motivating all the fancy things languages give you.
You don't teach a kid to cut wood with a table saw. You give them a hand saw!
tsimionescu · 1d ago
No, it is not the foundation motivating what other languages give you, not at all.
Programming languages are usually designed based on formal semantics. They include constructs that have been found either through experience or certain formal reasons to be good ways to structure programs.
Haskell's lazy evaluation model, for example, has no relationship to assembly code. It was not in any way designed with thought to how assembly code works, it was designed to have certain desirable theoretical properties like referential transparency.
It's also important to realize that there is no "assembly language". Each processor family has its own specific assembly code with its own particular semantics that may vary wildly from any other processor. Not to mention, there are abstract assembly codes like WebAssembly or JVM bytecode, which often have even more alien semantics.
mousethatroared · 1d ago
You don't teach lambda calculus to a first grader.
You don't teach Haskell to seventh grader.
But 4 bit assembly driving a few LEDs? That works
ziml77 · 1d ago
You give them a hand saw because power tools are far easier to inflict serious injuries with. But if you're teaching a kid who's old enough, there's no reason to start on a hand saw if you have the power tools available.
mousethatroared · 1d ago
You don't give a noob a table saw because he'll never understand why a hand saw is useful. He'll never appreciate that more often than not, the hand saw will be easier and quicker.
But hey, what do I know. I'm the kind of guy who gets to play with TMA and seriously considers purchasing hydrazine for work. What do I know?
taco_emoji · 1d ago
> You don't teach a kid to cut wood with a table saw. You give them a hand saw!
Okay, but that's not for pedagogical reasons, it's because power saws are MUCH more dangerous than hand saws.
Contrariwise, you don't teach a kid to drill wood with a brace & bit, because a power drill is easier to use.
flohofwoe · 1d ago
Starting with assembly makes it pretty clear why higher level languages had been invented. E.g. a speed run through computing:
- machine code
- assembly
- Lisp and Forth
- C
- Pascal
- maybe a short detour into OOP and functional languages
...but in the end, all you need to understand for programming computers are "sequences, conditions and loops" (that's what my computer club teacher used to say - still good advice).
TheOtherHobbes · 1d ago
I'd change the end of that list to C, Pascal, Lisp, Python.
But in the end no one learns "assembler". Everyone learns a specific ISA, and they all have different strengths and limitations. Assembler on a 36-bit PDP-10, with 16 registers and native floating point, is a completely different experience to assembler on a Z80 with an 8-bit accumulator and no multiply or divide.
You can learn about the heap and the stack and registers and branches and jumps on both, but you're still thinking in terms of toy matchstick architecture, not modern building design.
tsimionescu · 1d ago
> but in the end, all you need to understand for programming computers are "sequences, conditions and loops"
I fully agree - and assembly language teaches you precisely 0 of these.
flohofwoe · 1d ago
Well, Z80 has DJNZ which is specifically designed for loops ;)
I think there's value in understanding how high level language constructs like if-else and loops can all be constructed from simple conditional jumps, and that a function call is just a CALL/RET pair with the return address being stored on the stack.
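(As a sketch of that mapping - my own illustration, in C rather than assembly: the same counted loop written once with a structured for statement and once in the label-plus-conditional-jump shape it lowers to; on the Z80, the decrement-and-branch pair at the end is exactly what DJNZ does in one instruction:)

    #include <stdio.h>

    int main(void) {
        /* the structured version */
        for (int i = 10; i != 0; i--)
            printf("tick\n");

        /* the same loop the way the machine sees it: a label, a body,
           a decrement, and a conditional jump back. On the Z80, the
           decrement plus jump-if-not-zero is a single DJNZ instruction. */
        int b = 10;
    loop:
        printf("tick\n");
        b--;
        if (b != 0) goto loop;

        return 0;
    }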
Also, structured programming had to be invented, and working in assembly code makes it clearer why.
It's also food for thought why CPU ISAs never made the leap to structured programming.
tsimionescu · 1d ago
Various assembly languages have various exotic features. I didn't even get into discussing which particular assembly we may want to talk about. Still, DJNZ is a conditional jump, not a loop: you tell it where to jump if some counter is not yet 0, you don't tell it which instructions to repeat. The two are of course isomorphic concepts, but still different.
And I absolutely agree there is value in understanding the mechanics of how languages are executed. What I disagree with is that this is necessary for being a good programmer, and that it is useful as an initial learning experience.
Programming itself didn't start with assembly. It started with pseudo-code, which was always expressed in a high-level format, and then manually translated to some form of assembly language for a particular physical machine. But people have never designed their programs in terms of assembly - they have always designed them in higher level terms.
mystified5016 · 14h ago
Controversial opinion but we should be teaching new programmers how a CPU works and not hand-wave the physical machine away to the cloud.
Not doing this is how you get Electron.
wang_li · 1d ago
We teach math this way. Addition and subtraction. Then multiplication. Then division. Fractions. Once those are understood we start diversifying and teaching different techniques where these make up the building blocks, statistics, finance, algebra, etc.
It may put people off a programming career, but perhaps that is good. There are a lot of people who work in programming who don't understand the machines they use, who don't understand algorithms and data structures, and who have no idea of the impact of latency, of memory use, etc. Their entire career is predicated on never having to solve a problem that hasn't already been solved in general terms.
tsimionescu · 1d ago
We teach math starting with basic arithmetic - that is, starting from the middle. We don't go explaining what numbers are in terms of sets, we don't teach Peano arithmetic or other theories that can give logical definitions of arithmetic from the ground up.
Plus, it is literally impossible to do any kind of math without knowing arithmetic. It is very possible to build a modestly advanced career knowing no assembly language.
9rx · 1d ago
> We teach math this way. Addition and subtraction. Then multiplication. Then division
The first graders in my neighbourhood school are currently learning about probability. While they did cover addition earlier in the year, they have not yet delved into topics like multiplication, fractions, or anything of that sort. What you suggest is how things were back in my day, to be fair, but it is no longer the case.
flohofwoe · 1d ago
IMO It depends a lot on the assembly flavour.
The best ISA for learning is probably the Motorola 68000, followed by some 8-bit CPUs (6502, 6809, Z80), also probably ARM1, although I never had to deal with it. I always thought that x86 assembly is ugly (no matter if Intel or AT&T).
> It quickly becomes tedious to do large programs
IME with modern tooling, assembly coding can be surprisingly productive. For instance I wrote a VSCode extension for 8-bit home computers [1], and dog-fooded a little demo with it [2], and that felt a lot more productive than back in the day with an on-device assembler (or even typing in machine code by numbers).
I think you can build environments that give immediate feedback and the ability to do real things quickly in ASM. I would still recommend moving swiftly on to something higher level as soon as it started to feel like a grind.
tsimionescu · 1d ago
Sure, but learning an old ISA can leave you with a very very wrong idea about how modern processors work. Even x86 assembly paints a very misleading image of how modern processors actually work. For example, someone learning x86-64 assembly will likely believe all of the following:
- assembly instructions are executed in the order they appear in the source code
- an x86 processor only has a handful of registers
- writing to a register is an instruction like any other and will take roughly the same time
- the largest registers on an x86 processor are 64-bit
ThrowawayR2 · 1d ago
All of which are completely irrelevant implementation details hidden behind the ISA. The x86-64 ISA promises execution of instructions in the specified order, a certain number of registers, etc. and that's all they need to know.
tsimionescu · 1d ago
The claim is that learning assembler first will build a better intuition about the inner workings, and thus performance, of the processor.
The reality is that any assembler simple enough to be taught as your first contact with programming will leave you with a wrong intuition about how modern processors work, and thus a wrong intuition about the relative performance of various operations.
Having no intuition about something is better than building a bad intuition, especially at the beginning of your learning journey.
Someone · 1d ago
> The x86-64 ISA promises execution of instructions in the specified order
It doesn’t, and out-of-order CPUs don’t do that. https://en.wikipedia.org/wiki/Out-of-order_execution: “In this paradigm, a processor executes instructions in an order governed by the availability of input data and execution units, rather than by their original order in a program.”
flohofwoe · 1d ago
Out-of-order execution is an internal optimization, from the outside results are still guaranteed to be available in order - e.g. the instruction stream appears to be executed in order when observing the CPU from the outside.
For instance you don't need to be afraid that an instruction uses garbage inputs just because a previous instruction hadn't finished computing an input value to the instruction. At worst you'll get a pipeline stall if the CPU can't fill the gap with out-of-order executed instructions.
On some CPUs it does get tricky once memory is involved though (on ARM, but not on x86).
Someone · 1d ago
> from the outside results are still guaranteed to be available in order - e.g. the instruction stream appears to be executed in order when observing the CPU from the outside.
> […]
> On some CPUs it does get tricky once memory is involved though (on ARM, but not on x86).
“Among the commonly used architectures, x86-64 processors have the strongest memory order, but may still defer memory store instructions until after memory load instructions.”
They will be disabused of any of those notions simply by reading the relevant portions of the architecture handbook. In a pedagogical environment that's very simple to arrange.
tsimionescu · 1d ago
Someone who is just learning to program will not be well served by reading a modern CPU architecture handbook. It is far too complex for someone who doesn't even know yet what a graph is, for example.
t-3 · 5h ago
They don't have to read the whole thing. Excerpts or specific pages/sections, presented under the guidance of an experienced teacher or mentor, are perfectly digestible. The instruction description pages are the best documentation for looking up how to use instructions as well.
flohofwoe · 1d ago
Peeking under the hood is a later step after getting comfortable with assembly coding. E.g. none of those details are really relevant when starting out, instead it makes a lot of sense to do a speed run through computing history in order to really understand why modern CPUs (and computers as a whole) work like they do.
whobre · 1d ago
I agree that M68k is nice, as are the 8-bit ones you mention. I just find it strange that you like Z80 and dislike x86 - they are fundamentally not that different and both are descended from 8080.
flohofwoe · 1d ago
Yeah the Z80 instruction set is quite messy (mainly because it had to fill gaps of the 8080 instruction set for backward compatibility). But as an evolution of the 8080 instruction set, the Z80 is still cleaner than x86 (IMHO!).
I started with Z-80 assembly, then BASIC, then 6502 assembly, then higher-level languages like C and perl, and I think the assembly gave me a useful foundation for what was going on under the hood. I'm not sure I'd even call assembly a "language" in the sense of the others. It has instructions, not statements, and there's really no syntax.
If I were teaching a general-interest programming course, I'd probably start with just a bit of BASIC to introduce a few broad concepts like variables and looping, then a little assembly to say, "And this is what's going on when you do those things," and then move up the chain. Then when they get to something like C and go to look at the assembly it produces for debugging, they'll at least be familiar with concepts like registers and branching. So not quite the order I happened to do it in, but similar.
criddell · 1d ago
If you have a good macro assembler, it is only a little more difficult than C. There's just more to learn up front (things like calling conventions, register usage, etc...).
I wouldn't teach it first, but after a person knows the basics in another language, seeing how it all actually works can be fun.
jcranmer · 1d ago
I was a TA for an intro to assembly language course, which means I got my office hours full of all of the students who struggled with assembly language and had to work with them one-on-one to get them over their roadblocks to pass the class.
Assembly language is not a reasonable first programming language. There's just so many things about it that make it a poor choice for programming instruction.
Chiefly, assembly lacks structure. There's no such thing as variables. There's no such thing as functions. You can fake some of this stuff with convention, but if you make mistakes--and students in intro-to-programming will make mistakes--there is nothing that will tell you you did something wrong; you just get the wrong result.
ferguess_k · 1d ago
I think in most CS programs, students do learn assembly early on, perhaps not as the first language, but definitely as a second language, as required by most Arch courses.
cameldrv · 1d ago
Personally way back when, I first learned BASIC, then tried to learn C, but didn't get pointers, then learned ASM, and then pointers became obvious, and went back to C. If you're going to be using C or doing anything with hardware, learning ASM IMO is very useful just to understand how the machine really works.
zahlman · 1d ago
> Getting them to do things is not hard, the difficulty comes from tasks exceeding a scale where you can think about things at their most basic level.
Indeed - you don't actually need to work on difficult tasks to get the intellectual benefit. Once you've properly understood what a computer is, you can absorb the ideas of SICP.
vbezhenar · 1d ago
Beginners should have immediate positive feedback. It's not possible with assembly language.
t-3 · 1d ago
It's just as straightforward as in higher level languages, just not quite as interactive as interpreted languages, but I've never seen an "intro to programming" that started in a REPL even when using an interpreted language. Hello world is even shorter and simpler than most languages (in a modern OS environment).
rep_lodsb · 1d ago
The old D86 debugger[1][2] comes close to being a REPL for assembly language, helped me a lot with learning it when I found it on a shareware collection CD as a kid.
Load registers, call DOS or BIOS with 'int', etc. all interactively and with a nice full screen display of registers, flags and memory. Of course entering single instructions to run immediately only gets you so far, but you can also enter short programs into memory and single step through them.
It's too bad nothing like this seems to exist for modern systems! With the CPU virtualization features now available, you could even experiment with ring 0 code without fear of crashing your machine.
You can with a decent monitor that shows the values of registers, can look up values in memory, etc.
lo_zamoyski · 1d ago
> I have tried to convince people that ASM is reasonable as a first stage teaching language.
Unless you're teaching people preparing for engineering hardware perhaps, I think ASM is absolutely the wrong language for this. The first reason is that programming is about problem solving, not fiddling with the details of some particular architecture, and ASM is pretty bad at clearly expressing solutions in the language of the problem domain. Instead of programming in the language of the domain, you're busy flipping bits which are an implementation detail. It is really a language for interfacing with and configuring hardware.
The more insidious result is that teaching ASM will make an idol out of hardware by reinforcing the notion that computer science or programming are about computing devices. It is not. The computing device is totally auxiliary wrt subject matter. It is utterly indispensable practically, yes, but it is not what programming is concerned with per se. It is good for an astronomer to be able to operate his telescope well, but he isn't studying telescopes. Telescope engineers do that.
zahlman · 1d ago
> The first reason is that programming is about problem solving, not fiddling with the details of some particular architecture, and ASM is pretty bad at clearly expressing solutions in the language of the problem domain. Instead of programming in the language of the domain, you're busy flipping bits which are an implementation detail.
"How do I use bits to represent concepts in the problem domain?" is the fundamental, original problem of computer science.
And to teach this, you use much simpler problems.
> ... reinforcing the notion that computer science or programming are about computing devices. It is not.
It is, however, about concepts like binary place-value arithmetic, and using numbers (addresses) as a means of indirection, and about using indirection to structure data, and about being able to represent the instructions themselves as data (such that they can be stored somewhere with the same techniques, even if we don't assume a Von Neumann machine), and (putting those two ideas together) about using a number as a way to track a position in the program, and manipulating that number to alter the flow of the program.
In second year university I learned computer organization more or less in parallel with assembly. And eventually we got to the point of seeing - at least in principle - how a basic CPU could be designed, with its basic components - an ALU, instruction decoder, bus etc.
Similarly:
> It is good for an astronomer to be able to operate his telescope well, but he isn't studying telescopes.
The astronomer is, however, studying light. And should therefore have a basic mental model of what a lens is, how lenses relate to light, how they work, and why telescopes need them.
lo_zamoyski · 19h ago
> "How do I use bits to represent concepts in the problem domain?" is the fundamental, original problem of computer science.
> It is, however, about concepts like binary place-value arithmetic
That is the original problem of using a particular digital machine architecture. One shouldn't confuse the practical/instrumental problems at the time with the field proper. There's nothing special about bits per se. They're an implementation detail. We might study them for practical reasons, we may study the limits of what can be represented by or computed using binary encodings, or efficient ways to do so or whatever, but that's not the central concern of computer science.
> In second year university I learned computer organization more or less in parallel with assembly.
Sure. But just because a CS major learns these things doesn't make it computer science per se. It's interesting to learn, sure, and has practical utility, but particular computer architectures are not the domain of computer science. They're the domain of computer engineering.
> The astronomer is, however, studying light.
No, physicists studying optics study light in this capacity. Astronomers know about light, because knowledge of light is useful for things like computing interstellar distances or determining the composition of stellar objects or for calculating the rate of expansion or whatever. The same goes for knowledge of lenses and telescopes: they learn about them so they can use them, but they don't study them.
cvoss · 1d ago
Ooh, very much disagree with a lot of these assertions. The problem I always encounter when trying to teach programming is that students completely lack an understanding of how to imagine and model the state of the computational system in their heads. This leads to writing code that looks kinda like it should do what the student wants, but betrays the fact that they really don't understand what the code actually means.
In order to successfully program a solution to a problem, it is necessary to understand the system you are working with. A machine-level programming language cuts through the squishiness of that and presents a discrete and concrete system whose state can be fully explained and understood without difficulty. The part where it's all implementation details is the benefit here.
lo_zamoyski · 1d ago
I suspect your background is dominated by imperative languages, because these often bleed low-level, machine concepts into the domain of discourse, causing a conceptual muddle.
From a functional perspective, you see things properly as a matter of language. When you describe to someone some solution in English, do you worry about a "computational system"? When writing proofs or solving mathematical problems using some formal notation, are you thinking of registers? Of course not. You are using the language of the domain with its own rules.
Computer science is firmly rooted in the formal language tradition, but for historical reasons, the machine has assumed a central role it does not rightly possess. The reason students are confused is because they're still beholden to the machine going into the course, causing a compulsion to refer to the machine to know "what's really going on" at the machine level. Instead of thinking of the problem, they are worrying about distracting nonsense.
The reason why your students might feel comforted after you explain the machine model is because they already tacitly expect the machine to play a conceptual role in what they're doing. They stare and think "Okay, but what does this have to do with computers?". The problem is caused by the prior idolization of the machine in the first place.
But machine code and a machine model are not the "really real", with so-called "high-level languages" hovering above them like some illusory phantom that's just a bit of theater put on by 1s and 0s. The language exists in our heads; machines are just instruments for simulating them. And assembly language itself is just another language. Its domain just is, loosely, the machine architecture.
So my view is that introductory classes should beat the machine out of students' heads. There is no computer, no machine. The first few classes in programming should omit the computer and begin with paper and pencil and a small toy language (a pure, lispy language tends to be very good here). They should gain facility in this small language first. The aim should be to make it clear that the language is about talking about the domain, and that it stands on its own, as it were; the computer is to programming as the calculator is to mathematical calculation. Only once this has been achieved are computers permitted, because large programs are impractical to deal with using pen and paper.
This intuition is the foundational difference between a bona fide computer science curriculum and dilettante tinkering.
bigstrat2003 · 1d ago
I think that this is completely backwards. As James Mickens put it: pointers are real; you can't just put a LISP book on top of an x86 chip and hope it learns the lambda calculus by osmosis. Computer science is, to be honest, not interesting or useful without a machine to use it on. Therefore trying to teach it to people without reference to a machine is a grave error.
lo_zamoyski · 19h ago
> pointers are real
Pointers are an abstraction that are no more or less real than any other abstraction. They belong to particular languages, but they are not intrinsic to computer science as such as if they were some kind of atomic construct of the field.
> you can't just put a LISP book on top of an x86 chip [...the rest is confusing...]
I'm not talking about what, in today's contingent market and incidental state of the art, is practical. Obviously, if you want to run any program in any language, you have to target some architecture. The point is that the architecture is utterly incidental as far as the language per se is concerned. Lisp is not "less real" because you need to translate it into machine code. The machine code of a particular architecture is only there to simulate Lisp on that architecture. You can in principle have different architectures with their own machine code that can be used to simulate the very same Lisp.
> Computer science is, to be honest, not interesting or useful without a machine to use it on.
Computer science is very interesting without a machine, but how interesting you find it is neither here nor there. The point isn't to do away with machines, or that the machine has no practical importance. The point is to say that the machine is only a tool, and not the subject matter of computer science.
kazinator · 14h ago
Or, well, pointers are intrinsic to computer science, but not in any special way. No more than the un-numbered current position of the Turing tape machine along the tape, or whatever.
We give a lot of attention to pointers because electronic computers feature random access memory consisting of small, equal-sized cells of bits, keyed by binary numbers.
rep_lodsb · 1d ago
Not everything that isn't "bona fide computer science" should be considered "dilettante tinkering". In the real world, code is run on physically existing machines, and not in some abstract mathematical universe of pure functions and infinite-length tapes.
lo_zamoyski · 19h ago
My remark was contextual. I am not saying there is no value in practical implementation. Obviously, there is enormous value! But these are secondary to what computer science is about, so if your concept of computer science and programming is machine-centric, then this is by definition not a computer science POV. If you take the machine to be the primary object of computer science, then you are either taking a computer engineering position, or you gravely misunderstand the essence of programming.
XorNot · 1d ago
This almost feels like an argument that we should teach computer science via bare metal bootstrapping.
Start out at "here's your machine code. Let's understand how x86_64 gets started" and work your way up to "now you have the automation to compile Linux and a modern compiler".
Which would certainly have stops most of the way up for things we usually include.
bitwize · 1d ago
So... NAND to Tetris?
UltraSane · 1d ago
x86 ASM absolutely is NOT a good first language due to it being a complete mess.
rep_lodsb · 1d ago
16 bit x86 isn't that complicated and (IMO) still helpful in learning some of the more modern stuff. But I'd recommend starting with either 6502, or the 8080, which is like the 8 bit "grandparent" of x86.
Avoid:
- Z80: at least as a first language. Extended 8080 with completely different syntax, even more messy and unorthogonal than x86!
LD A,(HL) ;load A from address in HL register pair
LD A,(DE) ;load A from address in DE
LD B,(HL) ;load B from address in HL
LD B,(DE) ;invalid!
JP (HL) ;load program counter with contents of HL (*not* memory)
ADD A,B ;add B to A
ADC A,B ;add B to A with carry
SBC A,B ;subtract B from A with borrow
SUB B ;subtract B from A
OR B ;logical-or B into A
etc.
- RISC-V: an architecture designed by C programmers, pretty much exclusively as a target for compiling C to & omitting anything not necessary for that goal
shortrounddev2 · 1d ago
assembly is a good first language if you have a simple instruction set or machine. When I saw new people learn java, easily the hardest initial bump to get over was "what the hell is public static void main(String[] args) ?" or "eclipse didn't build it for some reason"
Python is much easier to introduce someone to because there's no boilerplate and the tooling is very simple. Assembly on x86 machines is a royal PITA to set up, and you also need some kind of debugger to actively inspect the program counter and registers.
When I took Computer Organization & Architecture, they had us play around with MARIE[1] which really made assembly make sense to me. After that, I wrote an 8080 emulator and it made even MORE sense to me.
For actually programming in machine code, this understanding of the internal opcode structure isn't all that useful though. Usually - without an assembler at hand - you had a lookup table with all possible assembly instructions on the left side and the corresponding machine code bytes on the right side.
Programming by typing machine code into a hex editor is possible, but really only recommended as an absolute fallback if there's no assembler at hand - mainly because you have to keep track of all global constant and subroutine entry addresses (the main thing that an assembler does for you), and you have to leave gaps at strategic locations so that it is possible to patch the code without having to move things around.
tasty_freeze · 1d ago
For the past year or so, a couple teen boys from my neighborhood come by on sunday afternoon for a couple hours of programming in python. I started very simply and built up with text based tasks, then showed them pygame.
I am thinking about showing them what is under the hood, that Python itself is just a program. When I learned to program it was the late 70s, and TRS-80s and Apple IIs were easy to understand at the machine code level.
I could recapitulate that experience for them, via an emulator, but that again just feels like an abstraction. I want them to have the bare-metal experience. But x86 is such a sprawling, complicated instruction set that it is very intimidating. Of course I'd stick to a simplified subset of the instructions, but even then, it seems like a lot more work to get output on a PC vs. the old 8-bit machines, where you write to a specific location and it shows up on the screen.
ThrowawayR2 · 1d ago
Buy them a copy of "Human Resource Machine" on Steam or (preferably since there's no DRM) Good Old Games. It's a gamified version of what writing machine language on the old 8 bit CPUs of yore was like. The puzzle challenge in HRM is authentic in the sense that it derives from the natural constraints of having a single accumulator and very simple instructions rather than unnaturally injected constraints like the Zachtronics games, which are good but I wouldn't recommend as a learning tool.
TuringTourist · 1d ago
You could try the thing that made it click for me, long after x86 was dominant.
Show them a CPU running on Logisim (or the like, such as the newer Digital) and show how when you plug a program into a ROM, it turns into wires lighting up and flipping gates/activating data lines/read registers etc.
jebarker · 1d ago
ASM programming is fun. Machine code (as in what ASM encodes to) isn't scary, but it is extremely tedious to work with. I recommend the first part of Casey Muratori's Performance Aware Programming course if you want to feel that pain.
ferguess_k · 1d ago
I think you need to do it in production to retain the knowledge. If you just do it as a hobby, then most people just give up at certain point because there is no point to bang your head on the wall for nothing. You need to have a real problem to solve to go a long way.
boricj · 1d ago
Machine code isn't scary, but its nature is severely misunderstood.
Skipping over the bundling of instructions into code blocks, the next logical construct is functions. These have references to code and data in memory; if you want to relocate functions around in memory, you introduce the concept of relocations to annotate these references and of a linker to fix them to a particular location.
But once the linker has done its job, the function is no longer relocatable, you can't move it around... or that is what someone sane might say.
If you can undo the work of the linker, you can extract relocatable functions from executables. These functions can then be reused in new executables, without decompiling them first; after all, if what you've extracted is equivalent to the original relocatable function, you can do the same things with it.
Repeat this process over the entire executable and you've stripped it for parts, ready to be put back together with the linker. Change some parts and you have the ability to modify it as if you're replacing object files, instead of binary patching it in place with all the constraints that come with it.
Machine code is like Lego bricks, it just takes a rather unconventional point of view (and quite a bit of time to perfect the art of delinking) to realize it.
oleganza · 2d ago
Thank you Jimmy, great article.
My 23+ years of experience in computer science and programming have been a zebra of black-or-white moments. Most of the time, things are obscure, complicated, dark and daunting. Until suddenly you stumble upon a person who can explain them in simple terms and focus on the important bits. Then you can put this new knowledge into a well-organized hierarchy in your head and suddenly become wiser and empowered.
"Writing documentation", "talking at conferences", "chatting at the water cooler", "writing to a blog" and all the other discussions from Twitter to mailing lists are all about trying to get some ideas and understanding from one head into another, so more people can become enlightened and build further.
And oh my how hard is that. We are lucky to sometimes have enlightenment through great RTFMs.
shakna · 2d ago
I started building a Forth recently, but decided that instead of an interpreter or transpiler or whatever, I'd map to bytes in memory and just straight execute them.
This non-optimising JIT has been far, far easier than all the scary articles and comments I've seen led me to believe.
I'm already in the middle of making it work on both Aarch64 and RISC-V, a couple weeks in.
pjmlp · 1d ago
We took a similar approach back in the day, when going through the Tiger language[0], on the Java version.
Our approach was to map the compiler IR onto assembly macros and follow the classical UNIX compiler build pipeline; thus, even though it wasn't the most performant compiler in the world, we could nonetheless enjoy having our toy compiler generate real executables in the end.
I did this for WebAssembly WAT (an IR that is syntactically similar to lisp) by mapping the AST for my lisp more or less directly to the WAT IR, then emitting the bytecode from there. It was pretty fun.
simpleui · 2d ago
Very interesting, care to share the source?
shakna · 2d ago
Oh, it's still a while off that. I do plan to make it public at some point, but when I'm actually happy the code isn't completely vomit.
But for a simple taste, the push to stack function currently looks like this. (All the emit stuff just writes bytes into a mmap that gets executed later.)
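(The code block itself didn't survive the formatting here. Purely as an illustration of the general shape - not the poster's actual function, and using x86-64 for brevity instead of AArch64/RISC-V - the emit-bytes-into-an-mmap approach can be as small as this, error handling omitted:)

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/mman.h>

    static uint8_t *code;   /* the mmap'd buffer the bytes get written into */
    static size_t here;     /* next free offset */

    static void emit8(uint8_t b)   { code[here++] = b; }
    static void emit32(uint32_t w) { memcpy(code + here, &w, 4); here += 4; }

    int main(void) {
        code = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        emit8(0xB8); emit32(42);              /* mov eax, 42 */
        emit8(0xC3);                          /* ret         */
        mprotect(code, 4096, PROT_READ | PROT_EXEC);
        int (*fn)(void) = (int (*)(void))code;
        return fn();                          /* process exits with status 42 */
    }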
Creating an assembler with Lisp syntax and then using that to bootstrap a Lisp compiler (with Lisp macros instead of standard assembler macros) is one of those otherwise pointless educational projects I’ve been wanting to do for years. One day perhaps.
pjmlp · 1d ago
Even though I tend to be a bit negative about the whole WebAssembly hype, that is exactly a good starting point.
You already have the assembler with Lisp syntax covered.
Add some macro support on top, and you can start already implementing the upper layer for your Lisp.
Naturally there are already a couple of attempts at that.
0x000xca0xfe · 1d ago
Machine code generation for RISC-V is so easy. Excellent for teaching.
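(To give a flavour of why: hand-assembling an RV32I ADDI is just packing a few fixed-position bit fields. My own sketch, not from the comment:)

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* RV32I I-type layout: imm[11:0] | rs1 | funct3 | rd | opcode */
    static uint32_t enc_addi(uint32_t rd, uint32_t rs1, int32_t imm) {
        return ((uint32_t)(imm & 0xFFF) << 20)   /* 12-bit immediate    */
             | (rs1 << 15)
             | (0u << 12)                        /* funct3 = 000 (ADDI) */
             | (rd << 7)
             | 0x13u;                            /* opcode = OP-IMM     */
    }

    int main(void) {
        /* addi a0, a0, 42  (a0 is x10) -> should print 02a50513 */
        printf("%08" PRIx32 "\n", enc_addi(10, 10, 42));
        return 0;
    }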
simpleui · 1d ago
Thanks Shakna!
mananaysiempre · 1d ago
I mean, it’s not hard as such, the encodings of some instruction sets are just ass, with 32- and 64-bit x86 as the foremost example and Thumb-2 not far behind it. Also, if you’re dynamically patching existing code, you’ll have to contend with both modern OSes (especially “hardening” patches thereto) making your life harder in bespoke incompatible ways (see: most of libffi) and modern CPUs being very buggy around self-modifying code. Other than that, it just takes quite a bit of tedious but straightforward work to get anywhere.
shakna · 1d ago
I haven't had any issues with the OS.
I mmap, insert, mark as executable and done. Patchjumping and everything "just works".
I'm not modifying my own process, so there's no hardening issues. Just modifying an anonymous memory map.
PlunderBunny · 2d ago
I taught myself to program on an 8-bit BBC micro-computer in the mid-80s by typing in BASIC listings. I understood BASIC quite well, and could write my own structured BASIC programs, but machine code was always a bit out-of-reach. I would try to read books that started by demonstrating how to add, subtract etc, but I couldn’t see how that could build up to more complicated stuff that I could do in BASIC, like polling for input, or playing sounds, or drawing characters on the screen. Only once I got an advanced users guide and discovered the operating system commands, then it started to click with me - the complicated stuff was just arranging all the right data in the right bits of memory or registers, then (essentially) calling a particular OS command and saying ‘here’s the data you want’.
bowsamic · 2d ago
Yeah the issue is that the pedagogy doesn’t make it clear how to bridge the “calculator” with the OS stuff. I had this issue when I was a kid. How does adding eventually make something draw on the screen? Of course, it doesn’t, you need some hardware or OS specific information
eterm · 2d ago
It wasn't until I read Petzold's CODE that this stuff, especially the role of the motherboard in bridging processing, memory and I/O, and what an OS is for, started to click for me.
bowsamic · 1d ago
Yeah my stepdad bought me CODE as a kid and that really helped. A similar book that took a "top down" approach as much as possible would have also helped though, but would be much harder to pull off
tux3 · 2d ago
But if this doesn't satisfy your curiosity, you might realize this is just pushing the magic blackbox/question mark a little further down the chain
How does the OS and the hardware draw on the screen, actually? All they have is also just calculator stuff, super basic primitives. You can't even do loops in hardware, or even real branches (hardware always "executes both sides" of a branch at once)
Anyways, if you keep digging long enough you eventually end up finding this XKCD https://xkcd.com/435/ =)
tsimionescu · 1d ago
> hardware always "executes both sides" of a branch at once
Unless you're talking about quantum hardware, that is very much not true. The whole point of transistors is to choose whether to power one part of a circuit or another.
Plus, even for hardware, the solution to all this is to modularize all the way down. One piece of hardware sets up the right state and powers up another piece of hardware - this type of logic doesn't stop at the OS level. For drawing on the screen, ultimately you reach a piece of hardware that lights up in one of three colors based on that state - but all the way there, it's the same kind of "function calls" (and even more indirection, such as communication protocols) on many levels.
MobiusHorizons · 1d ago
At least in CMOS, the power supplied to the transistor is not being modulated as part of logic operations. Modern hardware does clock gating and power gating of modules for power saving, but that is not what the OP is talking about.
In hardware the equivalent of a ternary is a mux, which can be made from a lot of parallel instances of
out0 = (a0 & cond) | (b0 & ~cond)
Or in other words, both branches must be computed and the correct value is chosen based on the condition.
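(The same idea in software terms, for anyone more at home in C than in gate diagrams - a branchless select computes both inputs and then masks one of them away, exactly like the mux above. My own sketch:)

    #include <stdint.h>
    #include <stdio.h>

    /* Word-wide "mux": both a and b already exist; cond only selects which
       one reaches the output - no branch involved. */
    static uint32_t mux32(uint32_t a, uint32_t b, int cond) {
        uint32_t mask = (uint32_t)0 - (uint32_t)(cond != 0);  /* all ones if cond */
        return (a & mask) | (b & ~mask);
    }

    int main(void) {
        printf("%u %u\n", (unsigned)mux32(7, 9, 1), (unsigned)mux32(7, 9, 0));  /* prints "7 9" */
        return 0;
    }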
tsimionescu · 1d ago
"Power" was very sloppy language on my side. I was talking about the low voltage / high voltage difference that you get from transistors. A logical gate ultimately has a single output voltage based on its inputs. If its inputs are 1 and 1 (+5V and +5V), its output will be, say, 0 (0V), not "initially both 0 and 1, but later only 1 is chosen".
Similarly, a two bit adder is not going to have all 4 possible states internally or for some time - as soon as the input voltage is applied to its inputs, its output voltages will correspond to the single result (disregarding the analog signal propagation, which I assume is not what you were talking about).
Similarly, a conditional jump instruction will not be implemented natively by computing both "branches". It will do a single computation to set the instruction pointer to the correct value (either current + 1 or destination). Now sure, speculative execution is a different matter, but that is extra hardware that was added late in the processor design process.
MobiusHorizons · 1d ago
You can't really conditionally compute something in hardware. The hardware to do the computation exists and is wired up always.
The conditional jump is a great example actually. Typically this would be implemented by having one block compute PC+<instruction size> and another block compute the jump target and then choosing between the two using a mux
tsimionescu · 1d ago
That is one way of implementing such conditionals in hardware, but it's just one aspect of the computation. First, we can both agree that in the next clock cycle, a single instruction will be executed, not both instructions that could result after the jump - so clearly the hardware doesn't always do both things.
Secondly, if we think about the instruction decoding itself, it should become pretty clear that even if the hardware of course always exists and is always going to output something, that's not equivalent to saying it will compute all options. If the next instruction is `add ax, bx`, the hardware is not going to compute both ax + bx and ax - bx and ax & bx and ax | bx and so on, and then choose which result to feed back to ax through a mux. Instead, the ALU is constructed of logic gates which evaluate a single logical expression that assigns each output bit to a logical combination of the input bits and the control signal.
LegionMammal978 · 1d ago
At least for classic textbook CPUs, it's common to run both sides of a decision in parallel while it's still being made, then finally use a transistor to select the desired result after the fact. No one wants the latency of powering everything up and down every cycle, except on the largest scales where power consumption makes a big difference.
tsimionescu · 1d ago
I don't understand what you mean by decision. In a textbook CPU, where there is no speculative execution and no pipelining, the hardware runs one instruction at a time, the one from the instruction pointer. If that instruction is a conditional jump, this is a single computation that sets the instruction pointer to a single value (either next instruction or the specified jump point). After the single new value of the instruction pointer is computed, the process begins again - it fetches one instruction from the memory address it finds in the instruction point register, decodes it, and executes it.
Even if you add pipelining, the basic textbook design will stall the pipeline when it encounters a conditional/computed jump instruction. Even if you add basic speculative execution, you still don't get both branches executed at once, necessarily - assuming you had a single ALU, you'd still get only one branch executed at once, and if the wrong one was predicted, you'll revert and execute the other one once the conditional finishes being computed.
LegionMammal978 · 1d ago
> I don't understand what you mean by decision.
I'm talking on a lower level than the clock cycle or instruction. Let's say circuit A takes X and outputs foo(X), and circuit B takes X and outputs bar(X). We want to build a circuit that computes baz(X, Y) = Y ? foo(X) : bar(X), where X is available before Y is. Instead of letting Y settle, powering up one of the circuits A or B, and sending X into it, we can instead send X to circuits A and B at the same time, then use Y to select whichever output we want.
tsimionescu · 1d ago
Agreed, we can do that, and it is done for many kinds of circuits. But many others don't work like that, so it's wrong to say that hardware works exclusively like this.
One other common pattern for implementing conditional logic is to compute a boolean expression in which the control signal is just another input variable. In that model, to compute Y ? foo(X) : bar(X), we actually compute baz(X, Y) whose result is the same. This is very commonly how ALUs work.
And the other very common pattern is to split this in multiple clock cycles and use registers for intermediate results. If you don't have two circuits A and B, only one circuit that can compute either A or B based on a control signal (such as simple processors with a single ALU), this is the only option: you take one clock cycle to put Y in a register, and in the next clock cycle you feed both Y and X into your single circuit which will now compute either A or B.
neomech · 1d ago
In 1982, I programmed my ZX81 by converting assembly to hex by hand because BASIC was just too slow. I'd write my assembly on paper, convert it to hex using reference tables, then use a simple BASIC FOR loop to POKE the values into memory; we'd reserved space for the machine code in a REM statement at a fixed position in memory.
When all the values were POKEd in, I'd save to tape and execute it with RAND USR 16514.
That memory address is permanently etched in my brain even now.
It wasn't good, bad or scary; it was just what I had to do to make the programs I wanted to make.
stevekemp · 1d ago
I did the same thing, on the 48k Spectrum, a year or two later. I also remembered to add some NOPs between functions, to avoid me having to recalculate all the relative jump instructions if I made changes.
dedicate · 1d ago
For me the 'scary' part of machine code was never the actual logic. It was always just staring at that wall of hex or mnemonics and feeling like I needed a secret decoder ring!
jiehong · 1d ago
Yes, that does not help.
To me, it looks like some kind of complex tetris game. I guess we could maybe represent a program as such, with pieces for registers, instructions, etc.
Yet, the tooling we have is very terse, and textual.
amelius · 2d ago
Machine code was only "scary" in the old days when you had to reboot your system when you made a small mistake.
rausr · 2d ago
I enjoyed having to reload everything from tape (compact cassette tape - ie the kind of thing you'd use with a home computer in the early eighties), after a crash due to my poor code. I think the term used then was "character building" ;)
chopin · 1d ago
One of my first C programs wrote straight into the BIOS memory (1989 iirc). The machine froze and refused to reboot. We had to remove the BIOS battery to reset the BIOS. Luckily, the battery wasn't soldered to the main board.
Leo-thorne · 1d ago
I always thought machine code was something only experts could understand. But after reading this article, I realized the basic concepts aren’t that complicated, it’s really just instructions, registers, and memory.
I feel like this has given me a clearer understanding when it comes to writing code.
Thank you very much for your reminder. The comment has been corrected.
unoti · 1d ago
This is the video I wished I had seen when I was a kid, feeling like assembly was a dark art that I was too dumb to be able to do. Later in life I did a ton of assembly professionally on embedded systems. But as a kid I thought I wasn’t smart enough. This idea is poison, thinking you’re not smart enough, and it ruins lives.
The thing is: most programmers see assembly language generated by a compiler, so there are no comments - and in optimised code with vector operations, it IS scary.
jmclnx · 2d ago
That is one thing that was nice about DOS: you were close to the machine. I never fully got machine language, but it was fun trying.
IIRC, debug.com could be used to create programs using machine lang.
pjmlp · 1d ago
Yes, it was quite a bare-bones experience compared to TASM and MASM, and it only worked for COM executables, but it did work.
The reason being that COM files were a plain memory dump starting at offset 100H; thus you would type the code into memory and then dump it.
xelxebar · 1d ago
Oh, cool. A couple years ago I spent a few days disassembling a small x86-64 binary by hand. Getting familiar with the encoding was a lot of fun! The following reference was indispensable:
That is fun. But this one truly is enough: Turing Complete. You start with boolean logic gates and progressively work your way up to building your own processor, create your own assembly language, and use it to do things like solve mazes and more. Super duper fun
Indeed. In Knuth’s Art the machine code was not the “scariest” part. (Programming is hard.)
davemp · 2d ago
When I was last working with machine code, I found capstone to be very useful. Even just reading the source was helpful for some of the conditionally present amd64 fields.
Ok, what about VLIW ASM? Have you ever seen Elbrus' ASM, with predicated code, an asynchronous Array Prefetch Buffer, rotating registers, DAM (a hardware table for memory dependency disambiguation), register windows, etc.? It's really hard to even start reading it.
kibwen · 1d ago
> But what if we want to represent a number larger than 12 bits? Well, the add instruction doesn't let us represent all such numbers; but setting sh to 1 lets us shift our number by 12 bits. So for example we can represent 172032 by leaving our 42 alone and setting sh to 1. This is a clever technique for encoding larger numbers in a small space.
This is whacky. So presumably adding with a 24-bit constant whose value isn't representable in this compressed 13-bit format would then expand to two add instructions? Or would it store a constant to a register and then use that? Or is there a different single instruction that would get used?
add_large_const:
lui a5,43        # a5 = 43 << 12 = 176128
addi a5,a5,42    # a5 = 176128 + 42 = 176170
add a0,a0,a5     # a0 += a5
ret
Because there are only 32 bits in an instruction, you can't fit a whole 32-bit immediate into it. Contrast it with x86 which uses variable-length instructions (up to 15 bytes per instruction):
add_large_const:
lea eax, [rdi+176170] # this instruction takes 6 bytes
ret
P.S. Some people seem to be really puzzled by LEA instructions. Its intended use is to correspond to C's "dest = &base[offset]" which semantically is just "dest = base + offset", of course — but it allows one to express base and/or offset using available addressing modes.
GuB-42 · 1d ago
Well, machine code is scary. But like with many scary things, once you overcome your fears and get familiar with the thing, you realize that it is not that bad.
elif · 1d ago
This is a tangent but yesterday I was just pondering the abstraction layer from machine code to assembly lexers to compilers to interpreted languages. Having been lucky enough to be born at a time to witness these shifts, it's so easy to forget what we used to think was normal.
The thought came to me when testing the new Jules agentic coding platform. It occurred to me that we are just in another abstraction cycle, but like those before, it's so hard to see the shift when everything is shifting.
My conclusion? There will be non-AI coders, but they will be as rare as C programmers before we realize it.
addaon · 1d ago
I think this article is missing a major point, or perhaps should be titled "Some Non-Scary Machine Code Isn't Scary". It argues that machine code isn't scary, by building a one-to-one mapping from machine code to assembly code, and then taking it as given that assembly code isn't scary. But it uses two examples -- 32-bit ARM and x86-64 -- where this one-to-one mapping isn't valid. When in Thumb mode for (some flavors of) ARM, even when you know you're in thumb mode, instructions can be a mix of 16 and 32 bits. And in x86 world, of course, instructions can be a wide range of widths. What that means is that if you're given a chunk of memory that is known to contain executable instructions... you /can't/ build a one-to-one mapping to assembly without knowing where all of the entry points are. For well-formed code you can often exclude almost all possible entry points as invalid, and maybe even end up with only a single one... but it's perfectly possible (and quite fun) to write machine code that has valid, different behavior for different entry points to the same byte sequence. There's no way to reduce this type of machine code to meaningful assembly, and it should be considered scary.
bitwize · 1d ago
Machine code ceased to be scary, or at least mysterious, to me when I opened the rudimentary debugger on a TRS-80. It was really more of a monitor, and it showed the contents of a certain chunk of memory in the top half of the screen. I loaded a program I was working on into it, and began changing instructions in memory, using the assembler output as my guide, and jumping into the program to see what the effects were. After that the little lightbulb went off. Oh, these are just bytes in memory that correspond to CPU instructions, and the CPU just reads them off and executes the instructions.
thuanao · 1d ago
I think machine code and building something like a Forth is way easier to understand than any contemporary programming language toolchain.
Not knowing assembler means programmers have a bit of a blind spot towards what are expensive things to do in C vs what generates the best code.
For example, debugging a program sometimes requires looking at the generated assembler. Recently I was wondering why my deliberate null pointer dereference wasn't generating an exception. Looking at the assembler, there were no instructions generated for it. It turns out that since a null pointer dereference was undefined behavior, the compiler didn't need to generate any instructions for it.
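(A minimal illustration of the effect being described - not the actual code from that session, and the exact outcome depends on the compiler and optimization level:)

    /* With optimizations on, the compiler may assume undefined behaviour
       never happens, so it is allowed to emit no load at all here. */
    int main(void) {
        int *p = 0;   /* deliberate null pointer */
        return *p;    /* UB: may silently turn into no instructions at -O2 */
    }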
I do need to understand how indexes in a DB engine work, I need to understand that there might be socket exhaustion in the system, and I do need to understand how my framework allocates data on the heap vs. the stack.
Having to drop down to instructions is for web server, DB and framework developers, not for me. I do have a clue how the low level works, but there is no need for me to go there.
That is the part where the parent poster is correct: there are better ways for developers to spend their time. Trust your database, web servers and framework, and learn how those work in depth - you can skip assembler, because all of those will take a lot of time anyway, and most likely those are the ones you should/can tweak to fix performance, not assembler.
I deeply hate this attitude in modern compiler design.
Further, changes in the ISA can open up gains in performance that weren't available in yesteryear. An example of this would be SIMD instruction usage.
It's not a bad idea to know enough assembly language to understand why code is slow. However, the general default should be to avoid writing it. Your time would be better spent getting a newer version of your compiler and potentially enabling things like PGO.
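(For instance - my own sketch, not the poster's code - a plain loop like this is the kind of thing a current compiler will typically auto-vectorize with SIMD at -O2/-O3 and a suitable -march flag, with no hand-written assembly involved:)

    #include <stddef.h>

    /* "restrict" promises the arrays don't alias, which makes it easy for
       the compiler to emit SIMD loads/stores for the loop body. */
    void add_arrays(float *restrict dst, const float *restrict a,
                    const float *restrict b, size_t n) {
        for (size_t i = 0; i < n; i++)
            dst[i] = a[i] + b[i];
    }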
I don't follow. Why should assembly have to be useful or pleasant in a production environment, for learning it to be useful?
I was taught a couple different flavours of assembly in university, and I found it quite useful for appreciating what the machine actually does. Certainly more so than C. Abstractions do ultimately have to be rooted in something.
Your point about education is orthogonal to the point made. I agree with you that learning assembly can be a good way to teach people how computers work on a low level, but that has nothing to do with whether it is useful as a skill to learn.
As someone teaching similar things at the university level to a non-tech audience, I have to always carefully weigh how many "practically useless" lessons a typical art student can stomach. And which kind of lesson will just deter them, potentially forever.
I don't understand the distinction you're trying to make. The post I was replying to specifically discussed "learning assembly language". My entire point is that "learning assembly language" has purposes other than creating practical real-world programs in assembly.
If you can't see through field accesses and function calls to memory indirections, anything you might read about how TLBs and caches and branch prediction work doesn't connect to much.
But there are a number of things we did that are not available or difficult in C:
- Guaranteed tail calls
- Returning multiple values without touching memory
- Using the stack pointer as a general purpose pointer for writing to memory
- Changing the stack pointer to support co-routines
- Using our own register / calling convention (e.g. setting aside a register to be available to all routines)
- Unpicking the stack to reduce register setup for commonly used routines or fast longjmps
- VM "jump tables" without requiring an indirection to know where to jump to
We are also getting burned out by the modern Agile web/data/whatever development scene and would like to drill really deep into one specific area without stakeholders breathing down our necks every few hours, which assembly programming conveniently provides.
I also consider the grit (forced or voluntary) to be a baptism of fire which significantly improved two important things: the programmer's understanding of the system, and the programmer's ability to run low-level code in their head. Is it suffering? Definitely, but it is suffering that brings technical prowess.
Most of us do not have the privilege to suffer properly. Do you prefer to suffer from incomplete documentation, very low-level code, and banging your head against a wall over tough technical problems, or do you prefer to suffer from incomplete documentation, layers and layers of abstraction, stakeholders changing requirements every day, and actually knowing very little about the technical stuff? I think it is an easy choice, at least for me. If there were an assembly language / C job willing to take me in, I'd do it for half the salary I'm earning.
Forth was quite acceptable performance wise, but that's barely above a good macro assembler.
And after the 8-bitters, assembly coding on the Amiga was pure pleasure - also for large programs, since apart from the great 68k ISA the entire Amiga hardware and operating system was written to make assembly coding convenient (and even though C was much better on the 68k, most serious programs used a mix of C and assembly).
(also, while writing assembly code today isn't all that important, reading assembly code definitely is when looking at compiler output and trying to figure out why and how the compiler butchered my high level code).
Agreed - I wouldn't be able to write any x86 assembly without a bit of help, but having done some game reverse engineering I've learned enough to make sense of compiler generated code.
Anyone curious how their JVM, CLR, V8, ART, Julia, ... gets massaged into machine code only needs to learn about the related tools in the ecosystem.
Some of them are available on online playgrounds like Compiler Explorer, Sharpio,....
I am curious: what specific examples do you have of the HW and OS being made/written to make ASM convenient?
This YouTube playlist gives a nice overview of assembly coding on the Amiga (mostly via direct hardware access though): https://www.youtube.com/playlist?list=PLc3ltHgmiidpK-s0eP5hT...
It enables debugging binaries and crash dumps without complete source code, like DLLs shipped with Windows or third-party DLLs. It also lets you understand what compilers (both traditional and JIT) did to your source code, which is useful when doing performance optimizations.
But for tinkering (e.g writing GBA/NES games), hell why not? It's fun.
I'd add "yet" - we runinafed that the reason new machines with similar shapes (quad core to quad core of a newer generation) doesn't immediately seem like a large a jump as it ought, in performance, is because it takes time for people other than intel to update their compilers to effectively make use of the new instructions. icc is obviously going to more quickly (in the sense of how long after the CPU is released, not `time`) generate faster executing code on new Intel hardware. But gcc will take longer to catch up.
There's a sweet spot from about 1-4 years after initial release where hardware speeds up, but toward the end of that run programs bloat and wipe out all the benefits of the new instructions, leading to needing a new CPU that isn't that much faster than the one you replaced.
Yet.
Which reminds me, I need to benchmark a Linux kernel compile to see if my above supposition is correct. I have the timings from when I first bought it, compared to a 10-year-old HP 40-core machine (the Ryzen 5950 is 5% faster but used 1/4 the wall power).
In embedded land, if your microcontroller is unpopular, you don't get much in the way of optimization. The assembly GCC generates is frankly hot steaming trash and an intern with an hour of assembly experience can do better. This is not in any way an exaggeration.
I've run into several situations where hand-optimized assembly is tens of times faster than optimized C mangled by GCC.
I do not trust compilers anymore unless it's specifically for x86_64, and only for CPUs made this decade
Once I got the code sequences for this right on my AArch64 code generator, I don't have to ever figure it out again!
It is all a matter of having high quality macro assemblers, and people that actually care to write structured documented code in Assembly.
When they don't care, usually not even writing in C and C++ will save the kind of code they write.
I vaguely recall Ada inline assembly looking like function calls, with arguments that sometimes referenced high-level-language variables.
Unrelated to that, I distinguish between machine code, which is binary/hex, and assembly as symbolic assembler or macro assembler, which can actually have high-level macros and other niceties.
And one thing I can say for sure. I took assembly language as my second computer course, and it definitely added a lifelong context as to how machines worked, what everything translated to and whether it was fast or efficient.
A friend of mine who also had a CoCo wrote an assembler as a term project.
If you have a six-day deadline, probably it would be better to use a high-level language instead.
But, when you have time for them, all of these things are intrinsically rewarding. Not all the time! And not for everyone! But for some of us, some of the time, they can all be very enjoyable. And sometimes that slow effort can achieve a result that you can't get any other way.
I haven't written that much assembly, myself. Much less than you have. If I had to write everything in assembly for years, maybe I wouldn't enjoy it anymore. I've written a web server, a Tetris game, some bytecode interpreters, a threading library, a 64-byte VGA graphics demo, a sort of skeletal music synthesizer, and an interpreter for an object-oriented language with pattern-matching and multiple dispatch, as well as a couple of compilers in high-level languages targeting assembly or machine code. All of these were either 8086, ARM, RISC-V, i386, or amd64; I never had to suffer through 6809 or various microcontrollers.
Maybe most important, I've never written assembly code that someone else depended on working. Those programs I've mostly written in Python, which I regret now. It's much faster that way. However, I've found it useful in practice for debugging C and C++ programs.
I think that a farmer who says, "For the vast majority of consumers, gardening offers absolutely no benefit," is missing the point. It's not about easier access to parsley and chives. Similarly for an author who says, "For the vast majority of readers, solving crossword puzzles offers absolutely no benefit."
So I don't think assembly sucks.
But C is slow to create -- it is like using a toothpick
Writing from scratch is slow, and using C libraries also sucks. Certainly libc sucks, e.g. returning pointers to static buffers, global vars for Unicode, etc.
So yeah I have never written Assembly that anybody needs to work, but I think of it as "next level slow"
---
Probably the main case where C is nice is where you are working for a company that has developed high quality infrastructure over decades. And I doubt there is any such company in existence for Assembly
But the context where I'm doing it is very different from the context where you had to write a division routine from scratch! We never use assembly where a higher-level language would be good enough. It's only used for things that can't be written in C at all, either because it needs an instruction that the C compiler won't emit, or it involves some special calling convention that C can't express.
However, I read assembly in production all the time. Literally every day on the job. It's absolutely essential for crashes that won't reproduce locally, issues caused by compiler bugs, or extremely sensitive performance work. Now, lots of programmers very rarely have to deal with those sorts of things, but when it comes up, they'll be asking for help from the ones who know this stuff.
I hardly consider myself an expert ARM ASM programmer (or even an amateur...), but a baseline level of how to read it even if you have to look up a bunch of instructions every time can be super useful for performance work, especially if you have the abstract computer engineering know how to back it up.
For example, it turns out that gcc 7.3 (for arm64) doesn't optimize [one snippet] the same as [an equivalent snippet]! The former was compiled into a branchless set of instructions, while the latter had a branch!
It absolutely sucks. But it's not scary. In my world (Zephyr RTOS kernel) I routinely see people go through all kinds of contortions to build abstractions around what are actually pretty straightforward hardware interfaces. Like, here's an interrupt controller with four registers. And here's a C API in the HAL with 28 functions and a bunch of function pointer typedefs and handler mechanics you're supposed to use to talk to it. Stuff like that.
It's really common to see giant abstractions built around things that could literally be two-instruction inline assembly blocks. And the reason is that everyone is scared of "__asm__".
In, say, the interrupt controller case: there's a lot of value in having a generic way for boards to declare what the IRQ for a device actually is. But the driver should be stuffing that into the hardware or masking interrupt priorities or whatever using appropriately constructed assembly where needed, and not a needless layer of C code that just wraps the asm blocks anyway.
[1] And to be clear I'm not that much of a devicetree booster and have lots of complaints within the space, both about the technology itself and the framework with which it's applied. But none that would impact the point here.
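For concreteness, here is roughly what a "two-instruction inline assembly block" for locking out interrupts can look like, assuming an ARM Cortex-M target. This is a sketch with made-up function names, not Zephyr's actual API:

    #include <stdint.h>

    static inline uint32_t irq_lock(void) {
        uint32_t key;
        __asm__ volatile("mrs %0, primask\n\t"   /* save the current interrupt mask */
                         "cpsid i"               /* disable IRQs */
                         : "=r"(key)
                         :
                         : "memory");
        return key;
    }

    static inline void irq_unlock(uint32_t key) {
        __asm__ volatile("msr primask, %0"       /* restore the previous mask */
                         :
                         : "r"(key)
                         : "memory");
    }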
I had to implement Morton ordering on this platform. The canonical C for this blows up to over 300 instructions per iteration. I unrolled the loop, used special CPU hardware and got the entire thing in under 100 instructions.
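For reference, the "canonical" C here is the textbook bit-interleaving loop, something like this sketch (not the poster's actual code), which a compiler for a small micro without a barrel shifter can easily expand into a huge amount of code:

    #include <stdint.h>

    /* Interleave the bits of x and y: x occupies the even bits of the
       result, y the odd bits. Naive one-bit-at-a-time version. */
    uint32_t morton2d(uint16_t x, uint16_t y) {
        uint32_t z = 0;
        for (int i = 0; i < 16; i++) {
            z |= (uint32_t)((x >> i) & 1u) << (2 * i);       /* even bits from x */
            z |= (uint32_t)((y >> i) & 1u) << (2 * i + 1);   /* odd bits from y  */
        }
        return z;
    }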
Compilers, even modern ones, are not magic and only understand CPUs popular enough to receive specific attention from compiler devs. If your CPU is unpopular, you're doing optimizations yourself.
Assembly doesn't matter to arduino script kiddies, but it's still quite important if you care at all about execution speed, binary size, resource usage.
It quickly becomes tedious to do large programs - not really hard, just unmanageable - which is precisely why it should be taught as a first language. You learn how to do simple things and you learn why programming languages are used. You teach the problem that is being solved before teaching the more advanced programming concepts that solve the problem.
So why teach someone a language that doesn't have if, while, (local) variables, scopes, types, nor even real function calls?
It's a very nice exercise for understanding how a computer functions, and it has a clear role in education - I'm not arguing people shouldn't learn it at all. But I think it's a terrible first language to learn.
I see lots of people become pretty helpless when their framework isn’t working as expected or abstraction becomes leaky. Most people don’t really need to know assembly in order to get past this, but the general intuition of “there is something underneath the subtraction that I could understand” is very useful.
And I could go on with other topics. High-level languages, even something like C, are just a completely different model of looking at the world from machine language, and truly understanding how machines work is actually quite an alien model. There's a reason that people try to pretend that C is portable assembler rather than actually trying to work with a true portable assembler language.
The relationship you're looking for is not arithmetic to calculus, but set theory to arithmetic. Yes, you can build the axioms of arithmetic on top of set theory as a purer basis. But people don't think about arithmetic in terms of set theory, and we certainly don't try to teach set theory before arithmetic.
I don’t think that’s a fitting analogy.
Nearly everyone on the planet uses (basic) arithmetic. Very few use calculus.
By contrast, very few (programmers) use ASM, but nearly all of them use higher level languages.
My first language was BASIC on a V-tech. It's not quite the same but it still was such a fantastic starting point.
I've tried luring people into programming with Python for example and see them get frustrated by the amount of abstractions and indirection going on. I am really starting to like this idea of starting with assembly.
My point is that the analogy with arithmetic vs. calculus doesn't hold.
Nearly everyone uses basic arithmetic in everyday life, and a tiny fraction of those use calculus.
No programmer needs to learn ASM to be able to know how to use higher level languages. And a tiny fraction of them are using actual ASM in their everyday jobs.
Also, I think you can still learn the basic constructs of how languages work at a lower level without ever learning actual ASM. There's no way you can learn calculus without an understanding of arithmetic.
The fact that our high level languages compile down to assembly doesn't mean we use assembly in any meaningful sense. My C code will be correct or not based on whether it conforms to the semantics of the C abstract machine, regardless of whether those semantics match the semantics of the assembly language that it happens to compile down to. Even worse, code that is perfectly valid in assembler may be invalid C, even if the C code compiles down to that same assembler code. The clearest example is adding 1 to an int variable that happens to have the value INT_MAX. This code will often compile down to "add ax, 1" and set the variable to INT_MIN, but it is nevertheless invalid C code, and the compiler will assume this case can never happen.
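A tiny, compilable illustration of that point (the classic example, not tied to any particular codebase):

    /* Because signed overflow is undefined behaviour, an optimizing
       compiler is allowed to assume x + 1 > x always holds and reduce
       this function to "return 1" - even though the generated add
       instruction would wrap at runtime. */
    int always_true(int x) {
        return x + 1 > x;   /* gcc/clang at -O2 typically emit: mov eax, 1; ret */
    }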
This relationship between a programming language and assembler is even more tenuous for languages designed to run on heavy runtimes, like Java or JavaScript.
In contrast, you can have a very successful, very advanced career in computer science or in programming without once in your life touching a line of assembler code. It's not very likely, and you'll be all the poorer for it, but it's certainly possible.
Assembly language is much more like learning the foundations of mathematics, like Hilbert's program (except, of course, historically that came a few millennia after).
I've taught people Python as their first language, and this was their exact opinion of it.
When you're an experienced programmer you tend to have a poor gauge of how newcomers internalize things. For people who are brand new it is basically all noise. We're just trying to gradually get them used to the noise. Getting used to the noise while also trying to figure out the difference between strings, numbers, booleans, lists, etc. is more difficult for newcomers than many people realize. Even the concept of scoping can sometimes be too high-level for a beginner, IME.
I like asm from the perspective that, its semantics are extremely simple to explain. And JMP (GOTO) maps cleanly from the flowchart model of programming that most people intuit first.
In particular, Python having generators and making range() be a generator means that in order to fully explain a simple for loop that's supposed to do something X times, I have to explain generators, which are conceptually complicated. When range() just returned a list, it was much easier to explain that it was iterating over a list that I could actually see.
Like if range was used like this: `for i in range(1, 100): ...`
No one is going to ask how that works internally, so I don't think it's necessary to treat range(1, 100) any differently. For this usage it makes no difference if it's a generator, a list (excepting performance on large ranges), or if the local variable is incremented directly like a C-style for loop.
It was already much more powerful than most people writing simple shell script replacements were aware of.
Thing is, very few bother to read the reference manuals cover to cover.
However, I don't agree at all that having strings and numbers as different things was ever a problem. On the contrary, explaining that the same data can be interpreted as both 40 and "0" is mystifying and very hard to grok, in my experience. And don't get me started on how hard it is to conceptualize pointers. Or working with the (implicit) stack in assembly instead of being able to use named variables.
You can teach them how to implement function calls, variables and loops using assembly, to show them how these work under the hood and how thankful they should be for having a simple if in their high-level languages like C.
They would understand code and data are in the same place, that all flow control effectively boils down to a jump, and they have a _more_ accurate picture of the inside of a machine than anyone starting out with Python or JavaScript could hope for.
Having spent 25 years to get to assembler, I wish I'd started it sooner. It's truly a lovely way to look at the machine. I'll definitely be teaching my kids how to program in assembly first (probably x86-16 using DOS as a program launcher)
Be very careful that you're not going to just kill enthusiasm for programming as an activity entirely with this approach.
I see this happen a lot (I did a lot of robotics/programming mentoring), and then adults wonder why their kids don't like any of the stuff they like - and the reason is that the adult was really a dick about making them learn the things the adult liked, and ignored most of the fun aspects of the activity, or the wishes of the kid.
This can be done with any programming language.
The point of teaching assembly isn't for someone to memorize all the details of any particular instruction set. It's about conceiving of the decomposition of problems on that level. It's about understanding what data is, so that when the student later learns a higher-level programming language, it sets expectations for what happens when you open a file, what kind of processing has to be done, etc. It's the basis for understanding abstractions that are built upon all those 1s and 0s, about the way that a program implicitly assigns semantics to them.
(This is best done with a toy assembly language, not one that comes anywhere near reflecting the complexity of modern CPUs. Anything to do with the practical considerations of modern optimizing compilers is also missing the point by a mile.)
These are all things that are your goals, as the adult and teacher.
The student who wants to engage with programming and software likely has other goals in mind.
Skip all the crap you just mentioned, focus on helping them achieve their goals. I think you'll find those are usually more in the realm of "I want to make a game" or "I want to show my stuff to friends on a website" or "I want to make the computer play music" or [insert other high level objective that's not "learn about bits and bytes"].
Will that involve the stuff you mentioned? Sure will, and a student who feels like they're achieving the thing they want by learning that stuff is engaged.
But a student who gets to just sit there and listen to you drone on and on about "abstractions" and "instructions sets" and "data is code" and "semantics" all to end up with a complicated file that functionally just adds two numbers together? That student is usually bored and disengaged.
And the student who doesn't learn these concepts will inevitably run into a roadblock soon thereafter.
> But a student who gets to just sit there and listen to you drone on and on about "abstractions" and "instructions sets" and "data is code" and "semantics"
You don't "drone on" about these things. You introduce them as it makes sense, by pointing things out about the first programs as they are developed. You don't talk about abstracting things and assigning semantics; you do it, and then point out what you've done.
So we agree that maybe dragging them in right at the start by teaching assembly (because it's good at teaching those things) as their first language isn't the best strategy?
At no point will I argue against learning it. Knowing how machines work is great, and I think going "down" the stack benefits a lot of developers ONCE they're developers and have an understanding that programming and computers are things they like and want to do.
But first you have to foster enthusiasm and nurture interest. You don't do that by declaring that you're going to teach your kids assembly... you do that by listening to your kids interests in the space and helping them achieve their goals.
Reading is easier when you know your latin roots, but we don’t make kids speak latin before See Spot Run even if it would help.
Frankly, a more accurate picture than those starting in C have, too.
That sounds like the perfect beginner language! If they survive that experience, they'll do very well in almost any type of programming, as it's mostly the same just a tiny bit less tedious. A bit like "hardening" but for programmers.
I guess it's as much "gatekeeping" as being required to formulate plans and balance tradeoffs is "gatekeeping".
This is like learning to read by first being asked to memorize all the rules of grammar and being quizzed on them, or being forced to learn all the ins and outs of book binding and ink production.
It's tedious, unproductive, miserable.
There's very little reward for a lot of complexity, and the complexity isn't the "stimulating" complexity of thinking through a problem; it's complexity in the sense of "I put the wrong bit in the wrong spot and everything is broken with very little guidance on why, and I don't have the mental model to even understand".
There's a perfectly fine time to learn assembly and machine instructions, and they're useful skills to have - but they really don't need to be present at the beginning of the learning process.
---
My suggestion is to go even farther the other way. Start at the "I can make a real thing happen in the real world with code" stage as soon as possible.
Kids & adults both light up when they realize they can make motor turn, or an LED blink with code.
It's similarly "low level" in that there isn't much going on and they'll end up learning more about computers as machines, but much more satisfying and rewarding.
> it's complexity in the sense of "I put the wrong bit in the wrong spot and everything is broken with very little guidance on why, and I don't have the mental model to even understand
That's the nice thing about assembly: it always works, but the result may not be what you expected. But instead of having a whole lot of magic between what is happening and how you model it, it's easy to reason about the program. You don't have to deal with stack traces, types, garbage collection and null pointer exceptions. Execution and programming share the same mental model: linear unless you say so.
You can start with assembly and then switch to C or Python and tell them: for bigger projects, assembly is tedious, and this is what we invented instead.
[0]: https://edsim51.com/about-the-simulator/
AVR's assembly is quite mediocre, with 120+ something instructions, with lots of duplication among them (IIRC — it's been... many years already), and some people swore by PIC which only had 35 instructions to remember. But it was still easier than lobotomizing oneself by trying to write a Win32 application in x86 assembly (which came later... and went to the trash bin quickly while microcontrollers stuck for much longer).
That's why it's such an important first language! Pedagogically it's the foundation motivating all the fancy things languages give you.
You don't teach a kid to cut wood with a table saw. You give them a hand saw!
Programming languages are usually designed based on formal semantics. They include constructs that have been found either through experience or certain formal reasons to be good ways to structure programs.
Haskell's lazy evaluation model, for example, has no relationship to assembly code. It was not in any way designed with thought to how assembly code works, it was designed to have certain desirable theoretical properties like referential transparency.
It's also important to realize that there is no "assembly language". Each processor family has its own specific assembly code with its own particular semantics that may vary wildly from any other processor. Not to mention, there are abstract assembly codes like WebAssembly or JVM bytecode, which often have even more alien semantics.
You don't teach Haskell to a seventh grader.
But 4 bit assembly driving a few LEDs? That works
But hey, what do I know? I'm the kind of guy who gets to play with TMA and seriously considers purchasing hydrazine for work. What do I know?
Okay, but that's not for pedagogical reasons; it's because power saws are MUCH more dangerous than hand saws.
Contrariwise, you don't teach a kid to drill wood with a brace & bit, because a power drill is easier to use.
- machine code
- assembly
- Lisp and Forth
- C
- Pascal
- maybe a short detour into OOP and functional languages
...but in the end, all you need to understand for programming computers are "sequences, conditions and loops" (that's what my computer club teacher used to say - still good advice).
But in the end no one learns "assembler". Everyone learns a specific ISA, and they all have different strengths and limitations. Assembler on a 36-bit PDP-10, with 16 registers and native floating point, is a completely different experience to assembler on a Z80 with an 8-bit accumulator and no multiply or divide.
You can learn about the heap and the stack and registers and branches and jumps on both, but you're still thinking in terms of toy matchstick architecture, not modern building design.
I fully agree - and assembly language teaches you precisely 0 of these.
I think there's value in understanding how high level language constructs like if-else and loops can all be constructed from simple conditional jumps, and that a function call is just a CALL/RET pair with the return address being stored on the stack.
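As a small illustration of that lowering (a sketch, using C's goto to stand in for the conditional and unconditional jumps you would write in assembly):

    /* The goto form below is essentially the shape a compiler lowers a
       while-loop into, and what you end up writing by hand in assembly. */
    int sum_to(int n) {
        int total = 0;
        int i = 1;
    loop:
        if (i > n) goto done;   /* conditional branch: exit the loop */
        total += i;
        i++;
        goto loop;              /* unconditional jump back to the top */
    done:
        return total;
    }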
Also, structured programming had to be invented, and working in assembly code makes it clearer why.
It's also food for thought why CPU ISAs never made the leap to structured programming.
And I absolutely agree there is value in understanding the mechanics of how languages are executed. What I disagree with is that this is necessary for being a good programmer, and that it is useful as an initial learning experience.
Programming itself didn't start with assembly. It started with pseudo-code, which was always expressed in a high-level format, and then manually translated to some form of assembly language for a particular physical machine. But people have never designed their programs in terms of assembly - they have always designed them in higher level terms.
Not doing this is how you get Electron.
It may put people off a programming career, but perhaps that is good. There are a lot of people who work in programming who don't understand the machines they use, who don't understand algorithms and data structures, who have no idea of the impact of latency, of memory use, etc. Their entire career is predicated on never having to solve a problem that hasn't already been solved in general terms.
Plus, it is literally impossible to do any kind of math without knowing arithmetic. It is very possible to build a modestly advanced career knowing no assembly language.
The first graders in my neighbourhood school are currently learning about probability. While they did cover addition earlier in the year, they have not yet delved into topics like multiplication, fractions, or anything of that sort. What you suggest is how things were back in my day, to be fair, but it is no longer the case.
The best ISA for learning is probably the Motorola 68000, followed by some 8-bit CPUs (6502, 6809, Z80), also probably ARM1, although I never had to deal with it. I always thought that x86 assembly is ugly (no matter if Intel or AT&T).
> It quickly becomes tedious to do large programs
IME with modern tooling, assembly coding can be surprisingly productive. For instance I wrote a VSCode extension for 8-bit home computers [1], and dog-fooded a little demo with it [2], and that felt a lot more productive than back in the day with an on-device assembler (or even typing in machine code by numbers).
[1] https://marketplace.visualstudio.com/items?itemName=floooh.v...
[2] https://floooh.github.io/kcide-sample/kc854.html?file=demo.k...
I agree about tooling, I made a pacman game in a dcpu16 emulator in a couple of days.
I experimented with a fantasy console idea using an in-browser assembler as well: https://k8.fingswotidun.com/static/ide/
I think you can build environments that give immediate feedback and the ability to do real things quickly in ASM. I would still recommend moving swiftly on to something higher level as soon as it started to feel like a grind.
- assembly instructions are executed in the order they appear in in the source code
- an x86 processor only has a handful of registers
- writing to a register is an instruction like any other and will take roughly the same time
- the largest registers on an x86 processor are 64-bit
The reality is that any assembler simple enough to be taught as your first contact with programming will leave you with a wrong intuition about how modern processors work, and thus a wrong intuition about the relative performance of various operations.
Having no intuition about something is better than building a bad intuition, especially at the beginning of your learning journey.
It doesn’t, and out-of-order CPUs don’t do that. https://en.wikipedia.org/wiki/Out-of-order_execution: “In this paradigm, a processor executes instructions in an order governed by the availability of input data and execution units, rather than by their original order in a program.”
For instance you don't need to be afraid that an instruction uses garbage inputs just because a previous instruction hadn't finished computing an input value to the instruction. At worst you'll get a pipeline stall if the CPU can't fill the gap with out-of-order executed instructions.
On some CPUs it does get tricky once memory is involved though (on ARM, but not on x86).
> […]
> On some CPUs it does get tricky once memory is involved though (on ARM, but not on x86).
https://en.wikipedia.org/wiki/Memory_ordering:
“Among the commonly used architectures, x86-64 processors have the strongest memory order, but may still defer memory store instructions until after memory load instructions.”
Also, the Z8000 looks quite interesting and like the better 16-bit alternative to the x86, but it never took off: https://en.wikipedia.org/wiki/Zilog_Z8000
If I were teaching a general-interest programming course, I'd probably start with just a bit of BASIC to introduce a few broad concepts like variables and looping, then a little assembly to say, "And this is what's going on when you do those things," and then move up the chain. Then when they get to something like C and go to look at the assembly it produces for debugging, they'll at least be familiar with concepts like registers and branching. So not quite the order I happened to do it in, but similar.
I wouldn't teach it first, but after a person knows the basics in another language, seeing how it all actually works can be fun.
Assembly language is not a reasonable first programming language. There's just so many things about it that make it a poor choice for programming instruction.
Chiefly, assembly lacks structure. There's no such thing as variables. There's no such thing as functions. You can fake some of this stuff with convention, but if you make mistakes--and students in intro-to-programming will make mistakes--there is nothing that's going to poke you that you did something wrong, you just get the wrong result.
Indeed - you don't actually need to work on difficult tasks to get the intellectual benefit. Once you've properly understood what a computer is, you can absorb the ideas of SICP.
Load registers, call DOS or BIOS with 'int', etc. all interactively and with a nice full screen display of registers, flags and memory. Of course entering single instructions to run immediately only gets you so far, but you can also enter short programs into memory and single step through them.
It's too bad nothing like this seems to exist for modern systems! With the CPU virtualization features now available, you could even experiment with ring 0 code without fear of crashing your machine.
[1] http://eji.com/a86
[2] to run it under DOSBox, you may need to add the '+b4' command line switch in order to bypass some machine detection code
https://www.zachtronics.com/shenzhen-io/
https://edsim51.com/about-the-simulator/
Very immediate and positive feedback.
Unless you're teaching people preparing for engineering hardware perhaps, I think ASM is absolutely the wrong language for this. The first reason is that programming is about problem solving, not fiddling with the details of some particular architecture, and ASM is pretty bad at clearly expressing solutions in the language of the problem domain. Instead of programming in the language of the domain, you're busy flipping bits which are an implementation detail. It is really a language for interfacing with and configuring hardware.
The more insidious result is that teaching ASM will make an idol out of hardware by reinforcing the notion that computer science or programming are about computing devices. It is not. The computing device is totally auxiliary wrt subject matter. It is utterly indispensable practically, yes, but it is not what programming is concerned with per se. It is good for an astronomer to be able to operate his telescope well, but he isn't studying telescopes. Telescope engineers do that.
"How do I use bits to represent concepts in the problem domain?" is the fundamental, original problem of computer science.
And to teach this, you use much simpler problems.
> ... reinforcing the notion that computer science or programming are about computing devices. It is not.
It is, however, about concepts like binary place-value arithmetic, and using numbers (addresses) as a means of indirection, and about using indirection to structure data, and about being able to represent the instructions themselves as data (such that they can be stored somewhere with the same techniques, even if we don't assume a Von Neumann machine), and (putting those two ideas together) about using a number as a way to track a position in the program, and manipulating that number to alter the flow of the program.
In second year university I learned computer organization more or less in parallel with assembly. And eventually we got to the point of seeing - at least in principle - how a basic CPU could be designed, with its basic components - an ALU, instruction decoder, bus etc.
Similarly:
> It is good for an astronomer to be able to operate his telescope well, but he isn't studying telescopes.
The astronomer is, however, studying light. And should therefore have a basic mental model of what a lens is, how lenses relate to light, how they work, and why telescopes need them.
> It is, however, about concepts like binary place-value arithmetic
That is the original problem of using a particular digital machine architecture. One shouldn't confuse the practical/instrumental problems at the time with the field proper. There's nothing special about bits per se. They're an implementation detail. We might study them for practical reasons, we may study the limits of what can be represented by or computed using binary encodings, or efficient ways to do so or whatever, but that's not the central concern of computer science.
> In second year university I learned computer organization more or less in parallel with assembly.
Sure. But just because a CS major learns these things doesn't make it computer science per se. It's interesting to learn, sure, and has practical utility, but particular computer architectures are not the domain of computer science. They're the domain of computer engineering.
> The astronomer is, however, studying light.
No, physicists studying optics study light in this capacity. Astronomers know about light, because knowledge of light is useful for things like computing interstellar distances or determining the composition of stellar objects or for calculating the rate of expansion or whatever. The same goes for knowledge of lenses and telescopes: they learn about them so they can use them, but they don't study them.
In order to successfully program a solution to a problem, it is necessary to understand the system you are working with. A machine-level programming language cuts through the squishiness of that and presents a discrete and concrete system whose state can be fully explained and understood without difficulty. The part where it's all implementation details is the benefit here.
From a functional perspective, you see things properly as a matter of language. When you describe to someone some solution in English, do you worry about a "computational system"? When writing proofs or solving mathematical problems using some formal notation, are you thinking of registers? Of course not. You are using the language of the domain with its own rules.
Computer science is firmly rooted in the formal language tradition, but for historical reasons, the machine has assumed a central role it does not rightly possess. The reason students are confused is because they're still beholden to the machine going into the course, causing a compulsion to refer to the machine to know "what's really going on" at the machine level. Instead of thinking of the problem, they are worrying about distracting nonsense.
The reason why your students might feel comforted after you explain the machine model is because they already tacitly expect the machine to play a conceptual role in what they're doing. They stare and think "Okay, but what does this have to do with computers?". The problem is caused by the prior idolization of the machine in the first place.
But machine code and a machine model are not the "really real", with so-called "high-level languages" hovering above them like some illusory phantom that's just a bit of theater put on by 1s and 0s. The language exists in our heads; machines are just instruments for simulating them. And assembly language itself is just another language. Its domain just is, loosely, the machine architecture.
So my view is that introductory classes should beat the machine out of students' heads. There is no computer, no machine. The first few classes in programming should omit the computer and begin with paper and pencil and a small toy language (a pure, lispy language tends to be very good here). They should gain facility in this small language first. The aim should be to make it clear that the language is about talking about the domain, and that it stands on its own, as it were; the computer is to programming as the calculator is to mathematical calculation. Only once this have been achieved are computers permitted, because large programs are impractical to deal with using pen and paper.
This intuition is the foundational difference between a bona fide computer science curriculum and dilettante tinkering.
Pointers are an abstraction that are no more or less real than any other abstraction. They belong to particular languages, but they are not intrinsic to computer science as such as if they were some kind of atomic construct of the field.
> you can't just put a LISP book on top of an x86 chip [...the rest is confusing...]
I'm not talking about what, in today's contingent market and incidental state of the art, is practical. Obviously, if you want to run any program in any language, you have to target some architecture. The point is that the architecture is utterly incidental as far as the language per se is concerned. Lisp is not "less real" because you need to translate it into machine code. The machine code of a particular architecture is only there to simulate Lisp on that architecture. You can in principle have different architectures with their own machine code that can be used to simulate the very same Lisp.
> Computer science is, to be honest, not interesting or useful without a machine to use it on.
Computer science is very interesting without a machine, but how interesting you find it is neither here nor there. The point isn't to do away with machines, or that the machine has no practical importance. The point is to say that the machine is only a tool, and not the subject matter of computer science.
We give a lot of attention to pointers because electronic computers feature random access memory consisting of small, equal-sized cells of bits, keyed by binary numbers.
Start out at "here's your machine code. Let's understand how x86_64 gets started" and work your way up to "now you have the automation to compile Linux and a modern compiler".
Which would certainly have stops most of the way up for things we usually include.
Avoid:
- Z80: at least as a first language. Extended 8080 with completely different syntax, even more messy and unorthogonal than x86!
- RISC-V: an architecture designed by C programmers, pretty much exclusively as a target for compiling C to & omitting anything not necessary for that goal
Python is much easier to introduce someone to because there's no boilerplate and the tooling is very simple. Assembly on x86 machines is a royal PITA to set up, and you also need some kind of debugger to actively inspect the program counter and registers.
When I took Computer Organization & Architecture, they had us play around with MARIE[1] which really made assembly make sense to me. After that, I wrote an 8080 emulator and it made even MORE sense to me.
---
[1] https://marie.js.org/
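Writing such an emulator really does drive the fetch-decode-execute idea home. A hedged sketch of what the core of an 8080-style emulator can look like (only two opcodes handled, state heavily simplified):

    #include <stdint.h>

    typedef struct {
        uint8_t  a;           /* accumulator */
        uint16_t pc;          /* program counter */
        uint8_t  mem[65536];  /* 64 KiB address space */
        int      halted;
    } I8080;

    void step(I8080 *cpu) {
        uint8_t op = cpu->mem[cpu->pc++];   /* fetch */
        switch (op) {                        /* decode + execute */
        case 0x3E:                           /* MVI A, d8: load immediate into A */
            cpu->a = cpu->mem[cpu->pc++];
            break;
        case 0x76:                           /* HLT: stop the CPU */
            cpu->halted = 1;
            break;
        default:                             /* everything else: unimplemented */
            cpu->halted = 1;
            break;
        }
    }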
http://www.z80.info/decoding.htm
For actually programming in machine code this understanding of the internal opcode structure isn't all that useful though, usually - without an assembler at hand - you had a lookup table with all possible assembly instructions on the left side, and the corresponding machine code bytes on the right side.
Programming by typing machine code into a hex editor is possible, but really only recommended as an absolute fallback if there's no assembler at hand - mainly because you have to keep track of all global constant and subroutine entry addresses (i.e. the main thing that an assembler does for you), and you have to leave gaps at strategic locations so that it is possible to patch the code without having to move things around.
I am thinking about showing them what is under the hood, that python itself is just a program. When I learned to program it was the late 70s, and trs-80s and apple-IIs were easy to understand at the machine code level.
I could recapitulate that experience for them, via an emulator, but that again just feels like an abstraction. I want them to have the bare-metal experience. But x86 is such a sprawling, complicated instruction set that it is very intimidating. Of course I'd stick to a simplified subset of the instructions, but even then, it seems like a lot more work to make output running on a PC vs on the old 8-bit machines where you write to a specific location and it shows up on the screen.
Show them a CPU running on Logisim (or the like, such as the newer Digital) and show how when you plug a program into a ROM, it turns into wires lighting up and flipping gates/activating data lines/read registers etc.
Skipping over the bundling of instructions into code blocks, the next logical construct are functions. These have references to code and data in memory; if you want to relocate functions around in memory you introduce the concept of relocations to annotate these references and of a linker to fix them to a particular location.
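For concreteness, this is what such an annotation looks like on a Linux/ELF system: an Elf64_Rela entry (from <elf.h>) records where a reference lives, which symbol it points at, how to patch it, and an addend. A linker walks these to fix references up; a delinker recovers them. The printing helper below is just an illustrative sketch:

    #include <elf.h>
    #include <inttypes.h>
    #include <stdio.h>

    void describe_reloc(const Elf64_Rela *r) {
        printf("patch at offset 0x%" PRIx64 ": symbol %" PRIu64
               ", type %" PRIu64 ", addend %" PRId64 "\n",
               r->r_offset, ELF64_R_SYM(r->r_info),
               ELF64_R_TYPE(r->r_info), r->r_addend);
    }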
But once the linker has done its job, the function is no longer relocatable, you can't move it around... or that is what someone sane might say.
If you can undo the work of the linker, you can extract relocatable functions from executables. These functions can then be reused in new executables, without decompiling them first; after all, if what you've extracted is equivalent to the original relocatable function, you can do the same things with it.
Repeat this process over the entire executable and you've stripped it for parts, ready to be put back together with the linker. Change some parts and you have the ability to modify it as if you were replacing object files, instead of binary patching it in place with all the constraints that come with that.
Machine code is like Lego bricks, it just takes a rather unconventional point of view (and quite a bit of time to perfect the art of delinking) to realize it.
My 23+ year experience in computer science and programming is a zebra of black-or-white moments. For the most time, things are mostly obscure, complicated, dark and daunting. Until suddenly you stumble upon a person who can explain those in simple terms, focus on important bits. You then can put this new knowledge into a well-organized hierarchy in your head and suddenly become wiser and empowered.
"Writing documentation", "talking at conferences", "chatting at a cooler", "writing to a blog" and all the other discussions from twitter to mailing lists - are all about trying to get some ideas and understanding from one head into another, so more people can get elucidated and build further.
And oh my how hard is that. We are lucky to sometimes have enlightenment through great RTFMs.
This non-optimising JIT has been far, far easier than all the scary articles and comments I've seen led me to believe.
I'm already in the middle of making it work on both Aarch64 and RISC-V, a couple weeks in.
Our approach was to model the compiler IR into Assembly macros, and follow the classical UNIX compiler build pipeline, thus even though it wasn't the most performant compiler in the world, we could nonetheless enjoy having our toy compiler generate real executables in the end.
[0] - https://www.cs.princeton.edu/~appel/modern
But for a simple taste, the push to stack function currently looks like this. (All the emit stuff just writes bytes into a mmap that gets executed later.)
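For readers who haven't written one: a hypothetical minimal x86-64 emitter in this style (not the poster's code; helper names and buffer layout are invented, and there's no bounds checking) can be as small as this:

    #include <stddef.h>
    #include <stdint.h>

    /* "push r64" for rax..rdi encodes as the single byte 0x50 + register number. */
    typedef struct {
        uint8_t *buf;   /* code buffer, later made executable */
        size_t   len;
    } CodeBuf;

    void emit_byte(CodeBuf *cb, uint8_t b) {
        cb->buf[cb->len++] = b;
    }

    /* push one of the low eight general-purpose registers (rax=0 ... rdi=7) */
    void emit_push_reg(CodeBuf *cb, int reg) {
        emit_byte(cb, 0x50 + (uint8_t)reg);
    }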
Creating an assembler with Lisp syntax and then using that to bootstrap a Lisp compiler (with Lisp macros instead of standard assembler macros) is one of those otherwise pointless educational projects I’ve been wanting to do for years. One day perhaps.
You already have the assembler with Lisp syntax covered.
Add some macro support on top, and you can start already implementing the upper layer for your Lisp.
Naturally there are already a couple of attempts at that.
I mmap, insert, mark as executable and done. Patchjumping and everything "just works".
I'm not modifying my own process, so there's no hardening issues. Just modifying an anonymous memory map.
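For anyone curious what "mmap, insert, mark as executable" amounts to in code, a sketch on a POSIX system (error handling omitted for brevity):

    #include <stddef.h>
    #include <string.h>
    #include <sys/mman.h>

    typedef int (*JitFn)(void);

    JitFn make_executable(const unsigned char *code, size_t len) {
        /* map anonymous read/write memory */
        void *mem = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        memcpy(mem, code, len);                      /* copy generated bytes in */
        mprotect(mem, len, PROT_READ | PROT_EXEC);   /* mark as executable */
        return (JitFn)mem;                           /* call it like a function */
    }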
How do the OS and the hardware actually draw on the screen? All they have is also just calculator stuff, super basic primitives. You can't even do loops in hardware, or even real branches (hardware always "executes both sides" of a branch at once).
Anyways, if you keep digging long enough you eventually end up finding this XKCD https://xkcd.com/435/ =)
Unless you're talking about quantum hardware, that is very much not true. The whole point of transistors is to choose whether to power one part of a circuit or another.
Plus, even for hardware, the solution to all this is to modularize all the way down. One piece of hardware sets up the right state and powers up another piece of hardware - this type of logic doesn't stop at the OS level. For drawing on the screen, ultimately you reach a piece of hardware that lights up in one of three colors based on that state - but all the way there, it's the same kind of "function calls" (and even more indirection, such as communication protocols) on many levels.
In hardware the equivalent of a ternary is a mux, which can be made from a lot of parallel instances of `out = (sel & a) | (~sel & b)`, one per bit.
Or in other words, both branches must be computed and the correct value is chosen based on the condition.
Similarly, a two-bit adder is not going to have all 4 possible states internally or for some time - as soon as the input voltage is applied to its inputs, its output voltages will correspond to the single result (disregarding the analog signal propagation, which I assume is not what you were talking about).
Similarly, a conditional jump instruction will not be implemented natively by computing both "branches". It will do a single computation to set the instruction pointer to the correct value (either current + 1 or destination). Now sure, speculative execution is a different matter, but that is extra hardware that was added late in the processor design process.
The conditional jump is a great example actually. Typically this would be implemented by having one block compute PC+<instruction size> and another block compute the jump target and then choosing between the two using a mux
Secondly, if we think about the instruction decoding itself, it should become pretty clear that even if the hardware of course always exists and is always going to output something, that's not equivalent to saying it will compute all options. If the next instruction is `add ax, bx`, the hardware is not going to compute both ax + bx and ax - bx and ax & bx and ax | bx and so on, and then choose which result to feed back to ax through a mux. Instead, the ALU is constructed of logic gates which evaluate a single logical expression that assigns each output bit to a logical combination of the input bits and the control signal.
Even if you add pipelining, the basic textbook design will stall the pipeline when it encounters a conditional/computed jump instruction. Even if you add basic speculative execution, you still don't get both branches executed at once, necessarily - assuming you had a single ALU, you'd still get only one branch executed at once, and if the wrong one was predicted, you'll revert and execute the other one once the conditional finishes being computed.
I'm talking on a lower level than the clock cycle or instruction. Let's say circuit A takes X and outputs foo(X), and circuit B takes X and outputs bar(X). We want to build a circuit that computes baz(X, Y) = Y ? foo(X) : bar(X), where X is available before Y is. Instead of letting Y settle, powering up one of the circuits A or B, and sending X into it, we can instead send X to circuits A and B at the same time, then use Y to select whichever output we want.
One other common pattern for implementing conditional logic is to compute a boolean expression in which the control signal is just another input variable. In that model, to compute Y ? foo(X) : bar(X), we actually compute baz(X, Y) whose result is the same. This is very commonly how ALUs work.
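Expressed in C for concreteness (a sketch; sel is assumed to be an all-zeros or all-ones mask):

    /* The control input is just another variable in one combined boolean
       expression, rather than a switch that powers only one of two
       sub-circuits. Per bit: result = sel ? a : b, computed branchlessly. */
    unsigned mux(unsigned sel, unsigned a, unsigned b) {
        return (a & sel) | (b & ~sel);
    }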
And the other very common pattern is to split this in multiple clock cycles and use registers for intermediate results. If you don't have two circuits A and B, only one circuit that can compute either A or B based on a control signal (such as simple processors with a single ALU), this is the only option: you take one clock cycle to put Y in a register, and in the next clock cycle you feed both Y and X into your single circuit which will now compute either A or B.
When all the values were POKEd in, I'd save to tape and execute it with RAND USR 16514.
That memory address is permanently etched in my brain even now.
It wasn't good, bad or scary it was just what I had to do to make the programs I wanted to make.
To me, it looks like some kind of complex tetris game. I guess we could maybe represent a program as such, with pieces for registers, instructions, etc.
Yet, the tooling we have is very terse, and textual.
(I think you mean https://news.ycombinator.com/item?id=44184900)
https://youtu.be/ep7gcyrbutA?si=8HiMqH2mMwsJRNDg
IIRC, debug.com could be used to create programs using machine lang.
The reasoning being that COM files were a plain memory dump starting at offset 100H, thus you would type the code in memory and then dump it.
http://ref.x86asm.net/
Assembly as a game, I loved playing it.
That is fun. But this one truly is enough: Turing Complete. You start with boolean logic gates and progressively work your way up to building your own processor, create your own assembly language, and use it to do things like solve mazes and more. Super duper fun
https://store.steampowered.com/app/1444480/Turing_Complete/
https://github.com/capstone-engine/capstone
This is whacky. So presumably adding with a 24-bit constant whose value isn't representable in this compressed 13-bit format would then expand to two add instructions? Or would it store a constant to a register and then use that? Or is there a different single instruction that would get used?
(Also, typo, 42 << 12 is 172032).
The thought came to me when testing the new Jules agentic coding platform. It occurred to me that we are just in another abstraction cycle, but like those before, it's so hard to see the shift when everything is shifting.
My conclusion? There will be non-AI coders, but they will be as rare as C programmers before we realize it.