The article didn't nail down an exact reason, so here is my guess. The quote from Andy Hertzfeld suggests the limiting factor was memory bandwidth, not memory capacity:
> The most important decision was admitting that the software would never fit into 64K of memory and going with a full 16-bit memory bus, requiring 16 RAM chips instead of 8. The extra memory bandwidth allowed him to double the display resolution, going to dimensions of 512 by 342 instead of 384 by 256
If you look at the specs for the machine, you see that during an active scan line, the video is using exactly half of the available memory bandwidth, with the CPU able to use the other half (during horizontal and vertical blanking periods the CPU can use the entire memory bandwidth)[1]. That dictated the scanline duration.
If the computer had any more scan lines, something would have had to give, as every nanosecond was already accounted for[2]. The refresh rate would have had to be lower, or the blanking periods shorter, or the memory bandwidth higher, or the memory bandwidth divided unevenly between the CPU and video, which was probably harder to implement. I don't know which of those things they would have been able to adjust and which were hard requirements of the hardware they could find, but I'm guessing that they couldn't do 384 scan lines given the memory bandwidth of the RAM chips and the blanking times of the CRT they selected, if they wanted to hit 60Hz.
[1]https://archive.org/details/Guide_to_the_Macintosh_Family_Ha...
[2]https://archive.org/details/Guide_to_the_Macintosh_Family_Ha...
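For the curious, here's a quick back-of-the-envelope check of that budget (a Python sketch; it assumes a dot clock of exactly twice the 7.8336 MHz CPU clock, and uses the 704x370 total raster and 512x342 active area figures that come up further down the thread):
    cpu_clock = 7.8336e6                      # 68000 clock, quoted below
    dot_clock = 2 * cpu_clock                 # assumed: 15.6672 MHz pixel clock
    total_px, total_lines = 704, 370          # active + blanking, per the thread below
    active_px, active_lines = 512, 342

    line_rate = dot_clock / total_px          # ~22.25 kHz horizontal rate
    frame_rate = line_rate / total_lines      # ~60.15 Hz vertical refresh
    print(f"{line_rate/1e3:.2f} kHz lines, {frame_rate:.2f} Hz frames")

    # During the active part of a line the video shifter needs one 16-bit
    # word every 16 pixels, i.e. every 8 CPU clocks. A 68000 bus cycle is
    # 4 clocks, so video takes every other memory slot -- half the
    # bandwidth -- and the CPU gets the interleaved slots, as described above.
    print(active_px // 16, "video fetches per line, one per",
          int(16 * cpu_clock / dot_clock), "CPU clocks")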
A lot of those old machines had clock speeds and video pixel rates that meshed together. On some color machines the system clock was an integer multiple of the standard colorburst frequency.
The Timex Sinclair did all of its computation during the blanking interval which is why it was so dog slow.
implements · 7h ago
There’s an interesting blog post about how a far simpler machine generates its video signal, if people are curious about the signals involved:
http://blog.tynemouthsoftware.co.uk/2023/10/how-the-zx80-gen...
“The CPU then only produces a TV picture when BASIC is waiting for input (or paused). At other times it does not bother to produce a video picture, so the CPU can run the program at full speed.”
krige · 9h ago
The Commodore Amigas had their 68k clock speed differ by region due to the colour carrier frequency difference (more specifically, 2x the colorburst frequency for NTSC, 1.6x for PAL, which resulted in almost, but not quite, the same clock speed).
It's interesting how the differing vertical resolutions between these two (200p/400i vs 256p/512i) also had some secondary effects on software design: it was always easy to tell if a game was made in NTSC regions or with global releases in mind, because the bottom 20% of the screen was black in PAL.
ido · 9h ago
To save the curious a search: the Timex Sinclair is the American variant of the ZX Spectrum.
rwmj · 5h ago
The ZX Spectrum had (primitive) video hardware. The GP commenter means the ZX80 and ZX81 which used the Z80 CPU to generate the display and so really were unable to both "think" and generate the display at the same time. On the ZX81 there were two modes, SLOW mode and FAST mode. In FAST mode the Z80 CPU prioritized computations over generating the display, so the display would go fuzzy grey while programs were running, then would reappear when the program ended or it was waiting for keyboard input.
implements · 4h ago
There was an adventure game that showed “The Mists of Time” for 30 seconds during initial map generation - it was a very creative way to describe the analogue tv noise caused by missing video signal.
ido · 4h ago
You're right! I was thinking of the Timex Sinclair 2068. It was preceded by the 1000 and 1500 (ZX80 & ZX81 respectively as you say).
pragma_x · 16h ago
It's also interesting to look at other architectures at the time to get an idea of how fiendish a problem this is. At this time, Commodore, Nintendo, and some others, had dedicated silicon for video rendering. This frees the CPU from having to generate a video signal directly, using a fraction of those cycles to talk to the video subsystem instead. The major drawback with a video chip of some kind is of course cost (custom fabrication, part count), which clearly the Macintosh team was trying to keep as low as possible.
jnaina · 13h ago
Both of the key 8-bit contenders of yore, the Atari 8-bit series and the Commodore 64, had custom graphics chips (ANTIC and VIC-II) that “stole” cycles from the 6502 (or 6510 in the case of the C64) when they needed to access memory.
I remember writing CPU-intensive code on the Atari and using video blanking to speed it up.
MBCook · 13h ago
Plus those weren’t raw bitmaps but tile based to help keep memory and bandwidth costs down.
nradov · 12h ago
And yet despite the lower parts count the Macintosh was more expensive than competing products from Commodore and Atari that had dedicated silicon for video rendering. I guess Apple must have had huge gross margins on hardware sales given how little was in the box.
IAmBroom · 2h ago
Plus ça change...
stefan_ · 16h ago
Displays are still bandwidth killers today; we kept scaling them up along with everything else. Today you might have a 4k 30bpp 144 Hz display, and just keeping that fed takes around 36 Gbit/s purely for scanout, never mind compositing it.
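For scale, the raw pixel rate alone works out like this (a quick sketch; blanking intervals and link encoding add even more on the actual cable):
    width, height, bpp = 3840, 2160, 30
    for hz in (60, 120, 144):
        gbit = width * height * bpp * hz / 1e9
        print(f"{hz:3d} Hz: {gbit:4.1f} Gbit/s raw scanout")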
danudey · 15h ago
I have a 4k 60Hz monitor connected to my laptop over one USB-C cable for data and power, but because of bandwidth limitations my options are 4k30 and USB 3.x support or 4k60 and USB 2.0.
I love the monitor, it's sharp and clear and almost kind of HDR a lot of the time, but the fact that it has a bunch of USB 3.0 ports that only get USB 2.0 speeds because I don't want choppy 30Hz gaming is just... weird.
wmf · 14h ago
Everything is amazing and nobody's happy.
Jaygles · 13h ago
Identifying areas to improve is how we make progress
kragen · 10h ago
Being more judicious about who you wank in front of, for example. (Just in case the reference isn't obvious, I'm not talking about either wmf or Jaygles here.)
tveyben · 9h ago
That's the exact reason I ditched my dock and connected the monitor directly to my laptop.
30 Hz is way too low, I need 60 (or maybe 50 would have been enough - I’m in the PAL part of the world ;-)
wkat4242 · 12h ago
Should have gone for thunderbolt :)
monkeyelite · 11h ago
4k jumped the gun. It’s just too many pixels and too many cycles. And unfortunately it was introduced when pixel shaders started doing more work.
Consequently almost nothing actually renders at 4k. It’s all upscaling - or even worse your display is wired to double up on inputs.
Once we can comfortably get 60 FPS, 1080p, 4x msaa, no upscaling, then let’s revisit this 4k idea.
account42 · 4h ago
WTF are you talking about, 60 FPS for 4K isn't even that challenging for reasonably optimized applications. Just requires something better than a bargain bin GPU. And 120+ FPS is already the new standard for displays.
monkeyelite · 1h ago
I’m telling you that even if those are the numbers on paper, the software and hardware are using tricks to achieve it which lower the overall quality.
So yes you get 4k pixels lit up on your display, but is it actually a better image?
And yes there may be high end hardware which can handle it, but the sw still made design choices for everyone else.
There are also image algorithms which are not as amenable to GPUs and which are now impossible to compute effectively.
bobmcnamara · 15h ago
We see this in embedded systems all the time too.
It doesn't help if your crossbar memory interconnect only has static priorities.
hulitu · 10h ago
And marketing said, when LCDs were pushing CRT out of the market, that you don't need to send the whole image to change a pixel on an LCD, you can change only that pixel.
p_l · 6h ago
except DVI is essentially VGA without the digital-to-analog part, and original HDMI is DVI with encryption, some predefined "must have" timings, and extra data stuffed into the empty spaces of a signal designed for blasting at a CRT.
I think partial refresh capability only came with some optional extensions to DisplayPort.
account42 · 4h ago
DVI is just a connector - it can be either analog or digital or support both.
rasz · 2h ago
I'm not even sure if partial refresh is a thing outside of e-Paper displays. The best we can do on DP is Variable Refresh going all the way down to Panel Self-Refresh.
nothercastle · 13h ago
Why did they need 60hz? Why not 50 like Europe? Is there some massive advantage to syncing with the ac frequency of the local power grid?
kragen · 11h ago
Conventional wisdom a few years after the Macintosh was that 50Hz was annoyingly flickery. Obviously this depends on your phosphors. Maybe it was already conventional wisdom at the time?
I feel like the extra 16% of screen real estate would have been worth it.
If you’re used to seeing 60Hz everywhere like Americans are, 50Hz stands out like a sore thumb.
But mostly I suspect it’s just far easier.
ajross · 17h ago
Exactly. Like the Apple ][, the original Mac framebuffer was set up with alternating accesses, relying on the framebuffer reads to manage DRAM refresh.
It looks like DRAM was set up on a 6-CPU-cycle period, as 512 bits (32 16-bit bus accesses) x 342 lines x 60 Hz x 6 cycles x 2 gives 7.87968 MHz, which is just slightly faster than the nominal 7.83 MHz, the remaining .6% presumably being spent during vblank.
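A quick sketch of that arithmetic, plus the 347-line variant raised in the reply below:
    accesses_per_line = 512 // 16               # 32 sixteen-bit video fetches
    clock = accesses_per_line * 342 * 60 * 6 * 2
    print(clock)                                # 7879680 -> 7.87968 MHz

    # Solving the same relation for line count at a flat 8 MHz:
    print(8_000_000 // (2 * 6 * 60 * 32))       # 347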
meatmanek · 15h ago
But why 342 and tune the clock speed down instead of keeping the clock speed at 8MHz and having floor(8e6/2/6/60/32) = 347 lines?
You could reduce the gain on the horizontal deflection drive coil by 2% to get back to 3:2. In fact, I doubt that it was precise to within 2%.
I suspect kmill is right: https://news.ycombinator.com/item?id=44110611 -- 512x342 is very close to 3:2 aspect ratio, whereas 347 would give you an awkward 1.476:1 aspect ratio.
ajross · 15h ago
That doesn't sound right. The tube the Mac was displaying on was much closer to a TV-style 4:3 ratio anyway; there were significant blank spaces at the top and bottom.
If I was placing bets, it was another hardware limitation. Maybe 342 put them right at some particular DRAM timing limit for the chips they were signing contracts for. Or maybe more likely, the ~21.5 kHz scan rate was a hard limit from the tube supplier (that was much faster than TVs could do) and they had a firm 60 Hz requirement from Jobs or whoever.
pezezin · 10h ago
You guys are ignoring the horizontal and vertical blanking periods, which were usually 25~35% of the active horizontal period and 8~10% of the active vertical period. I could find these two links that explain that the actual signal sent to the CRT was around 704x370 pixels:
https://nerdhut.de/2016/06/26/macintosh-classic-crt-1/
https://bobparadiso.com/wp-content/uploads/2014/09/timing.pn...
The title is incorrect, because b&w Macs have 512×342 resolution, not 512x324.
It wouldn't've been too crazy had Apple gone with 64K x 4 chips, so they'd've just needed four of them to get 128 KB at a full 16 bits wide.
512x342 was 16.7% of 128 KB of memory, as opposed to 18.75% with 512x384. Not much of a difference. But having square pixels is nice.
jerbear4328 · 18h ago
It looks like it's just the HN submitted title which is wrong (currently "Why the Original Macintosh Had a Screen Resolution of 512×324"). The article's title is "Why the Original Macintosh Had a Screen Resolution of 512×342", and "324" doesn't appear anywhere on the page.
Really, John? You really had to make me parse that word?
webstrand · 18h ago
It's a great word, I use it all the time.
kragen · 10h ago
You shouldn't've tho. Who'd've complained if you hadn't've?
kevin_thibedeau · 17h ago
It usually isn't transcribed with Klingon orthography.
90s_dev · 15h ago
I bet you also work for the IRS don't you
brookst · 13h ago
Your version of shouldn’t’ve’s punctuation isn’t like that?
kstrauser · 13h ago
Who’d’ve thought?
JKCalhoun · 17h ago
Worth adding? The (almost [1]) omnipresent menu bar ate 20 pixels of vertical space as well — so you could say the application had 322 usable rows.
[1] To be sure, many games hide the menu bar.
bane · 12h ago
The answer is something that's harder and harder to do these days with all the layers of abstraction -- set a performance target and use arithmetic to arrive at specifications you can actually hit while still achieving your performance goal.
It's a bit of work, but I suspect you can arithmetic your way through the problem. Supposing they wanted 60 Hz on the display, a framebuffer for a 1-bit display at 512x384 needs 196,608 bits / 24,576 bytes / 24 kbytes [below].
The Mac 128k shipped with a Motorola 68k at 7.8336 MHz, giving it 130,560 cycles per frame @ 60 fps.
IIRC the word length of the 68k is 32 bits, so imagining a scenario where the screen was plotted in words, at something like 20 cycles per fetch [1] you can get about 6528 fetches per frame. At 32 bits a fetch, you need 6144 or so fetches from memory to fill the screen. You need a moment for horizontal refresh, so you lose time waiting for that; thus 6528-6144 = (drumroll) 384, the number of horizontal lines on the display.
I'm obviously hitting the wavetops here, and missing lots of details. But my point is that it's calculable with enough information, which is how engineers of yore used to spec things out.
1 - https://wiki.neogeodev.org/index.php?title=68k_instructions_...
below - why bits? The original Mac used a 1-bit display, meaning each pixel used 1 bit to set it as either on or off. Because it didn't need 3 subpixels to produce color, the display was tighter and sharper than color displays, and even at the lower resolution appeared somewhat paperlike. The article is correct that the DPI was around 72. Another way to think about it: what the Mac was targeting was pre-press desktop publishing. Many printing houses could print at around 150-200 lines per inch. Houses with very good equipment could hit 300 or more. Different measures, but the Mac, being positioned as a WYSIWYG tool, did a good job of approximating the analog printing equipment of the time. (source: grew up in a family printing business)
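Spelling that budget out (a quick sketch, using the same assumed 20 cycles per 32-bit fetch):
    fb_bits = 512 * 384                         # 196,608 bits = 24,576 bytes = 24 KB
    cycles_per_frame = 7.8336e6 / 60            # 130,560
    fetches_available = cycles_per_frame / 20   # ~6,528
    fetches_needed = fb_bits / 32               # 6,144
    print(int(fetches_available - fetches_needed))  # 384 fetches of slack per frame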
p_l · 6h ago
The Motorola 68000 used had 16 data lines and 24 address lines, so it took at least two bus cycles just to transfer a full CPU word (disregarding timings on address latches etc).
AFAIK some of the graphics code used fancy multi-register copies to increase cycle efficiency.
As for the screen, IIRC making it easy to correlate "what's on screen" with "what's on paper" was a major part of what drove the Mac to become nearly synonymous with DTP for years.
wmf · 11h ago
In typography there are 72 points per inch so they made 1 pixel = 1 point.
simne · 17h ago
Impressed to see how many people read the whole article and didn't just seize on one phrase: "We don’t need a lot of the things that other personal computers have, so let’s optimize a few areas and make sure the software is designed around them".
The Mac was not a cheap machine, and Apple at the time was not rich enough to make unnecessary things - they really needed to make a hit this time, and they succeeded.
And yes, it is true they were limited by bandwidth; it is also true they were limited by the speed of the semi-32-bit CPU.
But the Mac was a real step ahead at the moment, and had significant room to grow as new technology arrived.
That is what I think the PCs of that time lacked.
badc0ffee · 10h ago
It doesn't say exactly why 512x342 was chosen. But I'm more interested in why it was changed to 512x384 on later Macs. Is it just to fill the full 4:3 screen?
Beyond that, this article really wants to tell you how amazing that resolution was in 1984. Never mind that you could get an IBM XT clone with "budget" 720x348 monochrome Hercules graphics that year and earlier.
fredoralive · 9h ago
The 512x384 models are Macintosh LC adjacent, so the original LC monitor (the LC itself can do 640x480), or the Colour Classics. AFAIK it was partly in order to make the LC work better with the Apple IIe card (although the IIe software uses a 560x384 mode).
A Hercules card, whilst nice, does suffer from the same non-square pixels issue as the Lisa, so it's not as nice for creating a GUI.
rasz · 1h ago
Both MDA and Hercules were 50 Hz. Real mid eighties king of cheap crisp displays would be 12 inch 640x400@71Hz Atari SM124 monitor. You could buy Atari ST + SM124 + Atari SLM804 laser printer + Calamus DTP package for the price of just the Apple laser printer alone :)
hyperhello · 18h ago
The article really didn’t explain why they picked that number.
kmill · 18h ago
I don't know, but I can do some numerology: a 3:2 aspect ratio that's 512 pixels wide would need 341 and a third lines, so round up and you get 512 by 342.
The later 384 number corresponds to an exact 4:3 aspect ratio.
bryanlarsen · 18h ago
For efficient graphics routines on a 32 bit machine, it's important that the resolution in the scan line direction (aka horizontal for normally mounted CRTs) be a multiple of 32, preferably one that's also a power of 2.
The article mentions the desire for square pixels. So presumably they chose the horizontal resolution first and then chose the vertical resolution that gave them square pixels for a 512 pixel horizontal resolution.
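To make that concrete, here's an illustrative sketch in Python (the helper and the bytearray framebuffer are mine, not Apple's; what's real is that the classic Mac's 512-pixel rows are exactly 64 bytes, with the high bit of each byte being the leftmost pixel):
    ROWBYTES = 512 // 8                     # 64 bytes per scan line

    def set_pixel(fb: bytearray, x: int, y: int) -> None:
        # Row offset is a cheap shift (y * 64); column is x >> 3 plus a bit mask.
        fb[y * ROWBYTES + (x >> 3)] |= 0x80 >> (x & 7)

    fb = bytearray(ROWBYTES * 342)          # one 1-bit 512x342 frame
    set_pixel(fb, 511, 341)                 # bottom-right pixel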
nssnsjsjsjs · 18h ago
It was 32bit?!
mayoff · 18h ago
The data and address registers of the 68000 were 32 bits wide.
I remember the "enable 32-bit addressing" part (but it's not pictured..)
tom_ · 18h ago
The 68000 is 16 bit internally, and can access memory only 16 bits at a time, but the instruction set was designed with future iterations in mind, and most instructions can operate on 32 bit quantities - with a performance penalty. (Because in essence it has to do the work in 2 stages.)
Whether this is enough to make it count as actually 32 bits is one for the philosophers.
Externally it had 16 bits for databus and 24 bits for addresses. That is why we later got the 32 bit clean ROMs as Apple used the upper unused 8 address bits for flags.
jdswain · 10h ago
It has 32-bit registers, but it has a 16-bit ALU, so it's a matter of opinion whether that makes it a 16 or 32-bit processor. I'd go with 32-bit, in that its instruction set gives the impression to the programmer that they are working with a 32-bit system.
And for more evidence, the Z80 is referred to as an 8-bit processor but has a 4-bit ALU.
tom_ · 15h ago
The cycle counts really don't back this theory up. And indeed, from that link:
> Internally, it uses a 16-bit data arithmetic logic unit (ALU) and two more 16-bit ALUs used mostly for addresses,[4] and has a 16-bit external data bus.
monocasa · 14h ago
Like a lot of things, the taxonomy kind of breaks down at the edges and arguments can be made either way.
I will throw out there though that ALU width and buses are generally seen as orthogonal to 'bitness' of a processor, and more an implementation detail. The Z80 had a 4bit ALU, but is considered an 8bit CPU. The PDP-8/s and SERV have single bit ALUs, but are considered 12 and 32 bits respectively. The 8088 is considered a 16bit CPU despite having both an 8bit ALU and bus.
'Bitness' is generally defined more as 'what is the width of whatever is the closest thing to a GPR'.
kstrauser · 13h ago
Seconded. If you were an ASM programmer, you’d have no idea it had a 16 bit ALU. All the ops were 32 bit, regardless of what the underlying silicon looked like.
Most 32 bit operations are slower than 16 bit operations because the external data bus is only 16 bits and most operations use the external data bus. But simple internal ops are faster at 32 bits, so that seems to indicate the 68000 is 32 bit internally.
ack_complete · 10h ago
There's a subtlety -- word adds are only 8 cycles when adding to an address register. They're 4 cycles to a data register. This is because the 68000 always does address computations in 32-bit, and 16-bit operands are sign extended to 32-bit when adding to an address register. A word add to a data register, on the other hand, only produces a 16-bit result. This is reflected by the canonical instruction being ADDA.W instead of ADD.W for address register destinations.
tom_ · 14h ago
Interesting, thanks. I'd missed that particular detail, possibly because I used to do this stupid shit on the Atari ST and its instructions were quantized to the nearest nop (and so 6 cycles wasn't really a thing). Address register operations are always longs, and clearly the sign extension imposes some overhead. Given that pretty much every other long operation is slower, I imagine this is a case of getting lucky with the timing of the 16-bit internal operations.
ADDQ and ADDX are better instructions to look at, as are any with a Dn,Dn addressing mode. The long and word cases are the same number of instruction bytes, but the long case is still slower.
(Register-to-register moves are the same regardless of width, so presumably it has a 32 bit path for this. That's nice. But not as nice as it would be if it had a 32 bit path for everything. Which it really looks like it doesn't. This CPU has registers, but that can't save it.)
p_l · 6h ago
It started out as a 16-bit enhancement to a previous product, and evolved into being a 32-bit architecture over development time.
The separation of Data and Address registers is also a result of how it evolved over time, AFAIK, ultimately because it allowed making the CPU cheaper/easier to build. Another element is that the 68000 at least has two layers of microcode - the first microcode engine generates instructions interpreted by a second microcode engine, which finally actually drives the execution units.
rsynnott · 17h ago
The 386SX was a similar story, and is normally thought of as basically 32bit. I think the perception difference may be down to timing; the 386SX came out _after_ the DX (with 32 bit data bus), so was thought of as a cheap 32bit chip, vs the 68000 which started off life with a 16bit data bus.
(Fun fact: there was also the 68008, which was a 68k with an 8 bit bus!)
EndsOfnversion · 17h ago
From memory, the primary advantage of the 386SX was the ability to use a cheaper 16-bit motherboard layout and components. The lack of a 32bit bus mattered less when most software was written with 286 compatibility in mind, and the ISA bus was only 16-bits wide, which limited the utility of the 32-bit bus for fast graphics transfers.
The reduced 24-bit address bus was never a significant bottleneck during its commercial lifetime, as little consumer software at the time would require more than 4mb of RAM, and by the time it did the 486SX (32bit busses with no maths coprocessor) was the new value champion.
badc0ffee · 10h ago
The 286/386SX/486SLC could address 16MB, the full 24 bit address space.
> the ISA bus was only 16-bits wide, which limited the utility of the 32-bit bus for fast graphics transfers.
Not only that, it was 8MHz to match the speed of the fastest IBM AT. VLB on a 486/33 or 66 ran at 33 MHz and was a godsend, 8x the bandwidth of 16-bit ISA.
sitkack · 17h ago
The 68008 saw a lot of use in embedded and could easily create a whole microcomputer on a breadboard.
UncleSlacky · 17h ago
And was also used in the Sinclair QL.
edwinjm · 18h ago
The article says:
In short, there’s no easy answer to explain why early compact Macs ran at a screen resolution of 512×342. Rather, Apple was doing what it does best: designing a product with the right trade-offs for performance, ease of use, and cost.
detourdog · 17h ago
It was noticeably better than anything else I had ever seen.
phendrenad2 · 15h ago
I always assumed it was a compromise between memory usage, refresh speed, and the GUI that they wanted. Don't forget that the Macintosh was preceded by the Lisa (800x364) and the IIGS (640x200), so they probably had a good sense for what was comfortable given a certain resolution.
dragonwriter · 13h ago
> Don't forget that the Macintosh was preceded by the Lisa (800x364) and the IIGS (640x200),
Lisa was January 1983
Macintosh was January 1984
Apple IIgs was September 1986
The Lisa was also about twice as expensive as the Macintosh which is why it failed hard. So the price limited the hardware and that caused this display bandwidth constraint.
The 1st edition of macworld, notably the first page is an advert for microsoft's products, multiplan spreadsheet, word, etc. https://archive.org/details/MacWorld_8404_April_1984_premier...
The original floppy used on the mac was a single-sided 400KB disk. I imagine that was another set of trade-offs. https://folklore.org/Disk_Swappers_Elbow.html
Originally they planned on using custom 870K drives, but they were too unreliable so at the last minute they switched to the Sony 400K 3.5" disks
https://en.wikipedia.org/wiki/Apple_FileWare
https://folklore.org/Hide_Under_This_Desk.html
I remember in the early '80s using a computer (a Xerox Star, perhaps?) that used the CPU to generate the display. To speed up CPU-intensive tasks, you could blank the screen.
p_l · 6h ago
Alto had its entire display control in microcode, IIRC.
Out of similar tricks, the Symbolics 3600 (at least the first model) had major portions of the disk driver implemented as one of the tasks in microcode (yes, the microcode was a multi-tasking system with preemption). I don't know how much of the MFM wrangling was going on there, but ultimately it meant that reading and writing a page from/to disk was done by means of a single high-level instruction.
kazinator · 10h ago
> but given the name of this website, it was pretty embarrassing.
Why, the name of the website is 512pixels.net not 342pixels.net; he nailed the 512 dimension. :)
Reason077 · 17h ago
> “To minimize CRT flicker, Apple worked to achieve a vertical refresh rate of 60 Hz”
… a limitation that many Macs, and even some iPhones, are still stuck with over 40 years later!
perching_aix · 17h ago
It's always surprising for me to see people regard 60 Hz CRT as "flicker-free", or "minimal flicker", etc. Whenever I saw a CRT running at 60 Hz, I'd immediately be able to tell. Always used at minimum 75 Hz but preferably 85 Hz at home (early 2000s, Windows).
bluGill · 17h ago
Have you ever seen something running at 30 Hz? Or even 15? The difference in flicker between 30 and 60 is much, much larger than the difference between 60 and 120! Yeah, 60 isn't flicker free - any finite number is not (there are probably quantum limits) - but realistically you reach a point where you can't really tell. For most purposes 60Hz is close enough, though you can still tell.
perching_aix · 17h ago
I don't remember frankly. For what it's worth, TV sets would always be 50 Hz here (PAL) (unless they did some tomfoolery I'm not aware of and ran at 100 Hz "in secret" or something) and evidently I could watch those on end without too many holdups for years and years, so clearly it wasn't a dealbreaker. But on monitors, yeah, I just wouldn't tolerate it, whereas 85 Hz felt perfect (no discernible flicker for me that I'd recall).
nyanpasu64 · 17h ago
- I hear that some later digital PAL TVs stored an image in a framebuffer and scanned it out twice at 100 Hz, which retro gamers today avoid because it increases latency relative to direct scanout.
- I've heard mixed reports over whether CRT monitors had faster-decaying phosphors than televisions. Maybe part of it is a computer has a white image, which causes more noticeable flicker than a dark background with white text (or darker TV scenes).
kragen · 10h ago
I took film photos of my family's color TV in the 90s with a fast shutter speed. About 20% of the screen was fading at any given time with the (NTSC) 60Hz field rate, the rest being black, so to get a screen that was physically flicker-free with those phosphors, you'd have to refresh it at somewhere around 500Hz. I doubt color CRT monitors had faster-decaying phosphors than my color TV.
nyanpasu64 · 8h ago
I looked up my brightness measurements of a Gateway VX720 VGA monitor (like Diamond Pro 710), and it seems the blue and green phosphors decay to 50% brightness in under 30 microseconds while red stays bright for 300 microseconds. I didn't measure how long it took for them to go to near-black levels. Sadly I never took measurements of my Trinitron TV when I still had it. All in all the results are inconclusive.
wkat4242 · 12h ago
My old amber and green CRTs definitely had slower phosphor than any TV. They couldn't show a normal TV frame rate without huge ghosting. They also didn't have noticeable flicker though even in black on white mode (some programs could do that and my monitors also had an inverse video button)
msgodel · 16h ago
That's interesting. 60hz TVs always gave me headaches but my 75 hz computer monitor didn't.
I think it was actually the interlacing and not the refresh rate that did it.
rasz · 2h ago
> TV sets would always be 50 Hz here (PAL)
but only half the screen at a time so in effect every other line was flickering at 25Hz
pezezin · 10h ago
I have recently been playing with CRTs again, and something that I have noticed is that for fast-paced games running at 60 or 70 Hz* I don't notice the flicker much, but for text anything less than 85 Hz is headache inducing. Luckily the monitor I got can do 1024x768 at 100 Hz :)
* The original VGA and thus most MS-DOS games ran at 70 Hz.
p_l · 6h ago
I remember when I got my first computer for myself, instead of sharing with others, it was "obvious requirement" that the screen runs at least 72Hz, preferably higher. Which was why 15" CRT had to run at 800x600.
Later on, and with graphic card that had more than 2MB of RAM, I remember experimenting a lot with modelines to pull higher refresh rates and higher resolution on the 17" CRT I inherited when my father switched to a laptop :)
kragen · 10h ago
On a green ZnS:Cu phosphor, even 20Hz is minimal flicker.
wkat4242 · 12h ago
Me too. I'm also really sensitive to PWM. I tried using 85Hz on my VGA monitor but the higher signal bandwidth and cheap hardware made the video noticeably blurrier. 70 wasn't a great compromise either.
Since TFTs came along I was bothered a lot less by it because of the lack of flicker (though some cheap 4-bit TN LCDs still had it with some colours).
npunt · 15h ago
Monochrome CRT phosphors like P4 (zinc sulfide w silver) have longer persistence than ones used in color CRTs, so flicker is less noticeable.
Suppafly · 17h ago
>Whenever I saw a CRT running at 60 Hz, I'd be immediately be able to tell. Always used at minimum 75 Hz but preferably 85 Hz at home (early 2000s, Windows).
Same, I remember installing some program that would let you quickly change the display settings on basically every computer I ever interacted with. It was especially bad if the crt was in a room with fluorescent lighting.
bluGill · 17h ago
If your lighting and display flicker at a simple mathematical ratio you will notice, unless the frequency is extremely high. 1:1 is most likely because it is easy to sync lights and the CRT to the AC line frequency, which is 60Hz in the US (50Hz in Europe). 1:2 (which used to be somewhat common) or 4:5 ratios would also cause issues.
Though now that I think of it, the CRT should be syncing with the signal and there is no reason that sync needs to be related to the AC line, but it does anyway (all the computers I know of generate their own sync from a crystal, I have no idea where TV stations get their sync but I doubt AC line frequency).
meatmanek · 16h ago
Wikipedia for NTSC alludes to a couple reasons why you'd want your refresh rate to be based on your power line frequency:
> Matching the field refresh rate to the power source avoided intermodulation (also called beating), which produces rolling bars on the screen. Synchronization of the refresh rate to the power incidentally helped kinescope cameras record early live television broadcasts, as it was very simple to synchronize a film camera to capture one frame of video on each film frame by using the alternating current frequency to set the speed of the synchronous AC motor-drive camera.
(I suspect shows that were pre-recorded and telecined for broadcast would've also been filmed at 30fps using a synchronous AC motor.)
> In early TV systems, a master voltage-controlled oscillator was run at twice the horizontal line frequency, and this frequency was divided down by the number of lines used (in this case 525) to give the field frequency (60 Hz in this case). This frequency was then compared with the 60 Hz power-line frequency and any discrepancy corrected by adjusting the frequency of the master oscillator.
I think later TVs would've just synchronized to the received signal.
https://en.wikipedia.org/wiki/NTSC#Resolution_and_refresh_ra...
But there is less need because LCDs do not flicker (except some designed for videogames that strobe the backlight for some strange reason IIUC).
I know I found the flicker of CRTs annoying even at 60 Hz.
kragen · 10h ago
Strobing the backlight seems like it would allow you to not illuminate the new frame of video until the liquid crystals have finished rotating, so you only have to contend with the persistence of vision on your retina instead of additionally the persistence of the liquid crystals.
hollerith · 2h ago
My "for some strange reason" was the wrong choice of words. I don't wish to imply that I disapprove of the reason that gaming monitors do it, just that I haven't done the work to try to understand.
johnb231 · 12h ago
MacBook Pro is 120 Hz
russellbeattie · 18h ago
The answer is probably more akin to: "As small of a resolution as they could make it without Steve bitching at them about it."
codr7 · 16h ago
Apple could use some Steve drama, they seem to be moving backwards lately.
perching_aix · 17h ago
Regardless of whether we go with 512x324, 512x342, or 512x384, the claims of exactly 72 PPI and exactly 9" of diagonal size are not simultaneously possible.
Extremely nitpicky thing I know, but this kinda stuff really bugs me, could somebody please clarify what was the real size (and/or PPI) here?
For reference:
512x324 @ 72 PPI = 8.42" (or 214 mm) (rounded)
512x342 @ 72 PPI = 8.55" (or 217 mm) (rounded)
512x384 @ 72 PPI = 8.89" (or 226 mm) (rounded)
The first two don't even yield an integer result for the number of diagonal pixels, let alone yield an integer multiple of 72. Or would there be bars around the screen, or how would this work?
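For reference, those figures are just the pixel diagonal divided by 72 (a quick check):
    from math import hypot
    for w, h in ((512, 324), (512, 342), (512, 384)):
        d = hypot(w, h)                               # diagonal in pixels
        print(f"{w}x{h}: {d:6.1f} px -> {d/72:.2f} in ({d/72*25.4:.0f} mm)")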
pavlov · 17h ago
For CRTs, the diagonal measurement was of the physical tube. The actual viewable area was smaller. Part of the tube’s edges were covered by plastic, and there was always also some margin that wasn’t used for picture so it was just black.
It was a 9” tube with 3:2 aspect ratio. Your calculation of a 8.5” image at 72 dpi sounds right.
Suppafly · 17h ago
>For CRTs, the diagonal measurement was of the physical tube. The actual viewable area was smaller.
That's also why TVs and monitors of that era always seemed smaller than advertised. I remember having to explain that to a lot of people.
msephton · 15h ago
You're quite right, the screen would be centred with margin/border around it.
Whilst the CRT is 9", according to period repair guides the screen should be adjusted so that the visible image was 7.11" x 4.75", pretty much exactly 1.5:1. This meant 72dpi, which was to match PostScript point size for print output and WYSIWYG.
So it's your 8.55" diagonal.
Some classic Macintosh users today are unaware of this screen size reasoning, or don't agree with it, and stretch the screen to fill the whole CRT. Yikes!
BTW, I posted pretty much the same info earlier today at https://news.ycombinator.com/item?id=44105531 — what synchronicity!
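Those repair-guide numbers do land right on 72 dpi (a quick check):
    print(512 / 7.11, 342 / 4.75)   # ~72.01 and 72.0 pixels per inch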
A square that's one thousand units by one thousand units doesn't give a rational number, much less an integer one, for the diagonal.
A 9" CRT would never be precisely 9", because beam trace width and height are analog, plus there's overscan, so a 9" screen would simply give something pretty close to 9".
kragen · 10h ago
You're right, it's 8.55 inches at 512×342. There were black bars at the top and bottom.