When I was on the ColorSync team at Apple we, the engineers, got an invite to his place-in-the-woods one day.
I knew who he was at the time, but for some reason I felt I was more or less beholden to conversing only about color-related issues and how they applied to a computer workflow. Having retired, I have been kicking myself for some time not just chatting with him about ... whatever.
He was, at the time I met him, very into a kind of digital photography. My recollection was that he had a high-end drum scanner and was in fact scanning film negatives (medium format camera?) and then going with a digital workflow from that point on. I remember he was excited about the way that "darks" could be captured (with the scanner?). A straight analog workflow would, according to him, cause the darks to roll off (guessing the film was not the culprit then, perhaps the analog printing process).
He excitedly showed us on his computer photos he took along the Pacific ocean of large rock outcroppings against the ocean — pointing out the detail that you could see in the shadow of the rocks. He was putting together a coffee table book of his photos at the time.
I have to say that I mused at the time about a wealthy, retired, engineer who throws money at high end photo gear and suddenly thinks they're a photographer. I think I was weighing his "technical" approach to photography vs. a strictly artistic one. Although, having learned more about Ansel Adams's technical chops, perhaps for the best photographers there is overlap.
rezmason · 58m ago
> I have been kicking myself for some time not just chatting with him about ... whatever.
Maybe I should show some initiative! See, for a little while now I've wanted to just chat with you about whatever.
At this moment I'm working on a little research project about the advent of color on the Macintosh, specifically the color picker. Would you be interested in a casual convo that touches on that? If so, I can create a BlueSky account and reach out to you over there. :)
https://merveilles.town/deck/@rezmason/114586460712518867
There probably still isn't a good way to get that kind of dynamic range entirely in the digital domain. Oh, I'm sure the shortfall today is smaller, say maybe four or five stops versus probably eight or twelve back then. Nonetheless, I've done enough work in monochrome to recognize an occasional need to work around the same limitations he was, even though very few of my subjects are as demanding.
JKCalhoun · 4h ago
I wish a good monochrome digital camera didn't cost a small fortune. And I'm too scared to try to remove the Bayer grid from a "color" CCD.
Seems that, without the color/Bayer thing, you could get an extra stop or two for low-light.
I had a crazy notion to make a camera around an astronomical CCD (often monochrome) but they're not cheap either — at least one with a good pixel count.
formerly_proven · 1h ago
You would also remove the microlenses, which increase sensitivity.
throwanem · 3h ago
I've replaced my D5300's viewfinder focusing screen a couple of times, back before I outgrew the need for focusing aids. I also wouldn't try debayering its sensor! But that sort of thing is what cheap beater bodies off your friendly local camera store's used counter, or eBay, were made for. Pixel count isn't everything, and how better to find out whether the depth of your interest would reward serious investment, than to see whether and how soon it outgrows unserious? Indeed, my own entire interest in photography has developed just so, out of a simple annoyance at having begun to discover what a 2016 phone camera couldn't do.
JKCalhoun · 2h ago
I like that idea. I should start watching eBay.
lanyard-textile · 3h ago
:) Color in the computer is a good “whatever” topic.
Sometimes it’s just nice to talk about the progress of humanity. Nothing better than being a part of it, the gears that make the world turn.
JKCalhoun · 2h ago
Ha ha, but it's also "talking shop". I'm sure Bill preferred it to talking about his QuickDraw days.
Aloha · 2h ago
You always lose something when doing optical printing - you can often gain things too, but it's not 1:1.
I adore this hybrid workflow, because I can pick how the photo will look - color palette, grain, whatever - by picking my film, then I can use digital to fix (most if not all of) the inherent limitations in analog film.
Sadly, film is too much of a pain today. Photography has long been about composition for me, not cameras or process - I liked film because I got a consistent result, but I can use digital too, and I do today.
hugs · 1h ago
"When art critics get together they talk about form and structure and meaning. When artists get together they talk about where you can buy cheap turpentine."
gxs · 6h ago
> I have to say that I mused at the time about a wealthy, retired, engineer who throws money at high end photo gear and suddenly thinks they're a photographer
I think this says more about you than it does about him
dang · 5h ago
Please don't cross into personal attack. The cost outweighs any benefit.
https://news.ycombinator.com/newsguidelines.html
I was about to argue but then I saw this part
> The cost outweighs any benefit.
And this is absolutely true - there is a benefit but it doesn't mean it's worth it
Either way my bad, I should have elaborated and been more gentle instead of just that quip
viccis · 5h ago
It's true though. This effect is what keeps companies like PRS in business.
bombcar · 5h ago
There’s a whole industry of prosumer stuff in … well, many industries.
Power tools definitely have it!
JKCalhoun · 5h ago
I don't deny that. That's probably true about a lot of observations.
spiralcoaster · 5h ago
This is absolutely true and I don't understand why you're being downvoted. Especially in the context of this man just recently dying, there's someone throwing in their elitist opinion about photographers and how photography SHOULD be done, and apparently Bill was doing it wrong.
JKCalhoun · 5h ago
Well, I certainly didn't mean for it to come across that way. I wasn't saying this was the case with Bill. To be clear, I saw nothing bad about Bill's photos. (Also I'm not really versed enough in professional photography to have a valid opinion even if I didn't like them and so would not have publicly weighed in on them anyway.)
I was, though, being honest about how I felt at that time — I debated whether to keep it to myself or not today (but I always foolishly err on the side of being forthcoming).
Perhaps it's a strange thing to imagine that someone would pursue in their spare time, especially after retiring, what they did professionally.
brulard · 3h ago
He said "at the time". If I say "I thought X at the time" it implies I have reconsidered since. Your parents comment was unnecessarily condescending
gxs · 3h ago
It’s just the timing and how he said it, especially considering the tone of the message overall
But the irony isn’t lost on me that I myself shouldn’t have been so mean about it
JKCalhoun · 2h ago
You're right about the timing.
brentjanderson · 37m ago
Bill's contribution with HyperCard is of course legendary. Apart from the experience of classrooms and computer labs in elementary schools, it was also the primary software powering a fusion of bridge-simulator-meets-live-action-drama field trips (among many other things) for over 20 years at the Space Center in central Utah.[0] I was one of many beneficiaries of this program as a participant, volunteer, and staff member. It was among the best things I've ever done.
That seed crystal of software shaped hundreds of thousands of students who to this day continue to rave about this program (although the last bits of HyperCard retired permanently about 12 years ago; nowadays it's primarily web-based tech).
HyperCard's impact on teaching students to program starship simulators, and then telling compelling, interactive, immersive, multi-player dramatic stories in those ships is something enabled by Atkinson's dream in 1985.
May your consciousness journey between infinite pools of light, Bill.
Also, if you've read this far, go donate to Pancreatic Cancer research.[1]
[0]: https://spacecenter.alpineschools.org
[1]: https://pancan.org
> One of Bill Atkinson’s amazing feats (which we are so accustomed to nowadays that we rarely marvel at it) was to allow the windows on a screen to overlap so that the “top” one clipped into the ones “below” it. Atkinson made it possible to move these windows around, just like shuffling papers on a desk, with those below becoming visible or hidden as you moved the top ones. Of course, on a computer screen there are no layers of pixels underneath the pixels that you see, so there are no windows actually lurking underneath the ones that appear to be on top. To create the illusion of overlapping windows requires complex coding that involves what are called “regions.” Atkinson pushed himself to make this trick work because he thought he had seen this capability during his visit to Xerox PARC. In fact the folks at PARC had never accomplished it, and they later told him they were amazed that he had done so. “I got a feeling for the empowering aspect of naïveté”, Atkinson said. “Because I didn’t know it couldn’t be done, I was enabled to do it.” He was working so hard that one morning, in a daze, he drove his Corvette into a parked truck and nearly killed himself. Jobs immediately drove to the hospital to see him. “We were pretty worried about you”, he said when Atkinson regained consciousness. Atkinson gave him a pained smile and replied, “Don’t worry, I still remember regions.”
JKCalhoun · 6h ago
With overlapping rectangular windows (slightly simpler case than ones with rounded corners) you can expect visible regions of windows that are not foremost to be, for example, perhaps "L" shaped, perhaps "T" shaped (if there are many windows and they overlap left and right edges). Bill's region structure was, as I understand it, more or less an RLE (run-length encoded) representation of the visible rows of a window's bounds. The region for the topmost window (not occluded in any way) would indicate the top row as running from 0 to width-of-window (or right edge of the display if clipped by the display). I believe too there was a shortcut to indicate "oh, and the following rows are identical" so that an un-occluded rectangular window would have a pretty compact region representation.
Windows partly obscured would have rows that may not begin at 0, may not continue to width-of-window. Window regions could even have holes if a skinnier window was on top and within the width of the larger background window.
The cleverness, I think, was then to write fast routines to add, subtract, intersect, and union regions, and rectangles of this structure. Never mind quickly traversing them, clipping to them, etc.
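To make the idea concrete, here's a toy sketch in C of what such an RLE-of-rows region might look like. This is illustrative only - the names and layout are made up, not Apple's actual Region structure:

    /* Toy region: a bounding box plus runs of identical rows, each run
       holding the visible horizontal spans for those rows.
       Illustrative only -- not QuickDraw's real data structure. */
    typedef struct { short left, right; } Span;      /* visible run: [left, right) */

    typedef struct {
        short top;        /* first row this entry describes              */
        short rowCount;   /* "the following rows are identical" shortcut */
        short spanCount;  /* how many visible spans on these rows        */
        Span  spans[4];   /* small fixed array, enough for a sketch      */
    } RowRun;

    typedef struct {
        short  top, left, bottom, right;  /* bounding box */
        short  runCount;
        RowRun runs[8];
    } ToyRegion;

    /* An unobscured 300x200 window at (50,40) needs a single run:
         { .top = 40, .rowCount = 200, .spanCount = 1, .spans = {{50, 350}} }
       A window with a hole punched in it just gets two spans per affected row. */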
duskwuff · 4h ago
The QuickDraw source code refers to the contents of the Region structure as an "unpacked array of sorted inversion points". It's a little short on details, but you can sort of get a sense of how it works by looking at the implementation of PtInRgn(Point, RegionHandle):
https://github.com/historicalsource/supermario/blob/9dd3c4be...
As far as I can tell, it's a bounding box (in typical L/T/R/B format), followed by a sequence of the X/Y coordinates of every "corner" inside the region. It's fairly compact for most region shapes which arise from overlapping rectangular windows, and very fast to perform hit tests on.
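A minimal illustration of the idea (my own simplified encoding, not the real on-disk format): treat each "corner" as a point that toggles membership of everything below and to the right of it, and a hit test just counts parity.

    /* Toy "inversion point" hit test in the spirit of PtInRgn.
       A point is inside the region iff an odd number of corners lie
       above-and-left of it. Simplified; not QuickDraw's actual layout. */
    typedef struct { short x, y; } Corner;

    /* The rectangle {left=10, top=20, right=30, bottom=40} as four corners. */
    static const Corner rectCorners[] = { {10, 20}, {30, 20}, {10, 40}, {30, 40} };

    int toyPtInRgn(short px, short py, const Corner *corners, int count) {
        int inside = 0;
        for (int i = 0; i < count; i++)
            if (corners[i].y <= py && corners[i].x <= px)
                inside = !inside;   /* each corner flips the quadrant below-right of it */
        return inside;
    }

    /* toyPtInRgn(15, 25, rectCorners, 4) -> 1 (inside)
       toyPtInRgn(35, 25, rectCorners, 4) -> 0 (right of the rectangle) */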
JKCalhoun · 2h ago
Thanks for digging deeper.
rjsw · 7h ago
I think the difference between the Apple and Xerox approach may be more complicated than the people at PARC not knowing how to do this. The Alto doesn't have a framebuffer; each window has its own buffer, and the microcode walks the windows to work out what to put on each scanline.
peter303 · 2h ago
Frame buffer memory was still incredibly expensive in 1980. Our lab's 512 x 512 x 8-bit table-lookup color buffer cost $30,000 in 1980. The Mac's 512 x 342 x 1-bit buffer in 1984 had to fit the Mac's $2,500 price. The Xerox Alto was earlier than these two devices and would have cost even more if it had a full frame buffer.
JKCalhoun · 7h ago
Not doubting that, but what is the substantive difference here? Does the fact that there is a screen buffer on the Mac facilitate clipping that is otherwise not possible on the Alto?
It definitely makes it simpler. You can do a per-screen window sort, rather than per-pixel :).
Per-pixel sorting while racing the beam is tricky, game consoles usually did it by limiting the number of objects (sprites) per-line, and fetching+caching them before the line is reached.
lambdaone · 6h ago
It allows the Mac to use far less RAM to display overlapping windows, and doesn't require any extra hardware. Individual regions are refreshed independently of the rest of the screen, with occlusion, updates, and clipping managed automatically.
saghm · 5h ago
Yeah, it seems like the hard part of this problem isn't merely coming up with a solution that technically is correct, but one that also is efficient enough to be actually useful. Throwing specialized or more expensive hardware at something is a valid approach for problems like this, but all else being equal, having a lower hardware requirement is better.
al_borland · 3h ago
I was just watching an interview with Andy Hertzfeld earlier today and he said this was the main challenge of the Macintosh project. How to take a $10k system (Lisa) and run it on a $3k system (Macintosh).
He said they drew a lot of inspiration from Woz on the hardware side. Woz was well known for employing lots of little hacks to make things more efficient, and the Macintosh team had to apply the same approach to software.
rsync · 5h ago
Displaying graphics (of any kind) without a framebuffer is called "racing the beam" and is technically quite difficult: it involves managing the real-world speed of the electron beam against the CPU clock speed ... as in, if you tax the CPU too much, the beam goes by and you've missed it ...
The very characteristic horizontally stretched graphics of the Atari 2600 are due to this - the CPU was actually too slow, in a sense, for the electron beam which means your horizontal graphic elements had a fairly large minimum width - you couldn't change the output fast enough.
I strongly recommend:
https://en.wikipedia.org/wiki/Racing_the_Beam
... which goes into great detail on this topic and is one of my favorite books.
mjevans · 7h ago
Reminds me of a GPU's general workflow. (like the sibling comment, 'isn't that the obvious way this is done'? Different drawing areas being hit by 'firmware' / 'software' renderers?)
pducks32 · 4h ago
Would someone mind explaining the technical aspect here? I feel with modern compute and OS paradigms I can’t appreciate this. But even now I know that feeling when you crack it and the thrill of getting the impossible to work.
It’s on all of us to keep the history of this field alive and honor the people who made it all possible. So if anyone would nerd out on this, I’d love to be able to remember him that way.
(I did read this https://www.folklore.org/I_Still_Remember_Regions.html but might not be understanding it fully)
There were far fewer abstraction layers than today. Today when your desktop application draws something, it gets drawn into a context (a "buffer") which holds the picture of the whole window. Then the window manager / compositor simply paints all the windows on the screen, one on top of the other, in the correct priority (I'm simplifying a lot, but just to get the idea). So when you are programming your application, you don't care about other applications on the screen; you just draw the contents of your window and that's done.
Back at the time, there wouldn't have been enough memory to hold a copy of the full contents of all possible windows. In fact, there were actually zero abstraction layers: each application was responsible for drawing itself directly into the framebuffer (an array of pixels), at its correct position. So how to handle overlapping windows? How could each application draw itself on the screen, but only on the pixels not covered by other windows?
QuickDraw (the graphics API written by Atkinson) contained this data structure called "region" which basically represents a "set of pixels", like a mask. And QuickDraw drawing primitives (e.g., text) supported clipping to a region. So each application had a region instance representing all visible pixels of the window at any given time; the application would then clip all its drawing to the region, so that only the visible pixels would get updated.
But how was the region implemented? Obviously it could not have been a mask of pixels (as in, a bitmask), as that would use too much RAM and would be slow to update. Keep in mind that the region data structure also had to be quick at operations like intersections, unions, etc., as the operating system had to update the regions for each window as windows got dragged around by the mouse.
So the region was implemented as a bounding box plus a list of visible horizontal spans (I think; I don't know the exact details). When you represent a list of spans, a common hack is simply to store the coordinates at which the "state" switches between "inside a span" and "outside a span". This approach makes for some nice tricks when doing operations like intersections.
Hope this answers the question. I'm fuzzy on many details so there might be several mistakes in this comment (and I apologize in advance) but the overall answer should be good enough to highlight the differences compared to what computers do today.
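As a concrete (and heavily simplified) example of the "switch coordinate" trick: intersecting two span lists becomes a single merge pass, because you just toggle two state bits and emit a coordinate whenever the combined state changes. A toy version in C, with made-up names:

    #include <stddef.h>

    /* One scanline's visibility as "switch points": the state flips at each x.
       E.g. {10, 20, 35, 50} means visible in [10,20) and [35,50).
       Toy sketch only; QuickDraw's real routines are far more elaborate. */
    size_t intersectSpans(const short *a, size_t na,
                          const short *b, size_t nb,
                          short *out)
    {
        size_t i = 0, j = 0, n = 0;
        int inA = 0, inB = 0, inBoth = 0;

        while (i < na || j < nb) {
            short x;
            if (j >= nb || (i < na && a[i] <= b[j])) { x = a[i++]; inA = !inA; }
            else                                     { x = b[j++]; inB = !inB; }

            if ((inA && inB) != inBoth) {   /* combined state changed: emit a switch point */
                inBoth = !inBoth;
                out[n++] = x;
            }
        }
        return n;   /* number of switch points written to out */
    }

Union and difference are the same loop with a different combining condition (inA || inB, or inA && !inB), which is part of why this encoding is so convenient.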
II2II · 37m ago
It's a good description, but I'm going to add a couple of points, since details that are obvious to someone who lived through that era may not be obvious to those who came after.
> Obviously it could not have been a mask of pixels
To be more specific about your explanation of too much memory: many early GUIs were 1 bit-per-pixel, so the bitmask would use the same amount of memory as the window contents.
There was another advantage to the complexity of drawing only within regions: the OS could tell the application when a region was exposed, so you only had to redraw a region when it was newly exposed or needed an update. Unless you were doing something complex and could justify buffering the results, you were probably re-rendering it. (At least that is my recollection from making a Mandelbrot fractal program for a compact Mac, several decades back.)
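For anyone who never wrote against the classic Toolbox, the pattern looked roughly like this (written from memory, so treat it as a sketch; DrawMyContent is a stand-in for the application's own drawing code):

    #include <Events.h>
    #include <Windows.h>

    extern void DrawMyContent(WindowPtr win);   /* hypothetical app drawing routine */

    void HandleOneEvent(void) {
        EventRecord event;
        if (!GetNextEvent(everyEvent, &event))
            return;

        if (event.what == updateEvt) {
            WindowPtr win = (WindowPtr)event.message;  /* window that needs redrawing */
            SetPort(win);
            BeginUpdate(win);     /* temporarily clips the visRgn to the exposed area */
            DrawMyContent(win);   /* draw everything; only exposed pixels are touched */
            EndUpdate(win);       /* restore the window's normal visRgn */
        }
    }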
bluedino · 5h ago
> In fact the folks at PARC had never accomplished it, and they later told him they were amazed that he had done so.
Reminds me of the story where some company was making a new VGA card, and it was rumored a rival company had implemented a buffer of some sort in their card. When both cards came out the rival had either not actually implemented it or implemented a far simpler solution
Grosvenor · 1h ago
Michael Abrash's black book of graphics programming. They heard about a "buffer", so they implemented the only non-stupid thing - a write FIFO. Turns out the competition had done the most stupid thing and built a read buffer.
I teach this lesson to my mentees. Knowing that something is possible gives you significant information. Also, don't brag - It gives away significant information.
Just knowing something is possible makes it much, much easier to achieve.
https://valvedev.info/archives/abrash/abrash.pdf
An infamous StarCraft example echoes a similar story: the team was so humbled by a competitor's demo (and by criticism that their own game was simply "Warcraft in space") that they went back and significantly overhauled their game.
Former Ion Storm employees later revealed that Dominion’s E3 1996 demo was pre-rendered, with actors pretending to play, not live gameplay.
stevenwoo · 1h ago
I got a look at an early version of StarCraft source code as a reference for the sound library for Diablo 2, and curiosity made me do a quick analysis of the other stuff - they used a very naive approach to C++ and object inheritance, the kind first-time C++ programmers often fall victim to. It might have been their first C++ project, so they probably needed to start over again anyway. We had an edict on Diablo 2 to make the C++ look like recognizable C for Dave Brevik's benefit, which turned out pretty well I think (it was a year late, but we shipped).
genewitch · 32m ago
Diablo II is in my top 3 games of all time; I still play it all the time. Thanks for contributing so much fun to my life!
(For ref, Diablo III is also in my top 3 :)
mhh__ · 3h ago
Similar tale with the MiG-25: propaganda, and stats with the asterisks missing, led to the requirements for the F-15 being set very high.
I like how he pronounces "pix-els". Learning how we arrived at our current pronunciation is the kind of computer history I can't get enough of.
JKCalhoun · 4h ago
That's a great video. Everything he does gets applause and he is all (embarrassed?) grins.
jajko · 7h ago
Pretty awesome story, but also with a bit of a dark lining. Of course any owner, and triple that for Jobs, loves over-competent guys who work themselves to death, here almost literally.
But that's not a recipe for personal happiness for most people, and most of us would not end up contributing revolutionary improvements even if we worked that way. The world needs awesome workers, and we also need, e.g., awesome parents or just happy, balanced, content people (or at least some part of those).
1123581321 · 6h ago
Pretty much. Most of us have creative itches to scratch that make us a bit miserable if we never get to pursue them, even if given a comfortable life. It’s circumstantial whether we get to pursue them as entrepreneurs or employees. The users or enjoyers of our work benefit either way.
kevinventullo · 5h ago
Just to add on, some of us have creative itches that are not directly monetizable, and for which there may be no users or enjoyers of our work at all (if there are, all the better!).
Naturally I don’t expect to do such things for a living.
duskwuff · 4h ago
That's not quite how I read the story. Jobs didn't ask Atkinson if he remembered regions - Atkinson brought it up.
asveikau · 2h ago
It's also a joke, and a pretty good one at that. Shows a sense of humor.
richardw · 5h ago
Survivorship bias. The guys going home at 5 went home at 5 and their companies are not written about. It’s dark but we’ve been competing for a while as life forms and this is “dark-lite” compared to what our previous generations had to do.
Some people are competing, and need to make things happen that can’t be done when you check out at 5. Or more generally: the behaviour that achieves the best outcome for a given time and place, is what succeeds and forms the legends of those companies.
If you choose one path, know your competitors are testing the other paths. You succeed or fail partly based on what your most extreme competitors are willing to do, sometimes with some filters for legality and morality. (I.e. not universally true for all countries or times.)
Edit: I currently go home at 5, but have also been the person who actually won the has-no-life award. It’s a continuum, and is context specific. Both are right and sometimes one is necessary.
bowsamic · 5h ago
What is the dark lining? Do you think Atkinson did not feel totally satisfied with his labour?
And I don't think anyone said that that's the only way to be
matthewn · 8h ago
In an alternate timeline, HyperCard was not allowed to wither and die, but instead continued to mature, embraced the web, and inspired an entire genre of software-creating software. In this timeline, people shape their computing experiences as easily as one might sculpt a piece of clay, creating personal apps that make perfect sense to them and fit like a glove; computing devices actually become (for everyone, not just programmers) the "bicycle for the mind" that Steve Jobs spoke of. I think this is the timeline that Atkinson envisioned, and I wish I lived in it. We've lost a true visionary. Memory eternal!
asveikau · 7h ago
Maybe there's some sense of longing for a similar tool today, but there's no way of knowing how much of the impact you are talking about HyperCard did have. For example, many of us reading here experienced HyperCard. It planted seeds in our future endeavors.
I remember in elementary school, I had some computer lab classes where the whole class worked in HyperCard on some task. Multiply that by however many classrooms did something like that in the 80s and 90s. That's a lot of brains that can be influenced and have been.
We can judge it as a success in its own right, even if it never entered the next paradigm or never had quite an equivalent later on.
lambdaone · 6h ago
HyperCard was undoubtedly the inspiration for Visual Basic, which for quite some time dominated the bespoke UI industry in the same way web frameworks do today.
Stratoscope · 3h ago
HyperCard was great, but it wasn't the inspiration for Visual Basic.
I was on the team that built Ruby (no relation to the programming language), which became the "Visual" side of Visual Basic.
Alan Cooper did the initial design of the product, via a prototype he called Tripod.
Alan had an unusual design philosophy at the time. He preferred to not look at any existing products that may have similar goals, so he could "design in a vacuum" from first principles.
I will ask him about it, but I'm almost certain that he never looked at HyperCard.
cortesoft · 7h ago
HyperCard was the foundation of my programming career. I treated the HyperCard Bible like an actual Bible.
jkestner · 7h ago
Word. This is the Papert philosophy of constructionism, learning to think by making, that so many of us still carry. I’m still trying to build software-building software. We do live in that timeline; it’s just unevenly distributed.
nostrademons · 5h ago
The Web was significantly influenced by HyperCard. Tim Berners-Lee's original prototypes envisioned it as bidirectional, with a hypertext editor shipping alongside the browser. In that sense it does live on, and serves as the basis for much of the modern Internet.
zahlman · 5h ago
Mr. Atkinson's passing was sad enough without thinking about this.
(More seriously: I can still recall using ResEdit to hack a custom FONT resource into a HyperCard stack, then using string manipulation in a text field to create tiled graphics. This performed much better than button icons or any other approach I could find. And then it stopped working in System 7.)
garyrob · 1h ago
"In an alternate timeline, HyperCard was not allowed to wither and die, but instead continued to mature, embraced the web..."
In yet another alternate timeline, someone thought to add something like URLs with something like GET, PUT, etc. to HyperCard, and Tim Berners-Lee's invention of the Web browser never happened because HyperCard already did it all.
jandrese · 1h ago
On one hand this would be simply amazing, on the other hand it would have been a total security nightmare that makes early Javascript look like a TPM Secure Enclave.
duskwuff · 26m ago
Those who were there will remember:
    on openbackground --merryxmas
      merryxmas "on openbackground --merryxmas"
    end openbackground
(And now I'm curious if this post will trip anyone's antivirus software...)
Arathorn · 7h ago
It’s ironic that the next graphical programming environment similar to HyperCard was probably Flash - and it obviously died too.
What actually are the best successors now, at least for authoring generic apps for the open web? (Other than vibe coding things)