Bill's contribution with HyperCard is of course legendary. Beyond its place in elementary-school classrooms and computer labs, it was also the primary software powering a fusion of bridge-simulator-meets-live-action-drama field trips (among many other things) for over 20 years at the Space Center in central Utah.[0] I was one of many beneficiaries of this program as a participant, volunteer, and staff member. It was among the best things I've ever done.
That seed crystal of software shaped hundreds of thousands of students who to this day continue to rave about this program (although the last bits of HyperCard were retired permanently about 12 years ago; nowadays it's primarily web-based tech).
HyperCard's impact on teaching students to program starship simulators, and then to tell compelling, interactive, immersive, multi-player dramatic stories in those ships, is something enabled by Atkinson's dream in 1985.
May your consciousness journey between infinite pools of light, Bill.
Also, if you've read this far, go donate to pancreatic cancer research.[1]
[0]: https://spacecenter.alpineschools.org
[1]: https://pancan.org
When I was on the ColorSync team at Apple we, the engineers, got an invite to his place-in-the-woods one day.
I knew who he was at the time, but for some reason I felt I was more or less beholden to conversing only about color-related issues and how they applied to a computer workflow. Now that I've retired myself, I have been kicking myself for some time not just chatting with him about ... whatever.
At the time I met him he was very into a particular kind of digital photography. My recollection is that he had a high-end drum scanner and was scanning film negatives (from a medium-format camera?) and then going with a digital workflow from that point on. I remember he was excited about the way the "darks" could be captured (by the scanner?). A straight analog workflow would, according to him, cause the darks to roll off (so the film was probably not the culprit; perhaps the analog printing process was).
He excitedly showed us, on his computer, photos he had taken along the Pacific coast of large rock outcroppings against the ocean — pointing out the detail you could see in the shadows of the rocks. He was putting together a coffee-table book of his photos at the time.
I have to say that I mused at the time about a wealthy, retired, engineer who throws money at high end photo gear and suddenly thinks they're a photographer. I think I was weighing his "technical" approach to photography vs. a strictly artistic one. Although, having since learned more about Ansel Adams's technical chops, perhaps for the best photographers there is overlap.
rezmason · 28m ago
> I have been kicking myself for some time not just chatting with him about ... whatever.
Maybe I should show some initiative! See, for a little while now I've wanted to just chat with you about whatever.
At this moment I'm working on a little research project about the advent of color on the Macintosh, specifically the color picker. Would you be interested in a casual convo that touches on that? If so, I can create a BlueSky account and reach out to you over there. :)
https://merveilles.town/deck/@rezmason/114586460712518867
There probably still isn't a good way to get that kind of dynamic range entirely in the digital domain. Oh, I'm sure the shortfall today is smaller, say maybe four or five stops versus probably eight or twelve back then. Nonetheless, I've done enough work in monochrome to recognize an occasional need to work around the same limitations he was, even though very few of my subjects are as demanding.
JKCalhoun · 4h ago
I wish a good monochrome digital camera didn't cost a small fortune. And I'm too scared to try to remove the Bayer grid from a "color" CCD.
Seems that, without the color/Bayer thing, you could get an extra stop or two for low-light.
I had a crazy notion to make a camera around an astronomical CCD (often monochrome) but they're not cheap either — at least one with a good pixel count.
formerly_proven · 1h ago
You would also remove the microlenses, which increase sensitivity.
throwanem · 3h ago
I've replaced my D5300's viewfinder focusing screen a couple of times, back before I outgrew the need for focusing aids. I also wouldn't try debayering its sensor! But that sort of thing is what cheap beater bodies off your friendly local camera store's used counter, or eBay, were made for. Pixel count isn't everything, and how better to find out whether the depth of your interest would reward serious investment, than to see whether and how soon it outgrows unserious? Indeed, my own entire interest in photography has developed just so, out of a simple annoyance at having begun to discover what a 2016 phone camera couldn't do.
JKCalhoun · 1h ago
I like that idea. I should start watching eBay.
lanyard-textile · 3h ago
:) Color in the computer is a good “whatever” topic.
Sometimes it’s just nice to talk about the progress of humanity. Nothing better than being a part of it, the gears that make the world turn.
JKCalhoun · 1h ago
Ha ha, but it's also "talking shop". I'm sure Bill preferred it to talking about his QuickDraw days.
Aloha · 2h ago
You always lose something when doing optical printing - you can often gain things too, but it's not 1:1.
I adore this hybrid workflow, because I can pick how the photo will look - color palette, grain, whatever - by picking my film, then use digital to fix (most if not all of) the inherent limitations of analog film.
Sadly, film is too much of a pain today. Photography has long been about composition for me, not cameras or process - I liked film because I got a consistent result, but I can use digital too, and I do today.
hugs · 1h ago
"When art critics get together they talk about form and structure and meaning. When artists get together they talk about where you can buy cheap turpentine."
gxs · 5h ago
> I have to say that I mused at the time about a wealthy, retired, engineer who throws money at high end photo gear and suddenly thinks they're a photographer
I think this says more about you than it does about him
dang · 4h ago
Please don't cross into personal attack. The cost outweighs any benefit.
https://news.ycombinator.com/newsguidelines.html
I was about to argue but then I saw this part
> The cost outweighs any benefit.
And this is absolutely true - there is a benefit but it doesn’t mean it’s worth it
Either way my bad, I should have elaborated and been more gentle instead of just that quip
viccis · 5h ago
It's true though. This effect is what keeps companies like PRS in business.
bombcar · 5h ago
There’s a whole industry of prosumer stuff in … well, many industries.
Power tools definitely have it!
JKCalhoun · 4h ago
I don't deny that. That's probably true about a lot of observations.
spiralcoaster · 4h ago
This is absolutely true and I don't understand why you're being downvoted. Especially in the context of this man just recently dying, there's someone throwing in their elitist opinion about photographers and how photography SHOULD be done, and apparently Bill was doing it wrong.
JKCalhoun · 4h ago
Well, I certainly didn't mean for it to come across that way. I wasn't saying this was the case with Bill. To be clear, I saw nothing bad about Bill's photos. (Also I'm not really versed enough in professional photography to have a valid opinion even if I didn't like them and so would not have publicly weighed in on them anyway.)
I was though being honest about how I felt at that time — debated whether to keep it to myself or not today (but I always foolishly err on the side of being forthcoming).
Perhaps it's a strange thing to imagine that someone would pursue in their spare time, especially after retiring, what they did professionally.
brulard · 3h ago
He said "at the time". If I say "I thought X at the time" it implies I have reconsidered since. Your parents comment was unnecessarily condescending
gxs · 2h ago
It’s just the timing and how he said it, especially considering the tone of the message overall
But the irony isn’t lost on me that I myself shouldn’t have been so mean about it
JKCalhoun · 1h ago
You're right about the timing.
yardie · 5m ago
The Mac, HyperCard, MacPaint, and General Magic: he's one of the few engineers who has had such a substantial impact on my life. Rest in Peace.
bhk · 19s ago
There were giants in the Earth in those days...
dkislyuk · 7h ago
From Walter Isaacson's _Steve Jobs_:
> One of Bill Atkinson’s amazing feats (which we are so accustomed to nowadays that we rarely marvel at it) was to allow the windows on a screen to overlap so that the “top” one clipped into the ones “below” it. Atkinson made it possible to move these windows around, just like shuffling papers on a desk, with those below becoming visible or hidden as you moved the top ones. Of course, on a computer screen there are no layers of pixels underneath the pixels that you see, so there are no windows actually lurking underneath the ones that appear to be on top. To create the illusion of overlapping windows requires complex coding that involves what are called “regions.” Atkinson pushed himself to make this trick work because he thought he had seen this capability during his visit to Xerox PARC. In fact the folks at PARC had never accomplished it, and they later told him they were amazed that he had done so. “I got a feeling for the empowering aspect of naïveté”, Atkinson said. “Because I didn’t know it couldn’t be done, I was enabled to do it.” He was working so hard that one morning, in a daze, he drove his Corvette into a parked truck and nearly killed himself. Jobs immediately drove to the hospital to see him. “We were pretty worried about you”, he said when Atkinson regained consciousness. Atkinson gave him a pained smile and replied, “Don’t worry, I still remember regions.”
JKCalhoun · 5h ago
With overlapping rectangular windows (a slightly simpler case than ones with rounded corners) you can expect visible regions of windows that are not foremost to be, for example, perhaps "L" shaped, perhaps "T" shaped (if there are many windows and they overlap left and right edges). Bill's region structure was, as I understand it, more or less an RLE (run-length encoded) representation of the visible rows of a window's bounds. The region for the topmost window (not occluded in any way) would indicate the top row as running from 0 to width-of-window (or the right edge of the display if clipped by the display). I believe too there was a shortcut to indicate "oh, and the following rows are identical" so that an un-occluded rectangular window would have a pretty compact region representation.
Windows partly obscured would have rows that may not begin at 0 and may not continue to width-of-window. Window regions could even have holes if a skinnier window was on top and within the width of the larger background window.
The cleverness, I think, was then to write fast routines to add, subtract, intersect, and union regions, and rectangles of this structure. Never mind quickly traversing them, clipping to them, etc.
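To make the "L" shape idea concrete, here is a rough sketch in C (my own illustration, not Bill's QuickDraw code; the types and names are made up) of how subtracting one window rectangle from another yields per-row runs, which is essentially the information a region encodes:

    /* Hypothetical sketch: subtract rectangle b from rectangle a and
       print the still-visible runs of a, row by row. Not QuickDraw. */
    #include <stdio.h>

    typedef struct { int left, top, right, bottom; } Rect;

    static void subtract_rect(Rect a, Rect b) {
        for (int y = a.top; y < a.bottom; y++) {
            if (y < b.top || y >= b.bottom) {
                /* Row untouched by b: one full-width run. */
                printf("row %d: [%d,%d)\n", y, a.left, a.right);
                continue;
            }
            if (b.left > a.left)   /* visible piece left of b */
                printf("row %d: [%d,%d)\n", y, a.left,
                       b.left < a.right ? b.left : a.right);
            if (b.right < a.right) /* visible piece right of b */
                printf("row %d: [%d,%d)\n", y,
                       b.right > a.left ? b.right : a.left, a.right);
        }
    }

    int main(void) {
        Rect back  = { 0, 0, 100, 60 };  /* background window      */
        Rect front = { 40, 0, 140, 30 }; /* overlaps its top-right */
        subtract_rect(back, front);      /* rows 0-29 form the "L" */
        return 0;
    }

A real region would of course store these runs (and only the rows where they change) rather than printing them, which is where the RLE-style compactness comes from.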
duskwuff · 3h ago
The QuickDraw source code refers to the contents of the Region structure as an "unpacked array of sorted inversion points". It's a little short on details, but you can sort of get a sense of how it works by looking at the implementation of PtInRgn(Point, RegionHandle):
https://github.com/historicalsource/supermario/blob/9dd3c4be...
As far as I can tell, it's a bounding box (in typical L/T/R/B format), followed by a sequence of the X/Y coordinates of every "corner" inside the region. It's fairly compact for most region shapes which arise from overlapping rectangular windows, and very fast to perform hit tests on.
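For the curious, here is a guess at how a hit test over per-row "sorted inversion points" could work. This is reconstructed from the description above, not the actual QuickDraw code, and the data layout is simplified:

    /* Hypothetical inversion-point hit test. Each row band is a sorted
       list of x coordinates at which inside/outside toggles. */
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        int y;           /* band covers rows [y, next band's y)        */
        int count;       /* number of inversion points in this band    */
        const int *xs;   /* sorted x coordinates where the state flips */
    } RowBand;

    static bool pt_in_region(int px, int py, const RowBand *bands, int n) {
        for (int i = 0; i < n; i++) {
            int band_end = (i + 1 < n) ? bands[i + 1].y : bands[i].y + 1;
            if (py < bands[i].y || py >= band_end)
                continue;
            int crossings = 0;   /* toggles at or left of px */
            for (int j = 0; j < bands[i].count && bands[i].xs[j] <= px; j++)
                crossings++;
            return (crossings % 2) == 1;   /* odd = inside */
        }
        return false;
    }

    int main(void) {
        /* "L" shaped region: rows 0-29 cover x [0,40), rows 30-59 cover [0,100). */
        static const int top[]    = { 0, 40 };
        static const int bottom[] = { 0, 100 };
        RowBand bands[] = { { 0, 2, top }, { 30, 2, bottom }, { 60, 0, NULL } };

        printf("%d %d %d\n",
               pt_in_region(10, 10, bands, 3),   /* 1: inside  */
               pt_in_region(80, 10, bands, 3),   /* 0: outside */
               pt_in_region(80, 40, bands, 3));  /* 1: inside  */
        return 0;
    }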
JKCalhoun · 1h ago
Thanks for digging deeper.
rjsw · 6h ago
I think the difference between the Apple and Xerox approaches may be more complicated than the people at PARC not knowing how to do this. The Alto doesn't have a framebuffer; each window has its own buffer, and the microcode walks the windows to work out what to put on each scanline.
peter303 · 1h ago
Frame buffer memory was still incredibly expensive in 1980. Our lab's 512 x 512 x 8-bit table-lookup color buffer cost $30,000 in 1980. The Mac's 512 x 342 x 1-bit buffer in 1984 had to fit the Mac's $2,500 price. The Xerox Alto was earlier than these two devices and would have cost even more if it had a full frame buffer.
JKCalhoun · 6h ago
Not doubting that, but what is the substantive difference here? Does the fact that there is a screen buffer on the Mac facilitate clipping that is otherwise not possible on the Alto?
It definitely makes it simpler. You can do a per-screen window sort, rather than per-pixel :).
Per-pixel sorting while racing the beam is tricky, game consoles usually did it by limiting the number of objects (sprites) per-line, and fetching+caching them before the line is reached.
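As a toy illustration of that (my own sketch, not any particular console's hardware): before each scanline you scan the sprite table, keep at most a fixed number of sprites that intersect the line, and only those get fetched while the beam draws it.

    /* Toy per-scanline sprite evaluation with a hardware-style cap. */
    #include <stdio.h>

    #define MAX_PER_LINE  8   /* cap on sprites drawn per scanline */
    #define SPRITE_HEIGHT 8

    typedef struct { int x, y; } Sprite;

    /* Keep up to MAX_PER_LINE sprite indices overlapping 'line'.
       Anything past the cap simply doesn't get drawn on that line. */
    static int evaluate_line(int line, const Sprite *s, int n,
                             int out[MAX_PER_LINE]) {
        int kept = 0;
        for (int i = 0; i < n && kept < MAX_PER_LINE; i++)
            if (line >= s[i].y && line < s[i].y + SPRITE_HEIGHT)
                out[kept++] = i;   /* cache for the upcoming line */
        return kept;
    }

    int main(void) {
        Sprite sprites[12];
        for (int i = 0; i < 12; i++) { sprites[i].x = i * 10; sprites[i].y = 20; }

        int cache[MAX_PER_LINE];
        int kept = evaluate_line(24, sprites, 12, cache);
        printf("scanline 24: drawing %d of 12 overlapping sprites\n", kept);
        return 0;
    }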
lambdaone · 6h ago
It allows the Mac to use far less RAM to display overlapping windows, and doesn't require any extra hardware. Individual regions are refreshed independently of the rest of the screen, with occlusion, updates, and clipping managed automatically.
saghm · 5h ago
Yeah, it seems like the hard part of this problem isn't merely coming up with a solution that technically is correct, but one that also is efficient enough to be actually useful. Throwing specialized or more expensive hardware at something is a valid approach for problems like this, but all else being equal, having a lower hardware requirement is better.
al_borland · 3h ago
I was just watching an interview with Andy Hertzfeld earlier today and he said this was the main challenge of the Macintosh project. How to take a $10k system (Lisa) and run it on a $3k system (Macintosh).
He said they drew a lot of inspiration from Woz on the hardware side. Woz was well known for employing lots of little hacks to make things more efficient, and the Macintosh team had to apply the same approach to software.
rsync · 5h ago
Displaying graphics (of any kind) without a framebuffer is called "racing the beam" and is technically quite difficult: it involves reconciling the real-world speed of the electron beam with the CPU clock speed ... as in, if you tax the CPU too much, the beam goes by and you missed it ...
The very characteristic horizontally stretched graphics of the Atari 2600 are due to this - the CPU was actually too slow, in a sense, for the electron beam which means your horizontal graphic elements had a fairly large minimum width - you couldn't change the output fast enough.
I strongly recommend:
https://en.wikipedia.org/wiki/Racing_the_Beam
... which goes into great detail on this topic and is one of my favorite books.
mjevans · 6h ago
Reminds me of a GPU's general workflow. (like the sibling comment, 'isn't that the obvious way this is done'? Different drawing areas being hit by 'firmware' / 'software' renderers?)
pducks32 · 4h ago
Would someone mind explaining the technical aspect here? I feel that with modern compute and OS paradigms I can’t appreciate this. But even now I know that feeling when you crack it and the thrill of getting the impossible to work.
It’s on all of us to keep the history of this field alive and honor the people who made it all possible. So if anyone would nerd out on this, I’d love to be able to remember him that way.
(I did read this https://www.folklore.org/I_Still_Remember_Regions.html but might not be understanding it fully)
There were far fewer abstraction layers than today. Today when your desktop application draws something, it gets drawn into a context (a "buffer") which holds the picture of the whole window. Then the window manager / compositor simply paints all the windows on the screen, one on top of the other, in the correct stacking order (I'm simplifying a lot, but just to get the idea). So when you are programming your application, you don't care about other applications on the screen; you just draw the contents of your window and that's done.
Back then, there wouldn't have been enough memory to hold a copy of the full contents of all possible windows. In fact, there were actually zero abstraction layers: each application was responsible for drawing itself directly into the framebuffer (an array of pixels), at its correct position. So how to handle overlapping windows? How could each application draw itself on the screen, but only on the pixels not covered by other windows?
QuickDraw (the graphics API written by Atkinson) contained a data structure called a "region", which basically represents a "set of pixels", like a mask. And QuickDraw drawing primitives (e.g. text) supported clipping to a region. So each application had a region instance representing all visible pixels of its window at any given time; the application would then clip all its drawing to that region, so that only the visible pixels would get updated.
But how was the region implemented? Obviously it could not have been a mask of pixels (as in, a bitmask), as that would use too much RAM and would be slow to update. Consider also that the region data structure had to be quick at operations like intersection and union, since the operating system had to update the regions for every window as windows got dragged around by the mouse.
So the region was implemented as a bounding box plus a list of visible horizontal spans (I think; I don't know the exact details). When you represent a list of spans, a common hack is simply to store the coordinates at which the "state" switches between "inside the span" and "outside the span". This representation allows for some nice tricks when doing operations like intersections.
Hope this answers the question. I'm fuzzy on many details so there might be several mistakes in this comment (and I apologize in advance), but the overall answer should be good enough to highlight the differences compared to what computers do today.
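To make the "state switch" trick above concrete, here is a small sketch (my own illustration, not QuickDraw's actual routine) of intersecting two span lists stored as sorted toggle coordinates: walk both lists in order, track whether each is currently "inside", and emit a coordinate whenever the combined state changes.

    /* Sketch: intersect two span lists stored as sorted x coordinates
       at which inside/outside toggles. Illustrative only. */
    #include <limits.h>
    #include <stdio.h>

    /* Writes the toggle coordinates of (A AND B) into out[]; returns count. */
    static int intersect_spans(const int *a, int na,
                               const int *b, int nb, int *out) {
        int ia = 0, ib = 0, in_a = 0, in_b = 0, in_out = 0, n = 0;
        while (ia < na || ib < nb) {
            int xa = (ia < na) ? a[ia] : INT_MAX;
            int xb = (ib < nb) ? b[ib] : INT_MAX;
            int x  = (xa < xb) ? xa : xb;
            if (xa == x) { in_a ^= 1; ia++; }  /* A toggles here */
            if (xb == x) { in_b ^= 1; ib++; }  /* B toggles here */
            int now = in_a && in_b;
            if (now != in_out) {               /* combined state changed */
                out[n++] = x;
                in_out = now;
            }
        }
        return n;
    }

    int main(void) {
        int a[] = { 0, 40, 60, 100 };  /* A covers [0,40) and [60,100) */
        int b[] = { 30, 80 };          /* B covers [30,80)             */
        int out[8];
        int n = intersect_spans(a, 4, b, 2, out);
        for (int i = 0; i < n; i++)
            printf("%d ", out[i]);     /* prints: 30 40 60 80          */
        printf("\n");
        return 0;
    }

Union and subtraction fall out of the same walk by just changing the combining rule (OR, or A-and-not-B), which is part of what makes this representation so convenient.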
II2II · 7m ago
It's a good description, but I'm going to add a couple of details, since things that were obvious to someone who lived through that era may not be obvious to those who came after.
> Obviously it could not have been a mask of pixels
To be more specific about the memory cost: many early GUIs were 1 bit per pixel, so the bitmask would use the same amount of memory as the window contents themselves (a full 512 x 342 Mac screen is about 21 KB either way).
There was another advantage to the complexity of only drawing regions: the OS could tell the application when a region was exposed, so you only had to redraw a region when its contents needed an update or when it had just been exposed. Unless you were doing something complex and could justify buffering the results, you were probably re-rendering it. (At least that is my recollection from making a Mandelbrot fractal program for a compact Mac, several decades back.)
bluedino · 4h ago
> In fact the folks at PARC had never accomplished it, and they later told him they were amazed that he had done so.
Reminds me of the story where some company was making a new VGA card, and it was rumored a rival company had implemented a buffer of some sort in their card. When both cards came out, the rival had either not actually implemented it or implemented a far simpler solution.
Grosvenor · 56m ago
Michael Abrash's Black Book of Graphics Programming. They heard about a "buffer", so they implemented the only non-stupid thing - a write FIFO. Turns out the competition had done the most stupid thing and built a read buffer.
I teach this lesson to my mentees. Knowing that something is possible gives you significant information. Also, don't brag - it gives away significant information.
Just knowing something is possible makes it much, much easier to achieve.
https://valvedev.info/archives/abrash/abrash.pdf
An infamous StarCraft example echoes a similar story: they were so humbled by a competitor's demo (and criticism that their own game was simply "Warcraft in space") that they went back and significantly overhauled their game.
Former Ion Storm employees later revealed that Dominion’s E3 1996 demo was pre-rendered, with actors pretending to play, not live gameplay.
stevenwoo · 40m ago
I got a look at an early version of the StarCraft source code as a reference for the sound library for Diablo 2, and curiosity made me do a quick analysis of the other stuff - they used a very naive approach to C++ and object inheritance, the kind first-time C++ programmers often fall victim to. It might have been their first C++ project, so they probably needed to start over again anyway. We had an edict on Diablo 2 to make the C++ look like recognizable C for Dave Brevik's benefit, which turned out pretty well I think (it was a year late, but we shipped).
genewitch · 1m ago
Diablo II is in my top 3 games of all time; I still play it all the time. Thanks for contributing so much fun to my life!
(for ref, diablo III is also in my top 3 :)
mhh__ · 3h ago
Similar tale with the MiG-25: propaganda, and stats with the asterisks missing, led to the requirements for the F-15 being set very high.
I like how he pronounces "pix-els". Learning how we arrived at our current pronunciation is the kind of computer history I can't get enough of.
JKCalhoun · 4h ago
That's a great video. Everything he does gets applause and he is all (embarrassed?) grins.
jajko · 6h ago
Pretty awesome story, but also with a bit of a dark lining. Of course any owner, and triple that for Jobs, loves over-competent guys who work themselves nearly to death, here almost literally.
But that's not a recipe for personal happiness for most people, and most of us would not end up contributing revolutionary improvements even if we did the same. The world needs awesome workers, but it also needs awesome parents, or just happy, balanced, content people (or at least some mix of those).