In the 1990s, I was at a startup that had a need for a message queue. The only thing we found at the time was a product from TIBCO that was priced way-way-way out of our reach. IIRC, it didn't even run on PCs, only mainframes and minis. Microsoft Exchange Server (Microsoft's email server) had just been released at the time, and we decided to use it as a message queue.
Message-submitting clients used SMTP libraries. Message-consuming clients used Exchange APIs. Consumers would only look at unread messages: they would mark a message as read when they started processing it, and move it to a folder other than the Inbox if they succeeded. Many of the queues were multi-producer, but all queues were single-consumer (CPUs were pricey at the time - our servers were all Pentiums and Pentium Pros), which simplified things a lot.
Need a new queue / topic? Add an email address. Need to inspect a queue? Load up an email client. An unexpected benefit was that we could easily put humans in the loop for handling certain queues (using HTML in the messages).
It worked surprisingly well for the 5 years that the company was around. Latency was okay, but not great. Throughput was much better than we would have hoped for - Exchange was almost never the bottleneck.
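For flavor, here's roughly what that consumer loop might look like today in Python with imaplib - purely a sketch on my part, since the original used Exchange's own APIs, and the mailbox name, credentials, and "Processed" folder are all made up:

    import imaplib

    def handle(raw_message: bytes) -> None:
        # Stand-in for the real work a consumer would do.
        print("processing", len(raw_message), "bytes")

    # Hypothetical mailbox acting as a queue; host and credentials are placeholders.
    imap = imaplib.IMAP4_SSL("mail.example.com")
    imap.login("orders-queue@example.com", "hunter2")
    imap.select("INBOX")

    # Unread messages are the pending entries in the queue.
    _, data = imap.search(None, "UNSEEN")
    for num in data[0].split():
        # Marking as read "claims" the message (safe here because each
        # queue had exactly one consumer).
        imap.store(num, "+FLAGS", r"\Seen")
        _, msg_data = imap.fetch(num, "(RFC822)")
        handle(msg_data[0][1])
        # On success, move it out of the Inbox, as in the original setup.
        imap.copy(num, "Processed")
        imap.store(num, "+FLAGS", r"\Deleted")
    imap.expunge()
    imap.logout()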
stephenlf · 1h ago
Remember when Amazon Video moved from serverless back to a monolith because they were using S3 for storing video streams for near realtime processing? This feels the same. Except Amazon Video is an actual company trying to build real software.
IIRC they were storing individual frames in S3 buckets and hitting their own internal lambda limits. Funny story tbh.
LeifCarrotson · 50m ago
You remember correctly:
Amazon Video’s original blog post is gone, but here is a third-party writeup: https://medium.com/@hellomeenu1/why-amazon-prime-video-rever...
> The main scaling bottleneck in the architecture was the orchestration management that was implemented using AWS Step Functions. Our service performed multiple state transitions for every second of the stream, so we quickly reached account limits. Besides that, AWS Step Functions charges users per state transition.
> The second cost problem we discovered was about the way we were passing video frames (images) around different components. To reduce computationally expensive video conversion jobs, we built a microservice that splits videos into frames and temporarily uploads images to an Amazon Simple Storage Service (Amazon S3) bucket. Defect detectors (where each of them also runs as a separate microservice) then download images and processed it concurrently using AWS Lambda. However, the high number of Tier-1 calls to the S3 bucket was expensive.
They were really deeply drinking the AWS serverless kool-aid if they thought the right way to stream video was multiple microservices accessing individual frames on S3...
pythonaut_16 · 42m ago
It’s more honesty than you see from most service providers: both dogfooding the approach and not handwaving away the costs.
mikepurvis · 49m ago
Has a lot of “orders from on high to dog food all the things” energy.
moi2388 · 56m ago
That’s hilarious
lloydatkinson · 1h ago
They deleted their own post?
It couldn’t possibly be because AWS execs were pissed or anything… /s
The truly cursed thing in the article is this bit near the end (unless this is part of the satire):
"Something amusing about this is that it is something that technically steps into the realm of things that my employer does. This creates a unique kind of conflict where I can't easily retain the intellectial property (IP) for this without getting it approved from my employer. It is a bit of the worst of both worlds where I'm doing it on my own time with my own equipment to create something that will be ultimately owned by my employer. This was a bit of a sour grape at first and I almost didn't implement this until the whole Air Canada debacle happened and I was very bored."
mananaysiempre · 57m ago
Yes, I guess this is how we learn that Tailscale will lay claim to things you do on your own time using your own machine.
redbell · 2h ago
On a totally unrelated topic, I once read a meme online that says: "If you ever feel useless, remember the 'ueue' in 'queue'!"
spectraldrift · 59m ago
People often forget a message queue is just a simple, high-throughput state machine.
It's tempting to roll your own by polling a database table, but that approach breaks down, sometimes even at fairly low traffic levels. Once you move beyond a simple cron job, you're suddenly fighting row locking and race conditions just to prevent significant duplicate processing, effectively reinventing a wheel, poorly (potentially 5 or 10 times in the same service).
A service like SQS solves this with its state management. A message becomes 'invisible' while being processed. If it's not deleted within the configurable visibility timeout, it transitions back to available. That 'fetch next and mark invisible' state transition is the key, and it's precisely what's so difficult to implement correctly and performantly in a database every single time you need it.
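For the curious, the consume side of that loop with boto3 is roughly the following sketch (queue URL, timeout values, and the handle() function are placeholders, not anyone's production settings):

    import boto3

    sqs = boto3.client("sqs")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"  # placeholder

    def handle(body: str) -> None:
        # Stand-in for real processing; raise here and the message simply
        # reappears once the visibility timeout expires.
        print("got:", body)

    while True:
        # Receiving a message makes it invisible to other consumers for
        # VisibilityTimeout seconds - the state transition described above.
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL,
            MaxNumberOfMessages=1,
            WaitTimeSeconds=20,   # long polling
            VisibilityTimeout=30,
        )
        for msg in resp.get("Messages", []):
            handle(msg["Body"])
            # Only an explicit delete removes it from the queue for good.
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])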
groone · 48m ago
A message becomes invisible in a regular relational database when using `SELECT FOR UPDATE SKIP LOCKED`, too.
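For anyone who hasn't seen it, the claim-one-job pattern in Postgres looks roughly like this (psycopg2, with a made-up jobs table; just a sketch):

    import psycopg2

    conn = psycopg2.connect("dbname=app")  # placeholder DSN

    def claim_and_process_one() -> bool:
        with conn:  # transaction: commit on success, roll back on error
            with conn.cursor() as cur:
                # Rows locked by other workers are skipped, so each pending
                # job is handed to at most one consumer at a time. If this
                # worker dies, the lock is released and the row becomes
                # "visible" again - roughly the visibility-timeout behaviour.
                cur.execute("""
                    SELECT id, payload
                    FROM jobs
                    WHERE status = 'pending'
                    ORDER BY id
                    LIMIT 1
                    FOR UPDATE SKIP LOCKED
                """)
                row = cur.fetchone()
                if row is None:
                    return False  # queue empty (or everything is locked)
                job_id, payload = row
                print("processing", job_id, payload)  # stand-in for real work
                cur.execute("UPDATE jobs SET status = 'done' WHERE id = %s", (job_id,))
        return True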
kerblang · 23m ago
Overall it's completely feasible to build a message queue with an RDBMS _because_ it has locking. You might end up doing extra work compared to some other products that make message queueing easy/fun/so-simple-caveman-etc.
Now if SQS has some super-scalar mega-cluster capability where one instance can deliver 100 billion messages a day across the same group of consumers, ok, I'm impressed, because most MQ's can't, because... locking. Thus Kafka (which is not a message queue).
I think the RDBMS MQ should be treated as the "No worse than this" standard - if my fancy new message queueing product is even harder to set up, it isn't worth your trouble. But SQS itself IS pretty easy to use.
I thought the "multiple anime personalities explaining things to each other" style of tech blogging was so 2018
unmotivated-hmn · 32m ago
My first time seeing it. I was somewhat pleasantly confused.
stego-tech · 1h ago
This is beyond cursed and I love it.
packetlost · 53m ago
I once had a coworker use GitLab + a git repo + webhooks to implement a queued event system. Some change (I think it was in Jenkins) would call a webhook, which would append to a JSON array in the repo and commit it, which would in turn trigger something else downstream. It was horrifying and glorious.
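Something in the spirit of that contraption, sketched with Flask and the git CLI - entirely made-up paths and event shape, not the coworker's actual code:

    import json
    import subprocess
    from flask import Flask, request

    app = Flask(__name__)
    REPO = "/srv/event-queue"            # local clone of the "queue" repo
    QUEUE_FILE = f"{REPO}/events.json"   # the JSON array acting as the queue

    @app.post("/hook")
    def enqueue():
        event = request.get_json(force=True)
        with open(QUEUE_FILE) as f:
            events = json.load(f)
        events.append(event)             # "enqueue" = append to the array
        with open(QUEUE_FILE, "w") as f:
            json.dump(events, f, indent=2)
        # The commit/push is what fires the downstream webhook.
        subprocess.run(["git", "-C", REPO, "commit", "-am", "enqueue event"], check=True)
        subprocess.run(["git", "-C", REPO, "push"], check=True)
        return "queued\n", 202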
IIAOPSW · 2h ago
Even HN comment sections?
npteljes · 28m ago
Of course. A message queue is a database, plus software that handles it in a specific way to make it a message queue. So HN could basically be the database backend for the imaginary software that turns it into a message queue.
I don't have fun examples with message queues, but I do remember some with filesystems - a popular target to connect cursed backends to. You can store data in Ping packets [0]. You can store data in the digits of Pi - achieving unbelievable compression [1]. You can store data in the metadata and other unused blocks of images - also known as steganography [2]. People wrote software to use Gmail emails as a file system [3].
That's just off the top of my head, and it really shows that the sky's the limit with software.
[0] https://github.com/yarrick/pingfs
[1] https://github.com/ajeetdsouza/pifs
[2] https://en.wikipedia.org/wiki/Steganographic_file_system
[3] https://lwn.net/Articles/99933/
I had a developer colleague a while back who was toying with an idea that would require emitting and consuming a _lot_ of messages. I think it was somewhere on the order of 10k-100k/second. He was looking at some pretty expensive solutions IIRC.
I asked if the messages were all under 1.5 KB, he said yes. I asked if at-most-once delivery was OK, he said yes. So I proposed he just grab a router and fire messages through it as UDP packets, then use BGP/ECMP to balance the packets between receivers. Add some queues on the router, then just let the receivers pick up the packets as fast as they could. You'd need some kind of feedback to manage back pressure, but ¯\_(ツ)_/¯
A fairly cheap way to achieve 1M+ messages per second.
I never got the chance to flesh out the idea fully, but the simplicity of it tickled me. Maybe it would have worked, maybe not.
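The receiver side of that scheme is about as small as networking code gets; a toy Python version, with placeholder addresses and no back-pressure handling:

    import socket

    # A producer is just sock.sendto(message_bytes, ("192.0.2.1", 9999)) toward
    # the ECMP'd address; every receiver runs the loop below on the same port,
    # and the router spreads the datagrams across them. Whatever the socket
    # buffer can't hold gets dropped - the accepted at-most-once trade-off.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 8 * 1024 * 1024)
    sock.bind(("0.0.0.0", 9999))  # placeholder port

    while True:
        payload, sender = sock.recvfrom(1500)      # one message per datagram
        print(len(payload), "bytes from", sender)  # stand-in for real work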
HeyLaughingBoy · 4m ago
Isn't a fundamental property of a queue that it's FIFO?
UDP message delivery order is not guaranteed. Hell, UDP delivery itself is not guaranteed (although IME, messages don't usually get dropped unless they cross subnets).
DoneWithAllThat · 14m ago
Corollary: every message queue can be a database if you use it wrongly enough.
ranger_danger · 2h ago
sounds like "parasitic storage" and/or steganography
Kye · 1h ago
One of my favorite dinosaurs
pluto_modadic · 2h ago
muahahahaha, muahahaha!
lstodd · 2h ago
so very true.
devmor · 59m ago
This is utterly incredible and inspiring in the worst way. Mad engineering!
metadat · 2h ago
Fiendishly outlandish idea - it's incredibly wrong that the existence of Hoshino should ever even have been possible as a thought, and yet here we are. I love it!
On a related note, have you seen the prices at Whole Foods lately? $6 for a packet of dehydrated miso soup that usually costs $2.50 when served at a sushi restaurant. AWS network egress fees are similarly blasphemous.
Shame on Amazon, lol. Though it's really capitalism's fault, if you think it through all the way.
BiteCode_dev · 1h ago
This is not a situation where you have zero alternatives. There is a ton of cheap hosting out there. Most people using AWS don't need the level of reliability and scaling it provides, so they pay the price for nothing.
sneak · 2h ago
Why is it Amazon’s fault that people voluntarily choose to use Amazon?
Even with the massive margins, cloud computing is far cheaper for most SMEs than hiring an FTE sysadmin and racking machines in a colo.
The problem is that people forget to switch back to the old way when it’s time.
devmor · 55m ago
> Even with the massive margins, cloud computing is far cheaper for most SMEs than hiring an FTE sysadmin and racking machines in a colo.
That very much depends on your use case and billing period. Most of my public web applications run in a colo in Atlanta, on containers hosted on less than $2k in hardware and cached by Cloudflare. This replaced an AWS/DigitalOcean combination that used to bill about $400/mo.
Definitely worth it for me, but there are some workloads that aren’t worth the move, and I stick with cloud services for those.
I would estimate that a significant share of the services hosted on AWS are paid for by small businesses with lower reliability and uptime requirements than I have.
immibis · 1h ago
SMEs hire someone (an MSP) to manage their IT. They don't use AWS because AWS services are too low-level. AWS is chosen by people who should know better and mostly on the basis of marketing inertia.
Edit: And by people with too much money, which was until recently most tech companies.
shermantanktop · 1h ago
Another of my online lives is on guitar forums (TGP etc.), populated by a diverse set of non-geek characters. An eternal question that comes up is “why are they charging so much for this guitar? The parts can’t be that expensive. I bet I could just…”
And the only viable answer is the ol’ capitalist saw: they charge what buyers are willing to pay.
That never quite satisfies people though.
ecshafer · 1h ago
Employing labor full time is incredibly expensive in the US. Once you include overhead, taxes, benefits, etc., you can easily be paying 2x wages for a worker. Not to mention buying the goods. So yeah, the parts for the guitar might cost X, but then it costs Y to store them, Z for the space to assemble them, A to pay the workers, B to ship them, and C to market them. It adds up. Without jumping to the EVILS of "Capitalism", a business costs money to run. I can't imagine guitar manufacturer margins are anything close to tech's, probably <5%. Gemini tells me the industry is around 3.8%, so I don't think I am far off.
BiteCode_dev · 1h ago
In this case, you would need to pay someone anyway. I've never heard of an AWS account that didn't require at least one engineer in charge of it.
Anything can be a message queue if you use it wrongly enough - https://news.ycombinator.com/item?id=36186176 - June 2023 (239 comments)