Anything can be a message queue if you use it wrongly enough (2023)
166 points by crescit_eundo 8/28/2025, 3:14:03 PM | 56 comments | xeiaso.net ↗
Message-submitting clients used SMTP libraries. Message-consuming clients used Exchange APIs. Consumers would only look at unread messages, they would mark messages as read when they started processing, and move them to a folder other than the Inbox if they succeeded. Many of the queues were multi-producer, but all queues were single-consumer (CPUs were pricey at the time - our servers were all Pentiums and Pentium Pros), which simplified things a lot.
Need a new queue / topic? Add an email address. Need to inspect a queue? Load up an email client. An unexpected benefit was that we could easily put humans in the loop for handling certain queues (using HTML in the messages).
It worked surprisingly well for the 5 years that the company was around. Latency was okay, but not great. Throughput was much better than we would have hoped for - Exchange was almost never the bottleneck.
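The folder-as-queue semantics described above can be sketched in miniature. This is an in-memory stand-in with made-up names, not the Exchange API: unread messages in the Inbox are the pending queue, marking a message read claims it, and moving it out of the Inbox acknowledges it. Real producers would use an SMTP library and real consumers the Exchange APIs.

```python
class MailboxQueue:
    """In-memory sketch of using a mailbox as a single-consumer queue."""

    def __init__(self):
        self.folders = {"Inbox": [], "Processed": [], "Failed": []}

    def send(self, body):
        # Producer side: "send an email" to the queue's address.
        self.folders["Inbox"].append({"body": body, "read": False})

    def claim(self):
        # Single consumer: take the first unread message and mark it
        # read when processing starts, as the comment describes.
        for msg in self.folders["Inbox"]:
            if not msg["read"]:
                msg["read"] = True
                return msg
        return None

    def ack(self, msg, ok=True):
        # On completion, move the message out of the Inbox.
        self.folders["Inbox"].remove(msg)
        self.folders["Processed" if ok else "Failed"].append(msg)


q = MailboxQueue()
q.send("charge order #1")
msg = q.claim()
q.ack(msg, ok=True)
```

Because there is only one consumer per queue, "unread" is an unambiguous claim marker; with multiple consumers you would immediately need the locking that the rest of this thread is about.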
Amazon Video’s original blog post is gone, but here is a third party writeup. https://medium.com/@hellomeenu1/why-amazon-prime-video-rever...
> The main scaling bottleneck in the architecture was the orchestration management that was implemented using AWS Step Functions. Our service performed multiple state transitions for every second of the stream, so we quickly reached account limits. Besides that, AWS Step Functions charges users per state transition.
> The second cost problem we discovered was about the way we were passing video frames (images) around different components. To reduce computationally expensive video conversion jobs, we built a microservice that splits videos into frames and temporarily uploads images to an Amazon Simple Storage Service (Amazon S3) bucket. Defect detectors (where each of them also runs as a separate microservice) then download images and process them concurrently using AWS Lambda. However, the high number of Tier-1 calls to the S3 bucket was expensive.
They were really deeply drinking the AWS serverless kool-aid if they thought the right way to stream video was multiple microservices accessing individual frames on S3...
I would guess this was part of a process when new videos were uploaded and transcoded to different formats. Likely they were taking transcoded frames at some sample rate and uploading them to S3 where some workers were then analyzing the images to look for encoding artifacts.
This would most likely be a one-time sanity check for new videos that have to go through some conversion pipelines. However, once converted to their final form I would suspect the video files are statically distributed using a CDN.
It couldn’t possibly be because AWS execs were pissed or anything… /s
It's tempting to roll your own by polling a database table, but that approach breaks down, sometimes even at fairly low traffic levels. Once you move beyond a simple cron job, you're suddenly fighting row locking and race conditions just to prevent duplicate processing; effectively reinventing a wheel, poorly (potentially 5 or 10 times in the same service).
A service like SQS solves this with its state management. A message becomes 'invisible' while being processed. If it's not deleted within the configurable visibility timeout, it transitions back to available. That 'fetch next and mark invisible' state transition is the key, and it's precisely what's so difficult to implement correctly and performantly in a database every single time you need it.
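A minimal in-memory sketch of that visibility-timeout state machine (this mimics the semantics, not the real SQS API; the `now` parameter is a testing convenience, not something SQS exposes):

```python
import time


class VisibilityQueue:
    """Sketch of SQS-style 'fetch next and mark invisible' semantics."""

    def __init__(self, timeout=30.0):
        self.timeout = timeout
        self.messages = {}   # id -> [body, invisible_until]
        self.next_id = 0

    def send(self, body):
        self.messages[self.next_id] = [body, 0.0]  # visible immediately
        self.next_id += 1

    def receive(self, now=None):
        # Return the first visible message and hide it for `timeout`
        # seconds; during that window no other consumer can see it.
        now = time.monotonic() if now is None else now
        for mid, slot in self.messages.items():
            if slot[1] <= now:
                slot[1] = now + self.timeout
                return mid, slot[0]
        return None

    def delete(self, mid):
        # Acknowledge: only an explicit delete removes the message.
        self.messages.pop(mid, None)
```

If the consumer crashes before calling `delete`, the message simply becomes visible again after the timeout; that automatic reappearance is the transition that is so fiddly to get right with hand-rolled row locking.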
Now if SQS has some super-scalar mega-cluster capability where one instance can deliver 100 billion messages a day across the same group of consumers, ok, I'm impressed, because most MQs can't, because... locking. Thus Kafka (which is not a message queue).
I think the RDBMS MQ should be treated as the "No worse than this" standard - if my fancy new message queueing product is even harder to set up, it isn't worth your trouble. But SQS itself IS pretty easy to use.
In practice, I've never seen this implemented correctly in the wild; most people don't seem to care enough to handle the transactions properly. Additionally, if you want features like dead-letter queues or metrics on stuck-message age, you'll end up with a lot more complexity just to get parity with a standard queue system.
A common library could help with this though.
"Something amusing about this is that it is something that technically steps into the realm of things that my employer does. This creates a unique kind of conflict where I can't easily retain the intellectual property (IP) for this without getting it approved from my employer. It is a bit of the worst of both worlds where I'm doing it on my own time with my own equipment to create something that will be ultimately owned by my employer. This was a bit of a sour grape at first and I almost didn't implement this until the whole Air Canada debacle happened and I was very bored."
I don't have fun examples with message queues, but I do remember some with filesystems - a popular target to connect cursed backends to. You can store data in Ping packets [0]. You can store data in the digits of Pi - achieving unbelievable compression [1]. You can store data in the metadata and other unused blocks of images - also known as steganography [2]. People wrote software to use Gmail emails as a file system [3].
That's just off the top of my head, and it really shows that the sky's the limit with software.
[0] https://github.com/yarrick/pingfs
[1] https://github.com/ajeetdsouza/pifs
[2] https://en.wikipedia.org/wiki/Steganographic_file_system
[3] https://lwn.net/Articles/99933/
I asked if the messages were all under 1.5kb, he said yes. I asked if at-most-once delivery was ok, he said yes. So I proposed he just grab a router and fire messages through it as UDP packets, then use BGP/ECMP to balance the packets between receivers. Add some queues on the router, then just let the receivers pick up the packets as fast as they could. You'd need some kind of feedback to manage back pressure, but ¯\_(ツ)_/¯
A fairly cheap way to achieve 1M+ messages per second.
I never got the chance to flesh-out the idea fully, but the simplicity of it tickled me. Maybe it would have worked, maybe not.
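A loopback sketch of the fire-and-forget idea, with localhost standing in for the router/ECMP fan-out (the function names are made up, and back pressure is omitted, as the shrug above suggests):

```python
import socket


def make_receiver(port=0):
    # Receiver just binds a UDP socket and drains its buffer; with ECMP,
    # the router would spread senders' packets across several of these.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", port))
    sock.settimeout(2.0)  # don't block forever if a packet is dropped
    return sock


def send_message(addr, payload: bytes):
    # Each message must fit one datagram: 1500-byte Ethernet MTU minus
    # 28 bytes of IP+UDP headers. No handshake, no ack: at-most-once.
    assert len(payload) <= 1472, "message must fit a single datagram"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(payload, addr)


receiver = make_receiver()
addr = receiver.getsockname()
send_message(addr, b'{"event": "demo"}')
msg, _ = receiver.recvfrom(2048)
```

On a single hop this mostly preserves order and rarely drops packets, but nothing guarantees either, and (as noted below) nothing prevents duplicates.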
UDP message delivery order is not guaranteed. Hell, UDP delivery itself is not guaranteed (although IME, messages don't usually get dropped unless they cross subnets).
> I asked if at-most-once delivery was ok, he said yes.
Use case satisfied.
>> I asked if at-most-once delivery was ok, he said yes.
> Use case satisfied.
No. https://wiki.wireshark.org/DuplicatePackets
"Connectionless protocols such as UDP won't detect duplicate packets, because there's no information in, for example, the UDP header to identify a packet so that packets can be recognized as duplicates. The data from that packet will be indicated twice (or even more) to the application; it's the responsibility of the application to detect duplicates (perhaps by supplying enough information in its headers to do so) and process them appropriately, if necessary"
My thinking was that ordering would be pretty unaffected when there is only a single hop. But yeah, we would have needed to test that under load.
Anything can be a message queue if you use it wrongly enough - https://news.ycombinator.com/item?id=36186176 - June 2023 (239 comments)
On a related note, have you seen the prices at Whole Foods lately? $6 for a packet of dehydrated miso soup, which usually costs $2.50 prepared and served at a sushi restaurant. AWS network egress fees are similarly blasphemous.
Shame on Amazon, lol. Though it's really capitalism's fault, if you think it through all the way.
Even with the massive margins, cloud computing is far cheaper for most SMEs than hiring an FTE sysadmin and racking machines in a colo.
The problem is that people forget to switch back to the old way when it’s time.
Now every developer also has to be DevOps, learning Docker, Kubernetes, and CI systems instead of just focusing on development.
Also we all still have ops teams.
Edit: And by people with too much money, which was until recently most tech companies.
That very much depends on your use case and billing period. Most of my public web applications run in a colo in Atlanta on containers hosted by less than $2k in hardware and cached by Cloudflare. This replaced an AWS/Digitalocean combination that used to bill about $400/mo.
Definitely worth it for me, but there are some workloads that aren’t worth it and I stick with cloud services to handle.
I would estimate that a significant share of the services hosted on AWS are paid for by small businesses with lower reliability and uptime requirements than I have.
And the only viable answer is the ol’ capitalist saw: they charge what buyers are willing to pay.
That never quite satisfies people though.