> {"error":"Too many requests, please try again later."}
I guess it still works.
lgl · 5h ago
Bug report: when the server is overloaded, the No's are no longer random :)
NotMichaelBay · 5h ago
It's so elegant. Even in failure, it's still operational.
riquito · 5h ago
Love it, it's brilliant, but I think the rate limiting logic is not doing what the author really wants: it actually costs more CPU to detect the limit and produce the error than to return the regular response (then my mind goes on to how to actually over-optimize this thing, but that's another story :-D )
hotheadhacker · 4h ago
Rate limiting has been removed
kenrick95 · 5h ago
Classic Hacker News hug of death
xnorswap · 5h ago
It looks like it's limited to 10 requests per minute, it's less of a hug and more of a gentle brush past.
It's documented as "Per IP", but I'm willing to bet either that documentation is wrong, or it's picking up the IP address of the reverse proxy or whatever else is in front of the application server, rather than the originator IP.
Why do I think that? Well, these headers:
Which means it's not being rate-limited by Cloudflare; it's express doing the rate limiting. And I haven't yet made 10 requests, so unless it's very bad at picking up my IP, it's picking up the Cloudflare IP instead.
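If that's the cause, the usual fix is express's "trust proxy" setting. A minimal sketch, assuming the stock express-rate-limit middleware (my assumption, not confirmed from the repo):

    import express from "express";
    import rateLimit from "express-rate-limit";

    const app = express();
    // Behind Cloudflare, express sees the proxy's address unless told to
    // trust X-Forwarded-For; without this, all clients share one bucket.
    app.set("trust proxy", 1);

    // 10 requests/minute keyed on req.ip (now the originator, not the proxy)
    app.use(rateLimit({ windowMs: 60_000, max: 10 }));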
It'd be easier to add new ones if each were in there a single time. Maybe the duplication is meant to weight the distribution?
finnh · 5h ago
ah, yes, the "memory is no object" way of obtaining a weighted distribution. If you need that sweet sweet O(1) selection time, maybe check out the Alias Method :)
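For the curious, a minimal sketch of the alias method (Vose's variant) in TypeScript, assuming you've deduplicated the responses and kept their counts as weights; every name here is illustrative:

    // Build prob/alias tables in O(n); sample in O(1) afterwards.
    function buildAliasTable(weights: number[]): { prob: number[]; alias: number[] } {
      const n = weights.length;
      const total = weights.reduce((a, b) => a + b, 0);
      const scaled = weights.map((w) => (w * n) / total); // average bucket = 1
      const prob = new Array<number>(n).fill(0);
      const alias = new Array<number>(n).fill(0);
      const small: number[] = [];
      const large: number[] = [];
      scaled.forEach((w, i) => (w < 1 ? small : large).push(i));
      while (small.length && large.length) {
        const s = small.pop()!;
        const l = large.pop()!;
        prob[s] = scaled[s];        // bucket s keeps its own mass...
        alias[s] = l;               // ...and borrows the rest from l
        scaled[l] -= 1 - scaled[s]; // l gave away (1 - scaled[s])
        (scaled[l] < 1 ? small : large).push(l);
      }
      for (const i of [...small, ...large]) prob[i] = 1; // fp leftovers
      return { prob, alias };
    }

    function sample(prob: number[], alias: number[]): number {
      const i = Math.floor(Math.random() * prob.length); // pick a bucket
      return Math.random() < prob[i] ? i : alias[i];     // stay or hop
    }

Table construction is O(n) once at startup; each pick after that is two random numbers and one comparison, however lopsided the weights.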
justin_oaks · 3h ago
Knowing that there are only 25 responses, it makes it all the more funny that rate limiting is mentioned.
> And you can host the service yourself!
Hard pass. I'll read the 25 responses from your gist. Thanks!
ziddoap · 6h ago
Fun idea. I wonder why the rejection messages are repeated so often in the "reasons" file.
"I truly value our connection, and I hope my no doesn't change that." shows up 45 times.
Seems like most of the rejections appear between 30 and 50 times.
mikepurvis · 5h ago
A single large file is also sadness for incorporating suggestions from collaborators as you're always dealing with merge conflicts. Better might be a folder of plain text files, where each can have multiple lines in it, and they're grouped by theme or contributor or something.
spiffyk · 5h ago
A folder of plain text files will be sadness for performance. It's a file with basically line-wise entries; merge conflicts in that will be dead easy to resolve with Git locally. It won't be single-click in GitHub, but it's not too much of a hassle.
mikepurvis · 5h ago
In fairness, I doubt most of these kinds of meme projects have a maintainer active enough to be willing to conduct local merges, even if it's "dead easy" to do so.
Maybe then this is really a request for GitHub to get better/smarter merge tools in the Web UI, particularly syntax-aware ones for structured files like JSON and YAML, where it would be much easier to guess, or even just preset AB and BA as the two concrete options available when both changes inserted new content at the same point. It could even read your .gitattributes file for supported mergers that would be able to telegraph "I don't care about the order" or "Order new list entries alphabetically" or whatever.
cf. https://github.com/jonatanpedersen/git-json-merge
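Git's side of that plumbing already exists as custom merge drivers (gitattributes(5)); a sketch, with the git-json-merge invocation quoted from memory, so check its README:

    # .gitattributes -- hand JSON files to a custom merge driver
    *.json merge=json

    # one-time local setup; %A/%O/%B are ours/base/theirs:
    #   git config merge.json.driver "git-json-merge %A %O %B"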
It's ~fine for performance if you load them once at service startup. But I agree, merging is also no big deal.
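A sketch of that load-once approach, assuming a hypothetical reasons/ directory of line-wise .txt files:

    import { readdirSync, readFileSync } from "node:fs";
    import { join } from "node:path";

    // Read every .txt file once at startup, one rejection per line,
    // and keep the flattened list in memory for the process lifetime.
    function loadReasons(dir: string): string[] {
      return readdirSync(dir)
        .filter((name) => name.endsWith(".txt"))
        .flatMap((name) =>
          readFileSync(join(dir, name), "utf8")
            .split("\n")
            .map((line) => line.trim())
            .filter(Boolean),
        );
    }

    const reasons = loadReasons("reasons"); // paid once, O(1) per request after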
MalbertKerman · 5h ago
There are 25 unique responses in that 1000-line file.
justin_oaks · 3h ago
Once you remove the duplicates that are different only because of the typos in them, yes, that's correct.
KTibow · 5h ago
It might be a weighted random.
ziddoap · 5h ago
Might be!
Not the way I'd approach it, but as a joke service, if it works it works.
khanan · 6h ago
Was wondering the same thing... Probably cruft so it looks impressive at a glance.
Retr0id · 6h ago
If you ask LLMs for a long enough list of things, they often repeat entries.
hombre_fatal · 5h ago
I made a lot of things like this as a noob and threw them up on github.
As you gain experience, these projects become a testament to how far you've come.
"An http endpoint that returns a random array element" becomes so incredibly trivial that you can't believe you even made a repo for it, and one day you sheepishly delete it.
blahaj · 5h ago
I don't think things have to be impressive to be shown. A funny little idea is all you need, no matter how simple the code. Actually I find exactly that quite neat.
Well this is something... someone creating a service off the back of a meme that's been flying around my networks for the past two days...
seabass · 5h ago
{"error":"Too many requests, please try again later."}
a missed opportunity for some humor
readthenotes1 · 5h ago
Beats "I have a headache"
richrichardsson · 5h ago
{"error":"Computer says no."}
Retr0id · 6h ago
It could be genuinely useful for testing HTTP clients if it had a wider array of failure modes.
Some ideas (two of them sketched after the list):
- All the different HTTP status codes
- expired/invalid TLS cert
- no TLS cipher overlap
- invalid syntax at the TLS and/or HTTP level
- hang/timeout
- endless slowloris-style response
- compression-bomb
- DNS failure (and/or round-robin DNS where some IPs are bad)
- infinite redirect loop
- ipv6-only
- ipv4-only
- Invalid JSON or XML syntax
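Two of the cheaper ones (hang and slowloris) fit in a single Node server; the routes are made up:

    import { createServer } from "node:http";

    createServer((req, res) => {
      if (req.url === "/hang") return; // accept the request, then say nothing
      if (req.url === "/slow") {
        res.writeHead(200, { "Content-Type": "text/plain" });
        const drip = setInterval(() => res.write("no "), 1000); // endless body
        req.on("close", () => clearInterval(drip));
        return;
      }
      res.writeHead(418).end(); // everything else gets a teapot
    }).listen(8080);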
zikani_03 · 5h ago
Not exactly what you are asking for, but reminded me that Toxiproxy[0] exists if you want to test your applications or even HTTP clients against various kinds of failures:
[0]: https://github.com/Shopify/toxiproxy
Oh great, it's Balatro's Wheel of Fortune card as a Service (WoFaaS)
hotheadhacker · 4h ago
The API rate limiting has been removed.
n8m8 · 5h ago
inb4 someone genuinely doesn't understand why you wouldn't do this with an LLM
Haeuserschlucht · 6h ago
https://bofh.bjash.com/bofh/bofh1.html
:)
artogahr · 6h ago
:)
blahaj · 6h ago
> Rate Limit: 10 requests per minute per IP
I understand that one wants some rate limiting so that others don't just use this as a backend for their own service, causing every single request to their service to also create an API request.
But this is about as simple and resource-unintensive as an HTTP server gets. 10 requests per minute is just silly.
Also could it be that the limit isn't enforced against the origin IP address but against the whole Cloudflare reverse proxy?
arp242 · 6h ago
Mate, it's a joke, not a serious service. The only silly thing here is going off on a tangent about the rate limit.
jaywcarman · 6h ago
10 requests per minute per IP is plenty enough to play around with and have a little fun. For anything more than that you could (should!) host it yourself.
blahaj · 6h ago
So it is just purposefully made to be less useful? Is that part of the joke?
And the rate limit still almost certainly isn't applied per IP.
mindtricks · 5h ago
If it helps you, think of the rate limiter as the "no" final boss.