> I believe what is happening is that those images are being drawn by some script-kiddies. If I understand correctly, the website limited everyone to 1 pixel per 30 seconds, so I guess everyone was just scripting Puppeteer/Chromium to start a new browser, click a pixel, and close the browser, possibly with IP address rotation, but maybe that wasn't even needed.
I think you perhaps underestimate just how big of a thing this became basically overnight. I mentioned a drawing over my house to a few people and literally everyone instantly knew what I meant without even saying the website. People love /r/place style things every few years, and this having such a big canvas and being on a world map means that there is a lot of space for everyone to draw literally where they live.
ivanjermakov · 4h ago
> Nice idea, interesting project, next time please contact me before.
I understand that my popular service might bring your less popular one to a halt, but please configure it on your end so I know _programmatically_ what its capabilities are.
I host no API without rate-limiting. Additionally, clearly listing usage limits might be a good idea.
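For what it's worth, a minimal sketch of what I mean in nginx, with made-up numbers (the exact values matter less than the limit existing and being visible to clients):

```nginx
# Purely illustrative numbers: cap each client IP at 50 req/s with a small
# burst, and answer over-limit requests with 429 so the limit is discoverable.
limit_req_zone $binary_remote_addr zone=tile_limit:50m rate=50r/s;

server {
    location /tiles/ {
        limit_req zone=tile_limit burst=100 nodelay;
        limit_req_status 429;
    }
}
```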
Aeolun · 3h ago
I think it’s reasonable to assume that a free service is not going to deal gracefully with your 100k rps hug of death. The fact that it actually did is an exception, not the rule.
If you are hitting anything free with more than 10 rps (temporarily), you are taking advantage, in my opinion.
andai · 20h ago
From the screenshot I wanted to say, couldn't this be done on a single VPS? Seemed over engineered to me. Then I realized the silly pixels are on top of a map of the entire earth. Dang!
I'm curious what the peak req/s is like. I think it might be just barely within the range supported by benchmark-friendly web servers.
Unless there's some kind of order of magnitude slowdowns due to the nature of the application.
Edit: Looks like about 64 pixels per km (4096 per km^2). At full color uncompressed that's about 8TB to cover the entire earth (thinking long-term!). 10TB box is €20/month from Hetzner. You'd definitely want some caching though ;)
Edit 2: wplace uses 1000x1000 px pngs for the drawing layer. The drawings load instantly, while the map itself is currently very laggy, and some chunks are permanently missing.
TylerE · 16h ago
"€20/month from Hetzner" is great until you actually need it to be up and working when you need it.
motorest · 7h ago
> "€20/month from Hetzner" is great until you actually need it to be up and working when you need it.
I manage a few Hetzner cloud instances, and some of them report perfect uptime for over a year. For the ones that don't, I was the root cause.
What exactly leads you to make this sort of claim? Do you actually have any data or are you just running your mouth off?
slacktivism123 · 6h ago
>What exactly leads you to make this sort of claim?
https://news.ycombinator.com/item?id=29651993
https://news.ycombinator.com/item?id=42365295
https://news.ycombinator.com/item?id=44038591
IME Hetzner's not unreliable. I don't think you could serve 100k requests per second on a single VPS though. (And with dedicated, you're on the hook for redundancy yourself, same as any dedicated.)
>are you just running your mouth off?
Don't be snarky. Edit out swipes.
https://news.ycombinator.com/newsguidelines.html
cyberpunk · 4h ago
They’re unreliable as soon as you have to deal with their support who have the technical knowledge of a brick.
And as soon as you have to do any business with / deal with the German side of the company, expect everything to slow down to two-week response times, and the response will still be incorrect.
They are simply not worth the hassle. Go with a competent host.
celsoazevedo · 2h ago
Ideally, we'd all be using very good and competent hosting companies, but they're not ideal for free/low-revenue projects. It's better to have some downtime than to have to shut down the service because you can't afford to keep it running.
I think that in both cases here (OpenFreeMap and wplace), Hetzner/OVH/Scaleway is the way to go. Depending on what we're doing, the cost savings can even allow us to have redundancy at another cheap provider just in case something goes wrong.
Aeolun · 3h ago
> They’re unreliable as soon as you have to deal with their support who have the technical knowledge of a brick.
Since I never have to, that’s perfect isn’t it? If you need support from Hetzner you are using the wrong host.
colinbartlett · 23h ago
Thank you for this breakdown and for this level of transparency. We have been thinking of moving from MapTiler to OpenFreeMap for StatusGator's outage maps.
hyperknot · 22h ago
Feel free to migrate. If you ever worry about High Availability, self-hosting is always an option. But I'm working hard on making the public instance as reliable as possible.
Ericson2314 · 11h ago
Oh wow, TIL there is finally a simple way to actually view OpenStreetMap! Gosh, that's overdue. Glad it's done though!
Since the limit you ran into was the number of open files, could you just raise that limit? I get blocking the spammy traffic, but theoretically, could you have handled more if that limit was upped?
hyperknot · 20h ago
I've just posted my question to the nginx community forum (https://community.nginx.org/t/too-many-open-files-at-1000-re...) after a lengthy debugging session with multiple LLMs. Right now, I believe it was the combination of multi_accept + open_file_cache > worker_rlimit_nofile.
Also, the servers were doing 200 Mbps, so I couldn't have kept up _much_ longer, no matter the limits.
toast0 · 19h ago
I'm pretty sure your open file cache is way too large. If you're doing 1k/sec, and you cache file descriptors for 60 minutes, assuming those are all unique, that's asking for 3 million FDs to be cached, when you've only got 1 million available. I've never used nginx or open_file_cache[1], but I would tune it way down and see if you even notice a difference in performance in normal operation. Maybe 10k files, 60s timeout.
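I haven't used nginx myself, so treat this as a rough sketch of what I mean, using the illustrative numbers above:

```nginx
# Illustrative values only: cache at most ~10k descriptors and drop any
# unused for 60s, so the cache stays far below the per-worker fd limit.
open_file_cache          max=10000 inactive=60s;
open_file_cache_valid    60s;
open_file_cache_min_uses 2;
open_file_cache_errors   on;
```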
> Also, the servers were doing 200 Mbps, so I couldn't have kept up _much_ longer, no matter the limits.
For cost reasons or system overload?
If system overload ... What kind of storage? Are you monitoring disk i/o? What kind of CPU do you have in your system? I used to push almost 10GBps with https on dual E5-2690 [2], but it was a larger file. 2690s were high end, but something more modern will have much better AES acceleration and should do better than 200 Mbps almost regardless of what it is.
[1] To be honest, I'm not sure I understand the intent of open_file_cache... Opening files is usually not that expensive; maybe at hundreds of thousands of rps, or if you have a very complex filesystem. PS: don't put tens of thousands of files in a directory. Everything works better if you take your ten thousand files and put one hundred files into each of one hundred directories. You can experiment to see what works best with your load, but a tree where you've got N layers of M directories and the last layer has M files is a good plan, 64 <= M <= 256. The goal is keeping the directories compact so searching and editing stay efficient.
[2] https://www.intel.com/content/www/us/en/products/sku/64596/i...
If you do 200Mbps on a hetzner server after cloudflare caching, you are going to run out of traffic pretty rapidly. The limit is 20TB / month (which you’d reach in roughly 9 days).
ndriscoll · 20h ago
One thing that might work for you is to actually make the empty tile file, and hard link it everywhere it needs to be. Then you don't need to special case it at runtime, but instead at generation time.
NVMe disks are incredibly fast and 1k rps is not a lot (IIRC my n100 seems to be capable of ~40k if not for the 1 Gbit NIC bottlenecking). I'd try benchmarking without the tuning options you've got. Like do you actually get 40k concurrent connections from cloudflare? If you have connections to your upstream kept alive (so no constant slow starts), ideally you have numCores workers and they each do one thing at a time, and that's enough to max out your NIC. You only add concurrency if latency prevents you from maxing bandwidth.
hyperknot · 20h ago
Yes, that's a good idea. But we are talking about 90+% of the tiles being empty (I might be wrong on that), and that's a lot of hard links. I think the nginx config just needs to be fixed; I hope I'll receive some help on their forum.
ndriscoll · 19h ago
You could also try turning off the file descriptor cache. Keep in mind that nvme ssds can do ~30-50k random reads/second with no concurrency, or at least hundreds of thousands with concurrency, so even if every request hit disk 10 times it should be fine. There's also kernel caching which I think includes some of what you'd get from nginx's metadata cache?
justinclift · 8h ago
> so I couldn't have kept up _much_ longer, no matter the limits.
Why would that kind of rate cause a problem over time?
wiradikusuma · 7h ago
"Wplace.live happened. Out of the blue, a new collaborative drawing website appeared, built from scratch using OpenFreeMap." -- as a founder, you know you're working on the wrong thing when there's a "fun project" getting daily traffic more than what you'd get in a lifetime :)
arend321 · 1h ago
I'm using OpenFreeMap commercially, fantastic and stable service.
rtaylorgarlock · 21h ago
Is it always/only 'laziness' (derogatory, I know) when caching isn't implemented by a site like wplace.live? Why wouldn't they save OpenFreeMap all the traffic, when a caching server on their side could presumably serve tiles almost as fast or faster than OpenFreeMap?
toast0 · 20h ago
Why should they when openfreemap is behind a CDN and their home page says things like:
> Using our public instance is completely free: there are no limits on the number of map views or requests. There’s no registration, no user database, no API keys, and no cookies. We aim to cover the running costs of our public instance through donations.
> Is commercial usage allowed?
> Yes.
IMHO, reading this and then just using it makes a lot of sense. Yeah, you could put a cache in front of their CDN, but why, when they said it's all good, no limits, for free?
I might wonder a bit, if I knew the bandwidth it was using, but I might be busy with other stuff if my site went unexpectedly viral.
Aeolun · 3h ago
I think, when you read that, you should be reassured that nobody is going to suddenly tell you to pay, and then still implement caching on your own side to preserve the free offering for everyone else.
Seriously, whose first thought on reading that is “oh great, I can exploit this”.
naniwaduni · 37m ago
You don't need to be thinking "I can't exploit this" when you can just stop thinking about it.
VladVladikoff · 21h ago
I actually have a direct answer for this: priorities.
I run a fairly popular auction website and we have map tiles via stadia maps. We spend about $80/month on this service for our volume. We definitely could get this cost down to a lower tier by caching the tiles and serving them from our proxy. However we simply haven’t yet had the time to work on this, as there is always some other task which is higher priority.
latchkey · 15h ago
Like reading and commenting on HN articles! ;-)
hyperknot · 21h ago
We are talking about an insane amount of data here. It was 56 Gbit/s (or 56 x 1 Gbit servers 100% saturated!).
This is not something a "caching server" could handle. We are talking about needing something on the order of a CDN network, like Cloudflare, to handle this.
Sesse__ · 18h ago
> We are talking about an insane amount of data here. It was 56 Gbit/s. This is not something a "caching server" could handle.
You are not talking about an insane amount of data if it's 56 Gbit/s. Of course a caching server could handle that.
Source: Has written servers that saturated 40gig (with TLS) on an old quadcore.
hyperknot · 17h ago
OK, technically such servers might exist; I guess Netflix and friends are using them. But we are talking about a community-supported, free service here. Hetzner servers are my only option, because of their unmetered bandwidth.
Sesse__ · 16h ago
It really depends on the size of the active set. If it fits into RAM of whatever server you are using, then it's not a problem at all, even with completely off-the-shelf hardware and software. Slap two 40gig NICs in it, install Varnish or whatever and you're good to go. (This is, of course, assuming that you have someone willing to pay for the bandwidth out to your users!)
If you need to go to disk to serve large parts of it, it's a different beast. But then again, Netflix was doing 800gig already three years ago (in large part from disk) and they are handicapping themselves by choosing an OS where they need to do significant amounts of the scaling work themselves.
hyperknot · 16h ago
I'm sure the server hardware is not a problem. The full dataset is 150 GB, most of which will never be requested, and the server has 64 GB of RAM. So I'm sure that the tiles actually in use would get served from the OS cache. If not, the data is on a RAID 0 NVMe SSD, connected locally.
What I've been referring to is the fact that even unlimited 1 Gbps connections can be quite expensive; now try to find a 2x40 gig connection for reasonable money. That one user generated 200 TB in 24 hours! I have no idea about bandwidth pricing, but I bet it ain't cheap to serve that.
Sesse__ · 15h ago
Well, “bandwidth is expensive” is a true claim, but it's also a very different claim from “a [normal] caching server couldn't handle 56 Gbit/sec”…?
hyperknot · 15h ago
You are correct. I was putting "a caching server on their side" in the context of their side being a single dev hobby project running on a VPS, exploding on the weekend. I agree that these servers do exist and some companies do pay for this bandwidth as part of their normal operations.
Aeolun · 2h ago
56 Gbit/sec costs you about €590/day even on Hetzner.
bigstrat2003 · 7h ago
I realize that what constitutes "insane" is a subjective judgement. But, uh... I most certainly would call 56 Gbps insane. Which is not to say that hardware which handles it doesn't exist. It might not even be especially insane hardware. But that is a pretty insane data rate in my book.
ndriscoll · 21h ago
I'd be somewhat surprised if nginx couldn't saturate a 10Gbit link with an n150 serving static files, so I'd expect 6x $200 minipcs to handle it. I'd think the expensive part would be the hosting/connection.
wyager · 21h ago
> or 56 x 1 Gbit servers 100% saturated
Presumably a caching server would be 10GbE, 40GbE, or 100GbE
56Gbit/sec of pre-generated data is definitely something that you can handle from 1 or 2 decent servers, assuming each request doesn't generate a huge number of random disk reads or something
markerz · 21h ago
It looks like a fun website, not a for-profit website. The expectation for fun websites is more to just get it working than to handle the scale. It sounds like their user base exploded overnight, doubling every 14 hours or so. It also sounds like it's either a solo dev or a small group, based on the maintainer's wording.
eggbrain · 22h ago
Limiting by referrer seems strange — if you know a normal user makes 10-20 requests (let’s assume per minute), can’t you just rate limit requests to 100 requests per minute per IP (5x the average load) and still block the majority of these cases?
Or, if it’s just a few bad actors, block based on JA4/JA3 fingerprint?
hyperknot · 22h ago
What if one user really wants to browse around the world and explore the map? I remember spending half an hour in Google Earth desktop, just exploring around interesting places.
I think referer-based limits are better; this way I can ask heavy users to please choose self-hosting instead of the public instance.
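Roughly the kind of thing I have in mind (a sketch, not the actual config; the zone size and rate are placeholders):

```nginx
# Sketch only: reduce the Referer to its host so every page of one site
# shares a single bucket, then rate limit per site rather than per user.
# Requests without a Referer get an empty key and skip this limit.
map $http_referer $referer_host {
    default            "";
    ~^https?://([^/]+) $1;
}

limit_req_zone $referer_host zone=per_site:50m rate=200r/s;

server {
    location /tiles/ {
        limit_req zone=per_site burst=400;
        limit_req_status 429;
    }
}
```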
toast0 · 19h ago
Limiting by referrer is probably the right first step. (And changing the front page text)
You want to track usage by the site, not the person, because you can ask a site to change usage patterns in a way you can't really ask a site's users. Maybe a per IP limit makes sense too, but you wouldn't want them low enough that it would be effective for something like this.
parhamn · 7h ago
Since cloudflare is already sponsoring it, I do wonder how much of this type of service can be implemented all on cloudflare. Their stack could be great for tile serving.
leobuskin · 6h ago
I’m also surprised to see nginx and Hetzner in this project. Why not go entirely Cloudflare: Workers, R2, and cache?
conradfr · 2h ago
You can get cheap dedicated servers on Hetzner with unlimited bandwidth; would the cost be similar with CF?
PUSH_AX · 6h ago
You can run containers on CF now; I think this was one of the last barriers that might have prevented a lot of software from being migrated without re-architecting.
Most stuff could run there now.
jspiner · 22h ago
The cache hit rate is amazing. Is there something you implemented specifically for this?
hyperknot · 22h ago
Yes, I designed the whole path structure / location blocks with caching in mind. Here is the generated nginx.conf, if you are interested: https://github.com/hyperknot/openfreemap/blob/main/docs/asse...
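The basic idea is something like this (simplified; the paths and values here are illustrative, the real thing is in the linked file):

```nginx
# Simplified illustration, not the real config: versioned, immutable tile
# paths let the CDN cache forever; a new planet build just uses a new prefix.
location ~ ^/tiles/[0-9]+/[0-9]+/[0-9]+\.pbf$ {
    root /data/ofm;
    add_header Cache-Control "public, max-age=31536000, immutable";
}
```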
Curious how this would have compared to a static pmtiles file being read directly by maplibre. I’ve had good luck with virtually equal latency to served tiles when consuming pmtiles via range requests on Bunnycdn.
hyperknot · 13h ago
Yes, wplace could cover their whole need with a single, custom-built static pmtiles file. No need to serve 150 GB of OSM data for their use case.
Interesting, I should benchmark this. I have only used Bunnycdn so far and latency seemed similar to most tile providers like Maptiler and others (but a very limited test). This was using the full planet pmtiles file.
Bunnycdn also makes it easy to prevent downloading the entire file, either in case you care about anyone using it or just want to prevent surprise downloads for those exploring network tab.
biker142541 · 10h ago
Quick benchmark of pmtiles directly in maplibre vs. served tiles, both via Bunnycdn, with 5 areas sampled using the same style.
Total impact on end-to-end page load time: 39 ms longer with cached range requests from pmtiles than with cached tiles.
Individual requests are comparable, in the 20-35 ms range, so the slight extra time seems to come from the additional round trip for range headers (makes sense).
CommanderData · 3h ago
I love that sites like wplace can still go viral and blow up in an age of an increasingly centralised web. Woot
charcircuit · 22h ago
>Nice idea, interesting project, next time please contact me before.
It's impossible to predict whether one's project will go viral.
>As a single user, you broke the service for everyone.
Or you did, by not having a high enough fd limit. Blaming a site for using the service too much, when you advertise that there are no limits, is not cool. It's not like wplace themselves were maliciously hammering the API.
columb · 22h ago
You are so entitled... People like you are the reason most nice things come with "no limits, but...". It's not cool to stress test someone's infrastructure. Not cool.
The author of this post is more than understanding, tried to fix it and offered a solution even after blocking them. On a free service.
Show us what you have done.
charcircuit · 22h ago
>You are so entitled
That's how agreements work. If someone says they will sell a hamburger for $5, and another person pays $5 for a hamburger, then they are entitled to a hamburger.
>On a free service.
It's up to the owner to price the service. Being overwhelmed by traffic when there are no limits is not a problem limited only to free services.
perching_aix · 21h ago
> Do you offer support and SLA guarantees?
>
> At the moment, I don’t offer SLA guarantees or personalized support.
From the website.
eszed · 20h ago
Sure, and if you bulk-order 5k hamburgers the restaurant will honor the price, but they'll also tell you "we're going to need some notice to handle that much product". Perfect analogy, really. This guy handled the situation perfectly, imo.
charcircuit · 19h ago
Except in this case the restaurant would have been able to handle the 5k orders if they didn't arbitrarily have their workers work with their hands tied behind their backs. And instead of untying their workers and appreciating the realization that they were accidentally bottlenecking themselves, they blame the nearby event that caused a spike in foot traffic.
Publicly attacking your users instead of celebrating their success and your new learnings is not what I would call handling it perfectly. I think going for a halo-effect strategy, where you celebrate how people are using your platform to accomplish their goals, helps people understand why what you've built is valuable and makes them want to adopt it or support it financially. On the other hand, publicly attacking people who use your platform can make others apprehensive about using it, fearing that they will be criticized too.
austhrow743 · 12h ago
Hamburger situation is not comparable. It’s a trade.
This is just someone being not very specific in a text file on their computer. I have many such notes, some of them publicly viewable.
010101010101 · 22h ago
Do you expect him just to let the service remain broken or to scale up to infinite cost to himself on this volunteer project? He worked with the project author to find a solution that works for both and does not degrade service for every other user, under literally no obligation to do anything at all.
This isn’t Anthropic deciding to throttle users paying hundreds of dollars a month for a subscription. Constructive criticism is one thing, but entitlement to something run by an individual volunteer for free is absurd.
toast0 · 20h ago
The project page kind of suggests he might scale up to infinite cost...
> Financially, the plan is to keep renting servers until they cover the bandwidth. I believe it can be self-sustainable if enough people subscribe to the support plans.
Especially since he said Cloudflare is providing the CDN for free... Yes, running the origins costs money, but in most cases, default fd limits are low and you can push them a lot higher. At some point you'll run into I/O limits, but I think the I/O at the origin seems pretty manageable if my napkin math was right.
If the files are all tiny, and the fd limit is the actual bottleneck, there are ways to make that work better too. IMHO, it doesn't make sense to accept an inbound connection if you can't get an fd to read a file for it, so it's better to limit the concurrent connections, let connections sit in the listen queue, and use a short keepalive timeout to make sure you're not wasting fds on idle connections. With no other knowledge, I'd put the connection limit at half the fd limit, assuming the origin server is dedicated to this and serves static files exclusively. But, to be honest, if I set up something like this, I probably wouldn't have thought about fd limits until they got hit, so no big deal... hopefully whatever I used for monitoring would include available fds by default and I'd have noticed, but it's not a default output everywhere.
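In nginx terms (again, not something I've used much, so take this as a sketch with made-up numbers):

```nginx
# Made-up numbers: plenty of fds per worker, connections capped at roughly
# half of that so every accepted connection can still open a file, and a
# short keepalive so idle connections don't hold fds for long.
worker_rlimit_nofile 65536;

events {
    worker_connections 32768;
}

http {
    keepalive_timeout 5s;
}
```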
charcircuit · 22h ago
We are talking about hosting a fixed amount of static files. This should be a solved problem. This is nothing like running large AI models for people.
010101010101 · 21h ago
The nature of the service is completely irrelevant.
charcircuit · 21h ago
Running a no limit service for free definitely depends on the marginal cost of serving a single request.
rikafurude21 · 22h ago
The funny part is that his service didn't break: Cloudflare's cache caught 99% of the requests. He just wanted to feel powerful and break the latest viral trend.
hoppp · 19h ago
Cool... You did well to ban them.
It's a DDoS attack; lucky you don't have to pay for the bandwidth, otherwise it's a denial of wallet.
perching_aix · 20h ago
Haven't worked with Cloudflare yet first hand, and I'm not familiar with web map tech. But if the site really is pretty much just serving lots of static files, why is Hetzner in the loop? Wouldn't fully migrating to Cloudflare Pages be possible?
internetter · 20h ago
The tiles need to be rendered. Yes, frequent tiles can be cached, but you already have a cache… it’s Cloudflare. Theoretically you could port the tileserver to Cloudflare Pages, but then you’d need to… port it… and it probably wouldn’t be cheaper.
hyperknot · 20h ago
They are actually static files. There are just too many of them, about 300 million. You cannot put that in Pages.
jonathanlydall · 19h ago
Is CloudFlare’s R2 an option for you?
hoppp · 19h ago
It would cost a lot. Hetzner is hardware, and you can hammer it; bandwidth is free. You get a very good server for cheap.
Cloudflare would be pay-per-request: a hefty sum if a DDoS happens.
perching_aix · 19h ago
Did a quick cost calc (with the help of gpt5, so might be wrong) when I read their comment about Pages not being suitable for this many files.
They say they're receiving $500/mo in donations and that it's currently just enough to cover their infra costs. Given 300 million 70 KB files, R2 plus a high cache hit ratio would work out to about $300/mo in storage plus request costs, or $600/mo with Cache Reserve, in which case they'd always hit cache if I understand the project right, meaning the costs shouldn't blow up beyond that and request count would essentially not matter.
hoppp · 18h ago
Yeah, but the cost is not a fixed monthly sum, and things can go wrong, as we can see from the blog post.
An accident could bankrupt the dev.
A dedicated server will always cost the same so you always know how much you pay.
It costs 40 euros/month to get 6 cores/12 threads, 64 GB of RAM, and 1 TB of SSD.
Dirt cheap compared to any other alternative
cuu508 · 17h ago
Factor in that you also need resources to generate and upload the tiles weekly.
perching_aix · 20h ago
Oh interesting, okay. For some reason I had the impression that the tiles were static and rendered offline.
ohdeargodno · 17h ago
No one renders tiles on servers anymore. The vast majority of services have moved on to sending out vector tiles.
bravesoul2 · 8h ago
429 is your friend... but well done for handling the load!
LoganDark · 22h ago
> I believe what is happening is that those images are being drawn by some script-kiddies.
Oh absolutely not. I've seen so many autistic people literally just nolifing and also collaborating on huge arts on wplace. It is absolutely not just script kiddies.
> 3 billion requests / 2 million users is an average of 1,500 req/user. A normal user might make 10-20 requests when loading a map, so these are extremely high, scripted use cases.
I don't know about that either. Users don't just load a map, they look all around the place to search for and see a bunch of the art others have made. I don't know how many requests is typical for "exploring a map for hours on end" but I imagine a lot of people are doing just that.
I wouldn't completely discount automation, but these usage patterns seem far from impossible. Especially since wplace didn't expect sudden popularity, they may not have optimized their traffic patterns as much as they could have.
Karliss · 21h ago
Just scrolled around a little for 2-3 minutes with the network monitor open. That already resulted in 500 requests and 5 MB transferred (after filtering for vector tile data). Not sure how many of those were cached by the browser with no actual requests, cached by the browser exchanging only headers, or cached by Cloudflare. I'm guessing that the typical 10-20 requests/user case is for embedded map fragments like those commonly found on a contact page, where most users don't scroll at all, or at most zoom out slightly to better see the rest of the city.
nemomarx · 22h ago
There are some user scripts to overlay templates on the map and coordinate working together, but I can't imagine that increases the load much. What might increase it is that wplace has been struggling under the load, and you have to refresh to see your pixels placed or any changes; that could be causing more calls per hour.
v5v3 · 22h ago
The article mentions Cloudflare, so how much of this was cached by them?
> Please don't comment on whether someone read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that".
RandomBacon · 20h ago
That guideline is decent I guess.
I am disappointed that they edited another guideline for the worse:
> Please don't comment about the voting on comments. It never does any good, and it makes boring reading.
It used to just say, don't complain about voting.
If the number of votes is so taboo, why do they even show us user karma numbers (and have a top list)?
RandomBacon · 9h ago
We can't even talk about the guidelines?
keketi · 22h ago
Are you new? Nobody actually reads the articles.
LorenDB · 22h ago
False. I almost never upvote an article without reading it, and half of those upvotes are because I already read something similar recently that gave me the same information.
eszed · 20h ago
I'll submit in the second case (already read something similar) that, properly speaking, we should read both, and upvote (or submit, if not already here) the better of the articles.
Not that, you know, I often take the time to do that, either - but it would improve the site and the discussions if we all did.
fnord77 · 22h ago
sounds like they survived 1,000 reqs/sec and the cloudflare CDN survived 99,000 reqs/sec
feverzsj · 22h ago
So, OFM was hit by another Million Dollar Homepage for kids.
willsmith72 · 21h ago
so 96% availability = "survived" now?
but interesting write-up. If I were a consumer of OpenFreeMap, I would be concerned that such an availability drop was only detected by user reports
ndriscoll · 21h ago
If I were a consumer of a free service from someone who will not take your money to offer support or an SLA (i.e. is not trying to run a business), I would assume there's little to no monitoring at all.
timmg · 21h ago
96% during a unique event. I think you would typically consider long term in a stat like that.
Assuming it was close to 100% the rest of the year, that works out to 99.97% over 12 months.