A big focus is (rightly) on rural areas, but mobile internet packet loss can also be a big issue in cities or places where there are a lot of users. It's very frustrating to be technically online, but effectively offline. An example: Using Spotify on a subway works terribly until you go into Airplane mode, and then it suddenly works correctly with your offline music.
epistasis · 7h ago
When Apple did their disastrous Apple Music transition, I was in the habit of daily recreation that involved driving in areas without mobile access.
All of a sudden one day, I was cut off from all my music, by the creators of the iPod!
I switched away from Apple Music and will never return. 15 years of extensive usage of iTunes, and now I will never trust Apple with my music needs again. I'm sure they don't care, or consider the move a good tradeoff for their user base, but it's the most user hostile thing I've ever experienced in two decades on Apple platforms.
w10-1 · 4h ago
Forget internet: just sync.
Add music on macOS, and on your phone. Then sync.
RESULT: one overwrites the other, regardless of any settings.
You no longer have the audio you formerly owned.
rsync · 2h ago
In 2025 I have a dedicated pixel5 with no SIM card that is nothing but an mp3 player.
It has nothing installed but VLC.
Life is too short to deal with the ridiculous interoperability of (simple music files) and (any modern computing platform).
copperx · 1h ago
What is the function of parentheses here?
rjbwork · 1h ago
Think of them as variables - a stand in for any given file format or modern operating system.
ofalkaed · 53m ago
The language itself gets that across without the parentheses; it's literally what was said.
babypuncher · 1h ago
There's a whole cottage industry of Android powered digital music players. They usually skimp on things like screen quality and compute power, but add things like microSD slots, physical transport controls, and high quality DAC & amp hardware. It's gotten very competitive in recent years.
simonw · 7h ago
Did the "download" option in Apple Music not work? Or was that not available when they first launched the new app?
amendegree · 7h ago
The OP already had the music downloaded to his device. When apple switched to the streaming service they deleted all that… you still technically owned the music, but now it had to be streamed. I also don’t recall if they started with an offline feature.
Spooky23 · 4h ago
The magic was that you had to have iTunes Match or manually sync. Years later, few people remember or are still shaking their fist and babbling over U2.
Apple didn’t communicate that well and many folks lost stuff, particularly if they are picky about recordings.
All of the CD collection stuff has degraded everywhere as the databases of tracks have been passed around to various overlords.
thom · 5h ago
Mine just magically worked the whole time. In fact just last week I noticed I still had CD scratch artefacts in one of my tunes on Apple Music which I must have ripped 20-25 years ago (and went and redownloaded it from Apple instead).
icedchai · 6h ago
I never ran into this. I've never switched to their streaming service and still use the iTunes app on my phone, which lets you download.
Henchman21 · 5h ago
Well my library was essentially destroyed by their actions. Albums that I own and ripped my damn self now have holes in them — all the wildly popular tracks on many of my albums are gone. The metadata still shows the track but it won't play. The artwork I carefully curated was overwritten with unrelated junk albums, often $0.99 compilations that you might've found in a bargain bin 20 years ago. Even with data I created myself, Apple felt zero issue overwriting it.
Oh and all my lossless got shit on.
Fuck me I guess??
kccqzy · 6h ago
Yeah but many albums are only available via streaming, not for purchasing outright.
babypuncher · 1h ago
When media companies refuse to sell me something, I take it as permission to sail the high seas for it
epistasis · 6h ago
There was a random smattering of songs from my library on my device, but not according to anything I regularly listened to.
I couldn't be bothered to spend time manually selecting stuff to download back then. It was offensive to even be asked to spend 30 minutes manually correcting a completely unnecessary mistake on their part. And this was during a really, really bad time in interface design, with the flat UI idiocy all the rage, and when people were abandoning all UI standards that gave any affordances at all.
If I'm going to go and correct Apple's mistake, I may as well switch to another vendor and do it. Which is what I did. I'm on Spotify to this day, even though it has many of the same problems as Apple Music. At least Spotify had fewer bugs at the time, and they hadn't deleted music off my device.
Good riddance and I'll never go back to Apple Music.
Rebelgecko · 2h ago
At least when I last tried the android version of the Apple Music app, when you're technically but not reliably connected to the Internet (captive portal, crappy signal quality, etc), operations like play/next track/previous track would hang for 60 seconds before the UI responded
mr_toad · 2h ago
Yeah you need to navigate to “Library -> Downloaded Music” and play from there. Otherwise it will try and phone home.
joshmarinacci · 6h ago
It did have that at launch, but the transition was very confusing. There was (is?) an "iTunes Match" thing to replicate your personal mp3s in the cloud rather than uploading them. It was a real mess.
crazygringo · 1h ago
This a million times. Spotify on the subway is infinitely frustrating until you go into airplane mode.
Ideally, apps shouldn't detect if you have internet and then act differently. They should pull up your cached/offline data immediately and then update/sync as attempted connections return results.
The model where you have offline data but you can't even see your playlists because it wants to load them because it thinks you have internet is maddening.
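A minimal sketch of that cache-first pattern in TypeScript, assuming a browser context (the /api/playlists endpoint, the Playlist shape, and renderPlaylists are all hypothetical): show whatever is cached immediately, then refresh in the background if the network cooperates.

    type Playlist = { id: string; name: string };

    const CACHE_KEY = "playlists-cache";

    // Stand-in for real UI code; `stale` could drive a subtle "updating..." hint.
    function renderPlaylists(playlists: Playlist[], stale: boolean): void {
      console.log(stale ? "cached" : "fresh", playlists);
    }

    async function loadPlaylists(): Promise<void> {
      // 1. Render whatever we already have, immediately, with no connectivity check.
      const cached = localStorage.getItem(CACHE_KEY);
      if (cached) renderPlaylists(JSON.parse(cached) as Playlist[], true);

      // 2. Try to refresh in the background; failure just leaves the cached view up.
      try {
        const res = await fetch("/api/playlists");
        if (!res.ok) return;
        const fresh = (await res.json()) as Playlist[];
        localStorage.setItem(CACHE_KEY, JSON.stringify(fresh));
        renderPlaylists(fresh, false);
      } catch {
        // Offline or flaky: keep showing the cached data, maybe schedule a retry.
      }
    }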
bityard · 6h ago
Very good point. We've had several power outages lasting a few hours lately. (One was just last night.) Every time this happens, my phone's mobile data is totally unusable because the whole neighborhood switches over from scrolling facebook (et al.) on their wifi to scrolling facebook on mobile.
I can (and do) find things around the house that don't depend on a screen, but it's annoying to know that I don't really have much of a backup way to access the internet if the power is out for an extended period of time. (Short of plunking down for an inverter generator or UPS I suppose.)
BenjiWiebe · 4h ago
If your ISP is available during a power outage (as they should be) a UPS that only powers a WiFi router could be quite small/cheap.
Or you could use a Raspberry Pi or similar and a USB WiFi adapter (make sure it supports AP mode) and a battery bank, for an "emergency" battery-operated WiFi router that you'd only use during power outages.
EDIT: Unless your ISP's CPE (modem/whatever) runs on 5 volts, you'd need more than just a USB power bank to keep things going. Maybe a cheap amazon boost converter could get you the extra voltage option.
DamonHD · 3h ago
FWIW the routers I have owned over the last few years have been 12V @ 1A.
I run my router + my RPi server off-grid with ~1kWh of usable (lead-acid) battery capacity.
So with those and my laptop's battery, I sailed into our last couple of minor daytime power cuts without even noticing. Sounds of commotion from neighbours alerted me that something was up!
dghlsakjg · 1h ago
Spotify deals with degraded connections absolutely horrendously (on iOs anyway).
If I have a podcast already downloaded, but I am on an iffy connection, Spotify will block me from getting to that podcast view while it tries to load the podcast view from the web instead of using downloaded data.
I frequently put my phone in airplane mode to force spotify into offline mode to get content to play.
hypeatei · 7h ago
I'm on fiber at home and my ISP did a backend update which is dropping packets specifically on IPv6 for some reason. Most sites are unusable and other software isn't handling it very well (e.g. android) with frequent "no internet" popups.
98codes · 7h ago
The only thing worse than no internet is one bar of signal.
usmanity · 3h ago
100% this, I am almost always on 5G or LTE but in some areas in my city it seems like not even a webpage will load on either. In this case, using any apps is useless and google/kagi search feels like it takes too long to find something basic.
rendaw · 5h ago
Also subways, and people with cheap data plans that get throttled after 1GB. Google maps regularly says "no results found" because the connection times out.
genocidicbunny · 7h ago
Speaking of Airplanes, I also frequently have issues with apps and websites when using in-flight wifi due to the high latency and packet loss. Incidentally, Spotify is one of said apps, which often means I need to manually set it to offline mode to get it to work.
deltaburnt · 6h ago
It's deeply ironic how awfully designed the NYT games app is for offline use given many people use it on the subway. Some puzzles will cache, others won't. They only cache after you manually open them.
zzo38computer · 8m ago
I think you should not assume fast internet, or any internet, when it is not necessary to do so. Many programs could mostly work without an internet connection (e.g. an email program only needs to connect to the internet to send/receive; you can compose drafts and read messages that have already been received without a connection), so they should be designed to work mostly without an internet connection where appropriate (this also helps avoid spyware, etc.). When you do need an internet connection, you should avoid sending excessive data (for HTML files, this includes pictures, CSS, JavaScript, etc.; for other protocols and file formats it includes other things), too.
For such things as streaming audio/video, there is the codec and other things to be considered as well. If the data can be coded in real time or if multiple qualities are available already on the server then this can be used to offer a lower quality file to clients that request such a file. The client can download the file for later use and may be able to continue download later, if needed.
There is also, e.g. do you know that you should need a video call (or whatever else you need) at all? Sometimes, you can do without it, or it can be an optional possibility.
There is also the matter of avoiding the need for specific computers. It is not only about internet access, although that is a part of it, too. However, this does not mean that computers and the internet cannot be helpful. They can be helpful, but should not be relied on so heavily.
The Gemini protocol does not have anything like the Range request and Content-length header, and I thought this was not good enough so I made one that does have these things. (HTTP allows multiple ranges per request, but I thought that is more complicated than it needs to be, and it is simpler to only allow one range per request.)
zeinhajjali · 7h ago
This reminds me of a project I worked on for a grad school data science course here in Canada. We tried to map this "digital divide" using public data.
Turns out, it's really tough to do accurately. The main reason is that the public datasets are a mess. For example, the internet availability data is in neat hexagons, while the census demographic data is in weird, irregular shapes that don't line up. Trying to merge them is a nightmare and you lose a ton of detail.
So our main takeaway, rather than just being a pretty map, was that our public data is too broken to even see the problem clearly.
Really interesting perspective, thanks for sharing.
I think in so many fields the datasets are by far the highest impact thing someone can work on, even if it seems a bit mundane and boring. Basically every field I've worked in struggles for need of reliable, well maintained and open access data, and when they do get it, it usually sets off a massive amount of related work (Seen this happen in genetics, ML of course once we got ImageNet and also started getting social media text instead of just old newspaper corpuses).
That would definitely be advice I'd give to many people searching for a project in a field -- high quality data is the bedrock infrastructure for basically all projects in academic and corporate research, so if you provide the data, you will have a major impact, pretty much guaranteed.
hardolaf · 5h ago
I'm in the USA with nominally a 1.25 Gb/s down, 50 Mb/s up connection from my cable ISP. You'd think that it would be fast, low latency, and reliable. Well, that would be true, except my ISP is Xfinity (Comcast). At least 4 times per week, I experience packet loss frequent enough that older web servers still work but most newer TCP-based technology just fails. And the connection will randomly fail for 10 minutes to 2 days at a time; sure, they give me a credit for it.
So anyways, I bring this up with my local government in Chicago and they recommend that I switch to AT&T Fiber because it's listed as available at my address in the FCC's database. Well, I would love to do that except that
1. The FCC's database was wrong and rejected my corrections multiple times before AT&T finally ran fiber to my building this year (only 7 years after they claimed that it was available in the database despite refusing to connect to the building whenever we tried).
2. Now that it is in the building, their Fiber ISP service can't figure out that my address exists and has existing copper telephone lines run to it by AT&T themselves so their system cannot sell me the service. I've been arguing with them for 3 months on this and have even sent them pictures of their own demarc and the existing copper lines to my unit.
3. Even if they fixed the previous issue, they coded my address as being on a different street than its mailing address and can't figure out how to sell me a consumer internet plan with this mismatch. They could sell me a business internet plan at 5x the price though.
And that's just my personal issues. And I haven't even touched on how not every cell phone is equally reliable, how the switch to 5G has made many cell phones less reliable compared to 3G and 4G networks, how some people live next to live event venues where they can have great mobile connections 70% of the time but the other 30% of the time it becomes borderline unusable, etc.
HPsquared · 7h ago
Oddly fitting (or perhaps that's double irony) that your "mapping the digital divide" project was derailed by the literal digital mapping division boundaries.
At one of my previous jobs, we designed a whole API to be slightly more contrived but to require only one round-trip for all key data, to address the iffy internet connectivity most of our users had. The frontend also did a lot of background loading to hide the latency when scrolling.
It's really eye-opening to set up something like toxiproxy, configure bandwidth limitations, latency variability, and packet loss in it, and run your app, or your site, or your API endpoints over it. You notice all kinds of UI freezing, lack of placeholders, gratuitously large images, lack of / inadequate configuration of retries, etc.
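For what it's worth, the "one round-trip for all key data" idea can be as simple as a single aggregate endpoint. A hedged sketch, with the /api/bootstrap route and its fields invented for illustration:

    interface BootstrapPayload {
      user: { id: string; name: string };
      unreadCount: number;
      recentItems: Array<{ id: string; title: string }>;
    }

    // One request instead of separate /user, /notifications and /items calls:
    // on a 600 ms round trip, that's the difference between ~0.6 s and ~1.8 s
    // of serial waiting before anything useful can render.
    async function loadInitialData(): Promise<BootstrapPayload> {
      const res = await fetch("/api/bootstrap");
      if (!res.ok) throw new Error(`bootstrap failed: ${res.status}`);
      return (await res.json()) as BootstrapPayload;
    }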
Tteriffic · 5h ago
Years ago API’s and apps that used them were expected to do some work offline and on slow networks. Then, suddenly, everyone was expected to have stable Internet to do anything. The reason, I think, is the few apps that expected to be always online seemed better to users and easier to architect. So most architectures went that way.
sfn42 · 2h ago
The reason is developers are worse. A decade or two ago, developers were nerds who loved tinkering with computers. Today tech is big money so everyone wants in. Most devs don't care, it's just a paycheck. Until someone starts setting expectations and holding people responsible for their trash code, we'll continue to see code monkeys write broken codebases. Not to mention where is the mentorship? I've worked as a mechanic, I've worked in construction, I've worked in a store. In all the above, I was mentored. Not for very long in the store but still, the other two have 2 year apprenticeship programs.
Then I got a degree and a dev job. Apprenticeship? Nah dude, here's a big legacy app for you, have fun. Mentorship? Okay, I technically had a mentor. We had a lunch every couple months, talked about stuff a bit but nothing much. And I mean this is going to sound a bit pompous but I'm above average. I had mostly A's in university, I finished every single project alone and then helped others. I was a TA. I corrected the professors when they made mistakes. I wrote a lot of code in my free time. I can't imagine what it must be like for one of my peers who honestly didn't know jack shit and still graduated somehow.
I'm working on an app right now, took over after two other guys worked on it for about a year. This app isn't even in prod yet and it's already legacy code. Complete mess, everything takes like 5 seconds to load, the frontend does a crapload of processing because the data is stored and transferred in entirely the wrong structure so basically they just send all the data and sort it out on the frontend.
I honestly think the fastest way to get this app working properly is to scrap the whole thing and start from scratch but we have a deadline in a couple months so I guess I'll see how it goes.
grishka · 1h ago
VKontakte has a very clever but at the same time cursed solution to this — the `execute` API method. It takes JS-like code that runs server-side. You can make up to 25 API calls and transform the data any way you please before returning it to yourself, all for the cost of one network request. Working with every other API after that feels like a massive regression.
devmor · 3h ago
This reminded me of a feature request I dealt with at an employer, while working on backoffice software for a support team. The software loaded a list of all current customers on the main index page - this was fine in the early days, but as the company grew, it ended up taking nearly a whole minute before the page was responsive. This sucked.
So I was tasked with fixing the issue. Instead of loading the whole list, I established a paginated endpoint and a search endpoint. The page now loaded in less than a second, and searches of customer data loaded in a couple seconds. The users hated it.
Their previous way of handling the work was to just keep the index of all customers open in a browser tab all day, Ctrl+F the page for an instant result, and open the link to the customer details in a new tab as needed. My upgrades made the index page load faster, but effectively made the users wait seconds every single time for a response that used to be instant, at the cost of one long wait per day.
There's a few different lessons to take from this about intent and design, user feedback, etc. but the one that really applies here is that sometimes it's just more friendly to let the user have all the data they need and allow them to interact with it "offline".
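A rough sketch of that "hand the users all the data and let them search it locally" approach (the endpoint name and Customer shape are made up): one slow load per session, instant filtering afterwards, and it keeps working if the connection drops.

    interface Customer { id: number; name: string; email: string }

    let customers: Customer[] = [];

    // One big load, like the old index page, but done once and kept in memory.
    async function prefetchCustomers(): Promise<void> {
      const res = await fetch("/api/customers/all");
      customers = (await res.json()) as Customer[];
    }

    // Instant, Ctrl+F-like filtering with no per-keystroke round trip.
    function searchCustomers(query: string): Customer[] {
      const q = query.toLowerCase();
      return customers.filter(
        c => c.name.toLowerCase().includes(q) || c.email.toLowerCase().includes(q)
      );
    }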
sfn42 · 2h ago
There's no reason you can't have the cake and eat it too. If google can index the entire web and have search results and AI results for you in an instant, then you can give users instant customer search for a mid sized corp. A search bar that actually worked fast would have done the same as their Ctrl f workflow.
Of course if the system is a total mess then it might have been a lot of work, but what you describe is really more of a skill issue than a technical limitation.
baby_souffle · 7h ago
I wish more developers bothered to test on flaky connections. Absolutely infuriating when an app can't keep up with your muscle memory...
Sanzig · 7h ago
While many websites are bad in terms of large unoptimized payload sizes, they are even worse for latency sensitivity.
You can easily see this when using WiFi aboard a flight, where latency is around 600 msec at minimum (most airlines use geostationary satellites, NGSO for airline use isn't quite there yet). There is so much stuff that happens serially in back-and-forth client-server communication in modern web apps. The developer sitting in SF with a sub-10 ms latency to their development instance on AWS doesn't notice this, but it's sure as heck noticeable when the round trip is 60x that. Obviously, some exchanges have to be serial, but there is a lot of room for optimization and batching that just gets left on the floor.
It's really useful to use some sort of network emulation tool like tc-netem as part of basic usability testing. Establish a few baseline cases (slow link, high packet loss, high latency, etc) and see how usable your service is. Fixing it so it's better in these cases will make it better for everyone else too.
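One concrete piece of that low-hanging fruit: independent requests issued serially pay the round-trip time once per request, while issuing them together pays it roughly once. A hedged sketch with hypothetical endpoints:

    // Serial: ~3 round trips before anything renders (~1.8 s at 600 ms RTT).
    async function loadPageSerial() {
      const user = await (await fetch("/api/user")).json();
      const prefs = await (await fetch("/api/prefs")).json();
      const feed = await (await fetch("/api/feed")).json();
      console.log(user, prefs, feed);
    }

    // Parallel: ~1 round trip, since the requests don't depend on each other.
    async function loadPageParallel() {
      const [user, prefs, feed] = await Promise.all([
        fetch("/api/user").then(r => r.json()),
        fetch("/api/prefs").then(r => r.json()),
        fetch("/api/feed").then(r => r.json()),
      ]);
      console.log(user, prefs, feed);
    }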
catwhatcat · 7h ago
NB modern browsers have a "throttling" dropdown/selector built into the dev tools (under 'network'), similar to tc-netem.
HPsquared · 7h ago
Someone needs to package a browser bundled with a variable latency network layer. Maybe a VM?
odo1242 · 6h ago
Chrome and Firefox and Safari let you add latency in developer tools
HPsquared · 5h ago
Oh right, that's pretty cool. I thought the throttling was only bandwidth.
immibis · 4h ago
You can also just live in New Zealand, where your minimum ping time to anywhere relevant is 200-300ms.
o11c · 5h ago
This fails to address the main concern I run into in practice: can you recover if some resources timed out while downloading?
This often fails in all sorts of ways:
* The client treats timeout as end-of-file, and thinks the resource is complete even though it isn't. This can be very difficult for the user to fix, except as a side-effect of other breakages.
* The client correctly detects the truncation, but either it or the server are incapable of range-based downloads and try to download the whole thing from scratch, which is likely to eventually fail again unless you're really lucky.
* Various problems with automatic refreshing.
* The client's only (working) option is "full page refresh", and that re-fetches all resources including those that should have been cached.
* There's some kind of evil proxy returning completely bogus content. Thankfully less common on the client end in a modern HTTPS world, but there are several ways this can still happen in various contexts.
1970-01-01 · 4h ago
wget -c https://zigzag.com/file1.zip
Note that -c only works with FTP servers and with HTTP servers that support the "Range" header.
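The same resume-from-where-you-left-off idea in client code might look roughly like this (a sketch; assumes the server honors Range and answers 206 Partial Content):

    // Ask only for the bytes we don't have yet; fall back to a full download
    // if the server ignores the Range header.
    async function resumeDownload(url: string, already: Uint8Array): Promise<Uint8Array> {
      const res = await fetch(url, { headers: { Range: `bytes=${already.length}-` } });

      if (res.status === 206) {
        const rest = new Uint8Array(await res.arrayBuffer());
        const full = new Uint8Array(already.length + rest.length);
        full.set(already);
        full.set(rest, already.length);
        return full;
      }
      if (res.status === 200) {
        // No Range support: the whole file again, as o11c describes above.
        return new Uint8Array(await res.arrayBuffer());
      }
      throw new Error(`unexpected status ${res.status}`);
    }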
sfn42 · 2h ago
Just don't send big data. Send what you need in order to display the page, for most use cases that is really not much data. There's way too many web apps sending huge amounts of data and using a small fraction of it.
potatolicious · 8h ago
A good point. The author does briefly address the point of mobile internet but I think it deserves a lot more real estate in any analysis like this. A few more points worth adding:
- Depending on your product or use case, somewhere between a majority and a vast majority of your users will be using your product from a mobile device. Throughput and latency can be extremely high, but also highly variable over time. You might be able to squeeze 30Mbps and 200ms pings for one request and then face 2Mbps and 4000ms pings seconds later.
- WiFi generally sucks for most people. The fact that they have a 100Mbps/20Mbps terrestrial link doesn't mean squat if they're eking out 3Mbps with eye-watering packet loss because they're in their attic office. The vast majority of your users are using wireless links (WiFi or cell) and are not in any way hardlined to the internet.
aidenn0 · 8h ago
I don't use an iPhone, but my wife does. She says that it will remove apps from the device that you haven't used in a while, and then automatically re-download when you try to run them. On our WiFi at home, that's fine, but if we are out and about it can take up to an hour to download a single app.
jurip · 7h ago
You can disable that (Settings → Apps → App Store → Offload Unused Apps.)
It's a nice feature, but it would be even nicer if you could pin some apps to prevent their offloading even if you haven't used them in ages.
joshstrange · 7h ago
> but it would be even nicer if you could pin some apps to prevent their offloading even if you haven't used them in ages.
That change would make it _viable_ for me at all; right now it's next to useless.
Currently iOS will offload apps that provide widgets (like Widgetsmith) even when I have multiple Widgetsmith widgets on my 1st and 2nd homescreens, I just never open the app (I don't need to, the widgets are all I use). One day the widgets will just be black and clicking on them does nothing. I have to search for Widgetsmith and then make the phone re-download it. So annoying.
Also annoying: you can get push notifications from offloaded apps. Tapping on the notification does _nothing_: no alert, no re-download, just nothing. Again, you have to connect the dots and redownload it yourself.
This "feature" is very badly implemented. If they just allowed me to pin things and added some better UX (and logic for the widget issue) it would be much better.
jurip · 7h ago
Yeah. We have a 112 app in Finland, for making emergency calls and relaying your location. Maybe it's been made at least partially unnecessary by phone network features, but anyway. It's one app I absolutely never ever use except when someday I'll be in an emergency and will want to use it and then it'll be offloaded.
pimlottc · 7h ago
Definitely, I had this problem on an old iPad where it would often decide to unload my password manager...
pimlottc · 7h ago
Note that this should only happen when you're running low on storage. [0] But yes, it can be very annoying.
I've also noticed that the marginal cost of larger storage on an iPhone is significantly higher than on Android (e.g. my phone was $220 with 256GB of storage; it's $100 per 128GB to upgrade the iPhone 16 storage), making people much more likely to be low on storage.
Dylan16807 · 24m ago
Last I checked the marginal cost of storage for Google, Samsung, and Apple phones sits around $600/TB. More for lower amounts, less for higher amounts.
I don't look much into phones that don't promise a reasonable support life, but if I go look at motorola all these midrange phones don't even have size options. At least some of them accept microsd.
bob1029 · 4h ago
If you really want to engineer web products for users at the edge of the abyss, the most robust experiences are going to be SSR pages that are delivered in a single response with all required assets inlined.
Client-side rendering with piecemeal API calls is definitely not the solution if you are having trouble getting packets from A to B. The more you spread the information across different requests, the more likely you are to lose packets, force arbitrary retries, and otherwise jank up the UI.
From the perspective of the server, you could install some request timing middleware to detect that a client is in a really bad situation and actually do something about it. Perhaps a compromise could be to have the happy path be a websocketed React experience that falls back to an ultralight, one-shot SSR experience if the session gets flagged as having a bad connection.
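A very rough sketch of that request-timing middleware in Express-style TypeScript (the threshold, the per-IP counter, and the fallback page are all invented, and time-to-finish is only a crude proxy for connection quality):

    import express, { Request, Response, NextFunction } from "express";

    const SLOW_THRESHOLD_MS = 3000;
    const slowCounts = new Map<string, number>(); // responses that took too long, per client IP

    function timingMiddleware(req: Request, res: Response, next: NextFunction): void {
      const start = Date.now();
      res.on("finish", () => {
        if (Date.now() - start > SLOW_THRESHOLD_MS) {
          const key = req.ip ?? "unknown";
          slowCounts.set(key, (slowCounts.get(key) ?? 0) + 1);
        }
      });
      next();
    }

    // Stubs for whatever the real renderers are.
    const renderFullPage = () => "<html><!-- full, websocketed experience --></html>";
    const renderLightweightPage = () => "<html><!-- one-shot SSR, everything inlined --></html>";

    const app = express();
    app.use(timingMiddleware);

    app.get("/", (req, res) => {
      // Clients that keep timing out get the ultralight one-shot SSR page.
      const degraded = (slowCounts.get(req.ip ?? "unknown") ?? 0) >= 3;
      res.send(degraded ? renderLightweightPage() : renderFullPage());
    });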
softfalcon · 4h ago
If you are dropping packets and losing data, why would it matter if you're making one request or several?
Even if I SSR and inline all the packages/content, that overall response could be broken up into multiple TCP packets that could also be dropped (missing parts in the middle of your overall response).
How does using SSR account for this?
I have to deal with this problem when designing TCP/UDP game networking during the streaming of world data. Streaming a bunch of data (~300 KB) is similar to one big SSR render and send. This is because standard TCP packets max out at ~64 KB.
Believing that one request maps to one packet is a frequent "gotcha" I have to point out to new network devs.
sfn42 · 2h ago
The point is you just need to finish the request and you're done, the page is working.
If there's 15 different components sending 25 different requests to different endpoints, some of which are triggered by activities like scrolling etc, then the user needs a consistent connection to have a good experience.
Packet loss in TCP doesn't fail the whole request. It just means some packets need to be resent which takes more time.
the8472 · 4h ago
> all required assets inlined
FSVO required. Images beyond a few bytes, for example, shouldn't be inlined, since loading them would block the meat of the content after them.
Workaccount2 · 8h ago
This gets down to a fundamental problem that crops up everywhere: How much is x willing to exponentially sacrifice to satisfy the long tail of y?
It's grounds for endless debate because it's inherently a fuzzy answer, and everyone has their own limits. However the outcome naturally becomes an amalgamation of everyone's response. So perhaps a post like this leads to a few more slim websites.
reaperducer · 6h ago
How much is x willing to exponentially sacrifice to satisfy the long tail of y?
Part of the problem is the acceptance of the term "long tail" as normal. It is not. It is a method of marginalizing people.
These are not numbers, these are people. Just because someone is on an older phone or a slower connection does not make them any less of a human being than someone on a new phone with the latest, fastest connection.
You either serve, or you don't. If your business model requires you to ignore 20% of potential customers because they're not on the latest tech, then your business model is broken and you shouldn't be in business.
The whole reason companies are allowed to incorporate is to give them certain legal and financial benefits in exchange for providing benefits (economic and other) to society. If your company can't hold up its end of the bargain, then please do go out of business.
Workaccount2 · 1h ago
The vagaries of your post highlight the very problem I am addressing. What is an "older phone"? What is a "slower connection"? The bottom 20%? But the post at hand is talking about the bottom 3%, so where is the actual line? When you are in the hot chair, your hand in play, all those things need to be well-defined lines.
If I squeeze you to be more precise, it becomes uncomfortable and untenable, as no matter what you are either marginalizing people or marginalizing yourself, your company, or everyone else. It's something where it is extremely easy to have moral high ground when you have zero stake yourself, but anyone who understands the nuance of the problem can see right through it.
ryandrake · 5h ago
> You either serve, or you don't. If your business model requires you to ignore 20% of potential customers because they're not on the latest tech, then your business model is broken and you shouldn't be in business.
Or, at least the business needs to recognize that their ending support for Y is literally cutting off potential customers, and affirmatively decide that's good for their business. Ask your company's sales team if they'd be willing to answer 10% of their inbound sales calls with "fuck off, customer" and hang up. I don't think any of them would! But these very same companies think nothing of ending support for 'old' phones or supporting only Chrome browser, or programming for a single OS platform, which is effectively doing the same thing: Telling potential customers to fuck off.
sealeck · 5h ago
There's obviously some trade-off here: should your business support fax machines? Work on a Nokia brick? These are clearly impractical. But slow internet connection speeds are a thing that everyone deals with, and therefore it makes sense for your software to be able to handle them.
SoftTalker · 5h ago
Nonsense. There are all kinds of businesses that target specific customer segments, and will even flat out refuse to do business with some others.
demosthanos · 6h ago
> This shows pretty much what I'd expect: coverage is fine in and around cities and less great in rural areas. (The Dakotas are an interesting exception; there's a co-op up there that connected a ton of folks with gigabit fiber. Pretty cool!)
Just a warning about the screenshot he's referencing here: the slice of map that he shows is of the western half of the US, which includes a lot of BLM land and other federal property where literally no one lives [0], which makes the map look a lot sparser in rural areas than it is in practice for humans on the ground. If you look instead at the Midwest on this map you'll see pretty decent coverage even in most rural areas.
The weakest coverage for actually-inhabited rural areas seems to be the South and Appalachia.
Article skips consideration for shared wifi such as cafes where, IME, a lot of students do their work. Consumer wifi routers might have a cap of ~24 clients, and kind of rotate which clients they're serving, so not only is your 100Mbit link carved up, but you periodically get kicked off and have to renew your connection. I cringe when I see people trying to use slack or office365 in this environment.
Grateful for the blog w/ nice data tho TY
hamandcheese · 7h ago
I have never experienced this. Then again, I'm not sure if the cafes I frequent have 24+ people connected to wifi at a time.
jonah-archive · 7h ago
A few years ago I was at a cafe in a venue that had a large conference/gathering, and their router was handing out 24 hour DHCP leases and ran out of IP addresses. It was a fairly technical group so me and a couple other people set up a table with RFC 2322-style pieces of paper with IP/gateway info ("please return when finished") and it worked surprisingly well!
myself248 · 7h ago
Did you dub yourselves the Impromptu Assigned Numbers Authority?
lukeschlather · 44m ago
> However, it's also worth keeping in mind that this is a map of commercial availability, not market penetration. Hypothetically, you could get the average speed of a US residential internet connection, but the FCC doesn't make such a statistic available.
It's actually worse than this. Companies will claim they offer gigabit within a zip code if there's a single gigabit connection, but they will not actually offer gigabit lines at any other addresses in the zip code.
cjs_ac · 7h ago
The first computer I ever used had a 56k modem, but I can empathise with greybeard stories about watching text appear one character at a time from 300 baud modems because of the task-tracking software my employer uses. I load it up in a browser tab in the morning, and watch as the various tasks appear one at a time. It's an impediment to productivity.
The rule I've come up with is one user action, one request, one response. By 'one response', I mean one HTTP response containing DOM data; if that response triggers further requests for CSS, images, fonts, or whatever, that's fine, but all the modifications to the DOM need to be in that first request.
hobs · 6h ago
That's a good one, I recently witnessed a bulleted list being loaded one item/ajax request at a time... that was then awaited so that all of them had to load sequentially.
An amazing thing.
jmajeremy · 7h ago
I'm a minimalist in this regard, and I really believe that a website should only be as complex as it needs to be. If your website requires fast Internet because it's providing some really amazing service that takes advantage of those speeds, then go for it. If it's just a site to provide basic information but it loads a bunch of high-res images and videos and lengthy javascript/css files, then you should consider trimming the fat and making it smaller.
Personally I always test my website on a variety of devices, including an old PC running Windows XP, a Mac from 2011 running High Sierra, an Android phone from 2016, and a Linux machine using Lynx text browser, and I test loading the site on a connection throttled to 128kbps. It doesn't have to run perfectly on all these devices, but my criterion is that it's at least usable.
RajT88 · 6h ago
I mean. I prefer my news sites plaintext. I think most video calls should be audio calls, and most audio calls could have been emails.
I lived happily on dialup when I was a teenager, with just one major use case for more bandwidth.
keysdev · 5h ago
Back in 2013 I was in a situation where we only had EDGE internet for half a year, for 10 people. Ever since then I promote text web pages. Not everyone has fast Internet.
simonw · 7h ago
Any time I'm on a road trip or traveling outside of major cities it becomes very obvious that a lot of developers don't consider slower network connections at all.
The other issue that's under-considered is lower spec devices. Way more people use cheap Android phones than fancy last-five-years iPhones. Are you testing on those more common devices?
mlhpdx · 7h ago
> you should not assume that it's better than around 25Mbps down and 3Mbps up
This is spot on for me. I live in a low-density community that got telcom early and the infrastructure has yet to be upgraded. So, despite being a relatively wealthy area, we suffer from poor service and have to choose between flaky high latency high bandwidth (Starlink) and flaky low latency low bandwidth (DSL). I’ve chosen the latter to this point. Point to point wireless isn’t an option because of the geography.
continuational · 6h ago
Here's a fun exercise: Put the front page of your favorite web framework though https://pagespeed.web.dev/
Yes. I select everything to work disconnected for long periods of time. I suspect we are in a temporary time of good connectivity. What we really have to look forward to is balkanisation, privacy threats from governments, geopolitical uncertainty and crazy people running our communications infra.
Seems sensible to take a small convenience hit now to mitigate those risks.
jebarker · 8h ago
Yes, for the same reason we should design for low end HW: it makes everyone’s experience better. I wish websites and apps treated phoning home as a last resort.
morleytj · 7h ago
This is a huge issue for me with a lot of sites. For whatever reason I've spent a lot of time in my life in areas with high latency or jist spotty internet service in general, and a lot of these modern sites with massive payload sizes and chained together dependencies (click this button to load this animation to display the next thing that you have to click to get the information you want) seriously struggle or outright break in those situations.
The ol reliable plain HTML stuff usually works great though, even when you have to wait a bit for it to load.
donatj · 8h ago
My parents live just 40 miles outside Minneapolis and use a very unreliable T-Mobile hotspot because the DSL available to them still tops out at a couple megabit. Their internet drops constantly and for completely unknown reasons.
I've been trying to convince them to try Starlink, but they're unwilling to pay for the $500+ equipment costs.
HeyLaughingBoy · 7h ago
I also live about 40 miles outside Minneapolis (SW). They should check if fiber is available. A few years ago the state apportioned a chunk of money to roll fiber out to rural communities and it's well underway at this time. I finally got hooked up a few months ago. We're in an unincorporated township, but it looks like the towns & villages got connected first.
One of my neighbors is apparently using Starlink since I see a Starlink router show up in my Wi-Fi scan.
amendegree · 7h ago
Tbf, it’s $500 in equipment + $50-100 in recurring costs, which I’m sure is much higher than what they’re paying now. If they don’t feel they need internet they probably don’t want to pay significantly more for it.
dghlsakjg · 7h ago
I don't know if this is true on your side of the border, but the equipment is free right now in Canada. Might be worth checking again.
edflsafoiewq · 7h ago
I have one of those too. The connection dropping out is the real crux of the matter I think. If it's merely slow you can just wait longer, but an intermittent connection requires qualitatively different design.
Many people have already said designing for iffy internet helps everyone: this is true for slimming your payload, but not necessarily for designing around dropped connections. On a plane or train, you might alternate between no internet and good internet, so you can just retry anything that failed when the connection is back, but a rural connection can be always spotty. And I think the calculus for devs isn't clearly positive when you have to design qualitatively new error handling pathways that many people will never use.
For example, cloning a git repo is non-resumable. Cloning a larger repo can be almost impossible since the probability the connection doesn't drop in the middle falls to zero. The sparse checkout feature has helped a lot here. Cargo also used to be very hard to use on rural internet until sparse registries.
ipdashc · 7h ago
It's an edge case, but I noticed that the first two sections focus on people's Internet access at home. But what about when on the move? Public Wi-Fi and hotspots both kinda suck. On those, there are some websites that work perfectly fine, and some that just... aren't usable at all.
almosthere · 6h ago
Design for:
* blind
* deaf
* reading impaired
* other languages/cultures
* slow/bad hardware/iffy internet
To me, at some point we need to get to an LCARS-like system - where we don't program bespoke UIs at all. Instead the APIs are available and the UI consumes them, knows what to show (with LLMs), and a React interface is JITted on the spot.
And the LLM will remember all the rules for blind/deaf/etc...
bigstrat2003 · 6h ago
There are a whole lot of applications for which it makes no sense to design for other cultures. Not everyone is building something for a business which is, or might be, doing business internationally after all.
Also I think until LLMs become reliable (which may be never), using them in the way you describe is a terrible idea. You don't want your UI to all of a sudden hallucinate something that screws it up.
almosthere · 4h ago
LLMs don't hallucinate THAT badly, and if you're doing many calls for small pieces, they rarely make those kinds of mistakes.
As far as emitting internationalized interfaces goes - yes, it absolutely makes sense to do it this way. If you're asking for an address and the customer is in the US, the LLM can easily whip up a form for that kind of address. If you're somewhere else, it can do that too. There's no reason for bespoke interfaces that never get the upgrade because someone made it overly complicated for some reason.
Back in the day, AOP was almost a big thing (for a small subset of programmers). Perhaps what was missing was having a generalized LLM that allowed for the concern to be injected. Forgot your ALT tag? LLM, Not internationalized? LLM, Non-complicated Lynx compatible view? LLM
gadders · 7h ago
A thousand times yes. I hate apps that need to spend 2 minutes or so deciding whether your internet is bad or not, even though they can function offline (Spotify, TomTom Go).
cwillu · 7h ago
> Strangely, they don't let you zoom out enough to grab a screenshot of the whole country so I'm going to look at the west. That'll get both urban and rural coverage, as well as several famously internet-y locations (San Francisco Bay Area, Seattle.)
Turns out the max zoom-out is based on the browser window's width: making the window narrower reproduces the issue, although ctrl-minus makes the whole continent visible again.
GuB-42 · 4h ago
The short answer is yes, and there are tools to help you. There are ways to simulate a poor network in the dev tools of major browsers, in the Android emulator, there is "Augmented Traffic Control" by Facebook, "Network Link Conditioner" by Apple and probably many others.
It is telling that tech giants make tools to test their software in poor networking conditions. It may not look like they care, until you try software by those who really don't care.
CM30 · 5h ago
It's also worth noting that poor quality internet connections can be depressingly common in countries other than the US too. For example, here in the UK, there are a surprising number of areas with no fibre internet available even in large cities. I remember seeing a fair few tech companies getting lumbered with mediocre broadband connections in central London for example.
So if your market is a global one, there's a chance even a fortune 500 company could struggle to load your product in their HQ because of their terrible internet connection. And I suspect it's probably even worse in some South American/African/Asian countries in the developing world...
esseph · 6h ago
Note:
The NTIA or FCC just released an updated map a few days ago (part of the BEAD overhaul) that shows the locations currently covered by existing unlicensed fixed wireless.
Quick Google search didn't find a link but I have it buried in one of my work slack channels. I'll come back with the map data if somebody else doesn't.
The state of broadband is way, way worse than people think in the US.
It’s not that we should design for iffy internet, it’s that we should design sites and apps that don’t make 1,000 XHR calls and load 50 MB of JavaScript to load ads that also load JavaScript that refreshes the page on purpose to trigger new ad bids to inflate viewership. (rant)
AnotherGoodName · 7h ago
>you should not assume that it's better than around 25Mbps down and 3Mbps up
It's hard to make a website that doesn't work reasonably well with that though. Even with all the messed up Javascript dependencies you might have.
I feel for those on older non-Starlink Satellite links. eg. islands in the pacific that still rely on Inmarsat geostationary links. 492 kbit/s maximum (lucky if you get that!), 3 second latency, pricing by the kb of data. Their lifestyle just doesn't use the internet much at all by necessity but at those speeds even when willing to pay the exorbitant cost sites will just timeout.
Starlink has been a revolution for these communities but it's still not everywhere yet.
purplezooey · 6h ago
This was table stakes not long ago. There seems to be an increase in apps/UIs blaming the network for what is clearly poor performance on the backend, as well.
grishka · 1h ago
What really grinds my gears is websites with news/articles that assume you have a stable, fast internet connection for the whole time you're reading the article, and so load images lazily to "save data".
Except I sometimes read articles on the subway and not all subway tunnels in my city have cell service. Or sometimes I read articles when I eat in some place that's located deep inside an old building with thick brick walls. Public wifi is also not guaranteed to be stable — I stayed in hotels where my room was too far from the AP so the speed was utter shit. Once, I loaded some Medium articles on my phone before boarding a plane, only to discover, after takeoff, that these articles don't make sense without images that didn't load.
Anyway. As a user, for these kinds of static pages, I expect the page to be fully loaded as soon as my browser hides the progress bar. Dear web developers, please do your best to meet this expectation.
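For what it's worth, a page can opt its article images out of lazy loading fairly cheaply. A sketch (the data-src convention and the "article img" selector are assumptions about how the page is built):

    // Fetch every article image up front, while the connection is still there,
    // instead of waiting for the reader to scroll to it in a tunnel.
    function preloadArticleImages(): void {
      const images = document.querySelectorAll<HTMLImageElement>("article img");
      images.forEach(img => {
        img.loading = "eager";                   // undo loading="lazy"
        const src = img.dataset.src ?? img.src;  // data-src: a common lazy-load convention
        if (src) img.src = src;                  // force the request now
      });
    }

    document.addEventListener("DOMContentLoaded", preloadArticleImages);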
amelius · 6h ago
Internet providers: Maybe we should provide faster internet for our rural users.
Programmers: Let's design for crappy internet
Internet providers: Maybe it's not necessary
b0a04gl · 7h ago
been quietly rolling out beacon-based navigation inside metro stations in bengaluru. this post is about the pilot at vidhana soudha {https://www.linkedin.com/posts/shruthi-kshirasagar-622274121...}. i had a role to contribute in the early scoping and feedback loop. no flashy tech, just careful placement, calibration, and signal mapping. real work is in making this reliable across peak hours, metal obstructions, dead zones. location precision is tricky underground, bluetooth’s behavior shifts with crowd density. glad to see this inching forward. bmrc seems serious about bringing commuter-first features to public infra
sn9 · 5h ago
Reminds me of this old Dan Luu blog post: "How web bloat impacts users with slow connections" [0].
> What if that person is on a slow link? If you've never had bad internet access, maybe think of this as plane wifi
Loads of people are on "a slow link" or iffy internet who would otherwise have a fast connection. Like... plane wifi! Or driving through less populated areas (or the UK outside of London) with spotty phone reception.
dghlsakjg · 7h ago
For the love of god, yes, design as if all of your users are going to be on a 1mbps connection that drops out for 5s every minute, because at some point, a lot of them (most of them, I would wager) will be using that connection. Often it is when you are on those connections that it is most important that your software work.
The article looks at broadband penetration in the US. Which is useful, but you need to plan for worst-case scenarios, not statistically likely cases.
I have blazing fast internet at home, and that isn't helpful for the AAA app when I need to get roadside assistance.
I want the nytimes app to sync data for offline reading, locally caching literally all of the text from this week should be happening.
genewitch · 7h ago
I live in the US and use starlink - it's all i can get in my location, these days.
Ping statistics for <an IP in our DC>:
Packets: Sent = 98585, Received = 96686, Lost = 1899 (1% loss),
Approximate round trip times in milli-seconds:
Minimum = 43ms, Maximum = 3197ms, Average = 58ms
it's almost exactly 5s per 60s of loss^. has been since i got it. for "important" live stuff i have to switch to my cellphone, in a specific part of my house. otherwise the fact that most things are usable on "mobile" means my experience isn't "the worst" - but it does suck. I haven't played a multiplayer game with my friends in a year and a half - since at&t shut off fixed wireless to our area.
oh well, 250mbit is almost worth it.
^: when i say this, i wasn't averaging, it drops from 0:54-0:59, in essence, "5 seconds of every minute"
SpaceNoodled · 7h ago
One millibit per second might be a bit excessive. Surely we can expect more than just one bit every seventeen minutes.
ripe · 7h ago
Ha!
Your point reminded me of the NASA Mars rover deployed in 2021 with the little Ingenuity helicopter on board.
The helicopter had a bug that required a software update, which NASA had to upload over three network legs: the Deep Space Network to Mars, a UHF leg from Mars-orbiting robotic vehicles to the rover, and a ZigBee connection from the rover to the Ingenuity helicopter. A single message could take between 5 and 20 minutes to arrive...
It's crazy to me that almost all new web projects start with two assumptions:
- Mobile first design
- Near unlimited high speed bandwidth
There's never been a case where both are blanket true.
__MatrixMan__ · 8h ago
This is about high speed internet accessibility under normal circumstances. It seems like good analysis as far as it goes, but the bigger reason to design for iffy internet has to do with being able to rely on technology even after something bad happens to the internet.
nmstoker · 1h ago
Precisely! Like when the person leaves their house. How is that not the obvious focal point? It's like they found great data on in-home internet and thought they'd skip the part that sadly many mobile app developers skip: most people don't stay at home, and it's when they leave that unexpected outages crop up.
wat10000 · 5h ago
So much software guidance can be subsumed by a simple rule:
Use the software that you make, in the same conditions that your users will use it in.
Most mobile apps are developed by people in offices with massive connections, or home offices with symmetric gigabit fiber or similar. The developers make sure the stuff works and then they're on to the next thing. The first time someone tries to use it on a spotty cellular connection is probably when the first user installs the update.
You don't have to work on a connection like that all the time, but you need to experience your app on that sort of connection, on a regular basis, if you care about your users' experience.
Of course, it's that last part that's the critical missing piece from most app development.
DamonHD · 3h ago
"eat your own dogfood" or "dogfooding"
jedberg · 3h ago
This doesn't show the whole picture. Yes, I have super reliable high speed internet in my house. But I do about 1/2 of my interneting on my mobile phone. And despite living in Silicon Valley with 5G, it's totally unreliable.
So yes, please assume that even your most adept power users will have crappy internet at least some of the time.
pier25 · 7h ago
How does the US infrastructure compare to the rest of the world?
slater · 5h ago
Showing my age here, but I remember working hard in the late 90s to get every image ultra-optimized before go-live. Impromptu meetings all "OK go from 83% to 82% on that JPG quality, OK that saves 10KB and it doesn't look like ass, ship it"
scumola · 8h ago
mosh is awesome for ssh over iffy connections
v5v3 · 7h ago
You can also run tmux on the remote server in detached mode, so disconnections are tolerated.
RugnirViking · 8h ago
at the very least consider it. It makes things better for everyone, highlights reflow messes as things load in etc
dfxm12 · 8h ago
Yes. Assume your users have a poor or metered connection. I don't want unnecessary things (like images) to load because it takes time, eats at my data quota and to be frank, I don't want people looking over my shoulder at media on my phone (especially when I have no idea what it is going to be). This is especially true for social media (and the reason I prefer HN over bluesky, reddit, etc.).
1970-01-01 · 4h ago
Yes, because Wi-Fi 7 and 5G still aren't anywhere near Ethernet in terms of packet loss.
New headline: Betteridge's rule finally defeated. Or is it?
RajT88 · 6h ago
Yes.
sneak · 8h ago
> Terrestrial because—well, have you ever tried to use a satellite connection for anything real? Latency is awful, and the systems tend to go down in bad weather.
This isn’t true anymore. Starlink changed the whole game. It’s fast and low latency now, and almost everyone on any other satellite service has switched en masse to Starlink because the previous satellite internet services were so bad.
tonyarkles · 7h ago
You're right most of the time. I have had a 99% "unnoticeable" experience with Starlink in rural Canada. I definitely still experienced rain fade during heavy rain storms and occasional clear-sky blips for 30s-2min or so. Vastly superior to anything else I've used for Internet access in remote areas for sure, but not perfect. I have also experienced longer-term (30 min or so) service degradations where the connection stayed up but the bandwidth dropped to ~10Mbit/1Mbit.
Most of the time no one would notice. For some applications it's definitely something that needs to get designed in.
gwbas1c · 7h ago
That's very similar to my experience with Comcast in downtown Palo Alto in 2010. A 99% "unnoticeable" experience, and then a multi-day outage caused by a massive storm.
I still occasionally get blips on Comcast, mostly late at night when I'm one of the few who notices.
gwbas1c · 7h ago
Yeah, I had to double-check the date when I read that. Cost aside, everything I've heard about Starlink puts them "on par" with cable. (IE, not exactly equivalent, but certainly in the same league.)
lo_zamoyski · 7h ago
We can avoid the problem simply by employing better design and a clear understanding of the intended audience.
There is no need or moral obligation for all of the internet to be accessible to everyone. If you're not a millionaire, you're not going to join a rich country club. If you don't have a background in physics, the latest research won't be accessible to you. If you don't have a decent video card, you won't be able to play many of the latest games. The idea that everything should be equally accessible to everyone is simply the wrong assumption. Inequality is not a bad thing per se.
However, good design principles involve an element of parsimony. Not minimalism, mind you, but a purposeful use of technology. So if the content you wish to show is best served by something resource intensive that excludes some or even most people from using it, but those that can access it are the intended audience, then that's fine. But if you're just jamming a crapton of worthless gimmickry into your website that doesn't serve the purpose of the website, and on top of that, it prevents your target audience from using it, then that's just bad design.
Begin with purpose and a clear view of your intended audience and most of this problem will go away. We already do that by making websites that work with both mobile and desktop browsers. You don't necessarily need to make resource heaviness a first-order concern. It's already entailed by audience and informed by the needs of the presentation.
jekwoooooe · 5h ago
Something that is missing is… who cares? If you have bad internet why assume the product or page is for you?
Oh and all my lossless got shit on.
Fuck me I guess??
I couldn't be bothered to spend time manually selecting stuff to download back then. It was offensive to even be asked to spend 30 minutes manually correcting a completely unnecessary mistake on their part. And this was during a really, really bad time in interface design, with the flat UI idiocy all the rage, and when people were abandoning all UI standards that gave any affordances at all.
If I'm going to go and correct Apple's mistake, I may as well switch to another vendor and do it. Which is what I did. I'm now on Spotify to this day, even though it has many of the same problems as Apple Music. At least Spotify had fewer bugs at the time, and they hadn't deleted music off my device.
Good riddance and I'll never go back to Apple Music.
Ideally, apps shouldn't detect if you have internet and then act differently. They should pull up your cached/offline data immediately and then update/sync as attempted connections return results.
The model where you have offline data but you can't even see your playlists because it wants to load them because it thinks you have internet is maddening.
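A minimal sketch of that cache-first pattern, in Python; the endpoint URL, cache file, and data shape are placeholders rather than any real API:

```python
# Minimal sketch of a cache-first load: render whatever is cached immediately,
# then try the network and refresh the cache only if the request succeeds.
# The endpoint URL and cache file are placeholders, not any real API.
import json
import pathlib
import requests

CACHE = pathlib.Path("playlists_cache.json")
ENDPOINT = "https://example.com/api/playlists"  # hypothetical endpoint

def render(playlists, stale):
    marker = " (offline copy)" if stale else ""
    for p in playlists:
        print(p["name"] + marker)

def load_playlists():
    # 1. Show cached data right away, even with zero connectivity.
    if CACHE.exists():
        render(json.loads(CACHE.read_text()), stale=True)

    # 2. Then attempt the network; on success, update the cache and re-render.
    try:
        resp = requests.get(ENDPOINT, timeout=5)
        resp.raise_for_status()
        fresh = resp.json()
        CACHE.write_text(json.dumps(fresh))
        render(fresh, stale=False)
    except requests.RequestException:
        pass  # keep showing the cached view and retry on the next sync

if __name__ == "__main__":
    load_playlists()
```

The point is simply that the network failure path ends with the cached view still on screen, never a spinner blocking data the device already has.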
I can (and do) find things around the house that don't depend on a screen, but it's annoying to know that I don't really have much of a backup way to access the internet if the power is out for an extended period of time. (Short of plunking down for an inverter generator or UPS I suppose.)
Or you could use a Raspberry Pi or similar and a USB WiFi adapter (make sure it supports AP mode) and a battery bank, for an "emergency" battery-operated WiFi router that you'd only use during power outages.
EDIT: Unless your ISP's CPE (modem/whatever) runs on 5 volts, you'd need more than just a USB power bank to keep things going. Maybe a cheap Amazon boost converter could provide the extra voltage.
I run my router + my RPi server off-grid with ~1kWh of usable (lead-acid) battery capacity.
So with those and my laptop's battery, I sailed into our last couple of minor daytime power cuts without even noticing. Sounds of commotion from neighbours alerted me that something was up!
If I have a podcast already downloaded, but I am on an iffy connection, Spotify will block me from getting to that podcast view while it tries to load the podcast view from the web instead of using downloaded data.
I frequently put my phone in airplane mode to force spotify into offline mode to get content to play.
For such things as streaming audio/video, the codec and other factors have to be considered as well. If the data can be encoded in real time, or if multiple quality levels are already available on the server, then a lower-quality file can be offered to clients that request one. The client can download the file for later use and may be able to resume the download later, if needed.
There is also the question of whether you need a video call (or whatever else) at all. Sometimes you can do without it, or it can be an optional possibility.
There is also the matter of avoiding dependence on specific computers, and not only for internet access, although that is a part of it too. This does not mean that computers and the internet cannot be helpful. They can be helpful, but they should not be relied on so heavily.
The Gemini protocol does not have anything like the Range request and Content-length header, and I thought this was not good enough so I made one that does have these things. (HTTP allows multiple ranges per request, but I thought that is more complicated than it needs to be, and it is simpler to only allow one range per request.)
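For illustration, here is a rough Python sketch of the "one byte range per request" idea over plain HTTP (not the commenter's protocol); the filename is a placeholder and real servers handle many more edge cases:

```python
# Rough sketch of serving a single byte range, using only the standard library.
# Illustration only; production servers handle invalid ranges, HEAD requests,
# caching headers, and much more.
import os
import re
from http.server import BaseHTTPRequestHandler, HTTPServer

FILE = "audio.mp3"  # placeholder file to serve

class SingleRangeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        size = os.path.getsize(FILE)
        m = re.match(r"bytes=(\d+)-(\d*)$", self.headers.get("Range", ""))
        start = int(m.group(1)) if m else 0
        end = int(m.group(2)) if (m and m.group(2)) else size - 1
        end = min(end, size - 1)

        with open(FILE, "rb") as f:
            f.seek(start)
            body = f.read(end - start + 1)

        self.send_response(206 if m else 200)  # 206 = Partial Content
        self.send_header("Content-Length", str(len(body)))
        if m:
            self.send_header("Content-Range", f"bytes {start}-{end}/{size}")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), SingleRangeHandler).serve_forever()
```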
Turns out, it's really tough to do accurately. The main reason is that the public datasets are a mess. For example, the internet availability data is in neat hexagons, while the census demographic data is in weird, irregular shapes that don't line up. Trying to merge them is a nightmare and you lose a ton of detail.
So our main takeaway, rather than just being a pretty map, was that our public data is too broken to even see the problem clearly.
I wrote up our experience here if anyone's curious: https://zeinh.ca/projects/mapping-digital-divide/
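For readers curious what that merge looks like in practice, a hedged sketch using geopandas; the file paths and column names (download_mbps, population, GEOID) are made up, and an equal-area projection is assumed so the area weighting is meaningful:

```python
# Sketch: area-weighted overlay of hexagon coverage data onto census shapes.
# File paths and column names are hypothetical placeholders.
import geopandas as gpd

hexes = gpd.read_file("coverage_hexagons.geojson")   # hypothetical input
tracts = gpd.read_file("census_tracts.geojson")      # hypothetical input

# Reproject both layers to an equal-area CRS (CONUS Albers) so areas are meaningful.
hexes = hexes.to_crs(epsg=5070)
tracts = tracts.to_crs(epsg=5070)

# Intersect the layers; each resulting piece carries attributes from both.
pieces = gpd.overlay(hexes, tracts[["GEOID", "population", "geometry"]],
                     how="intersection")

# Weight each hexagon's reported speed by how much of the tract it covers.
pieces["w"] = pieces.geometry.area
est = (pieces.assign(wx=pieces["download_mbps"] * pieces["w"])
             .groupby("GEOID")[["wx", "w"]].sum())
est["weighted_mbps"] = est["wx"] / est["w"]
print(est["weighted_mbps"].head())
```

Even this simple overlay loses detail at the boundaries, which is part of the point the parent makes about the datasets not lining up.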
I think in so many fields the datasets are by far the highest impact thing someone can work on, even if it seems a bit mundane and boring. Basically every field I've worked in struggles for want of reliable, well-maintained, open-access data, and when they do get it, it usually sets off a massive amount of related work. (I've seen this happen in genetics, and in ML of course once we got ImageNet and also started getting social media text instead of just old newspaper corpora.)
That would definitely be advice I'd give to many people searching for a project in a field -- high quality data is the bedrock infrastructure for basically all projects in academic and corporate research, so if you provide the data, you will have a major impact, pretty much guaranteed.
So anyways, I bring this up with my local government in Chicago and they recommend that I switch to AT&T Fiber because it's listed as available at my address in the FCC's database. Well, I would love to do that except that
1. The FCC's database was wrong and rejected my corrections multiple times before AT&T finally ran fiber to my building this year (only 7 years after they claimed that it was available in the database despite refusing to connect to the building whenever we tried).
2. Now that it is in the building, their Fiber ISP service can't figure out that my address exists, even though it has existing copper telephone lines run to it by AT&T themselves, so their system cannot sell me the service. I've been arguing with them for 3 months on this and have even sent them pictures of their own demarc and the existing copper lines to my unit.
3. Even if they fixed the 1st issue, they coded my address as being on a different street than its mailing address and can't figure out how to sell me a consumer internet plan with this mismatch. They could sell me a business internet plan at 5x the price though.
And that's just my personal issues. And I haven't even touched on how not every cell phone is equally reliable, how the switch to 5G has made many cell phones less reliable compared to 3G and 4G networks, how some people live next to live event venues where they can have great mobile connections 70% of the time but the other 30% of the time it becomes borderline unusable, etc.
https://medium.com/spin-vt/impact-of-unlicensed-fixed-wirele...
It's really eye-opening to set up something like toxiproxy, configure bandwidth limitations, latency variability, and packet loss in it, and run your app, or your site, or your API endpoints over it. You notice all kinds of UI freezing, lack of placeholders, gratuitously large images, lack of / inadequate configuration of retries, etc.
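A rough sketch of driving toxiproxy from Python over its HTTP API (default port 8474); the proxy name, ports, and toxic values are examples, and the field names should be checked against the toxiproxy version you run:

```python
# Sketch: shaping a local proxy with toxiproxy's HTTP API (default port 8474).
# The proxy name, ports, and attribute values below are examples; verify the
# field names against your toxiproxy version's documentation.
import requests

API = "http://127.0.0.1:8474"

# Route 127.0.0.1:8001 -> 127.0.0.1:8000 through toxiproxy.
requests.post(f"{API}/proxies", json={
    "name": "webapp",
    "listen": "127.0.0.1:8001",
    "upstream": "127.0.0.1:8000",
    "enabled": True,
}).raise_for_status()

# Add ~800ms +/- 300ms of latency and ~100 KB/s of bandwidth on responses.
for toxic in [
    {"name": "lag", "type": "latency", "stream": "downstream",
     "toxicity": 1.0, "attributes": {"latency": 800, "jitter": 300}},
    {"name": "slow", "type": "bandwidth", "stream": "downstream",
     "toxicity": 1.0, "attributes": {"rate": 100}},
]:
    requests.post(f"{API}/proxies/webapp/toxics", json=toxic).raise_for_status()

print("Point your app at 127.0.0.1:8001 and browse around.")
```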
So I got a degree and a dev job. Apprenticeship? Nah dude, here's a big legacy app for you, have fun. Mentorship? Okay, I technically had a mentor. We had a lunch every couple months, talked about stuff a bit, but nothing much. And I mean this is going to sound a bit pompous, but I'm above average. I had mostly A's in university, I finished every single project alone and then helped others. I was a TA. I corrected the professors when they made mistakes. I wrote a lot of code in my free time. I can't imagine what it must be like for one of my peers who honestly didn't know jack shit and still graduated somehow.
I'm working on an app right now, took over after two other guys worked on it for about a year. This app isn't even in prod yet and it's already legacy code. Complete mess, everything takes like 5 seconds to load, the frontend does a crapload of processing because the data is stored and transferred in entirely the wrong structure so basically they just send all the data and sort it out on the frontend.
I honestly think the fastest way to get this app working properly is to scrap the whole thing and start from scratch but we have a deadline in a couple months so I guess I'll see how it goes.
So I was tasked with fixing the issue. Instead of loading the whole list, I established a paginated endpoint and a search endpoint. The page now loaded in less than a second, and searches of customer data loaded in a couple seconds. The users hated it.
Their previous way of handling the work was to just keep the index of all customers open in a browser tab all day, Ctrl+F the page for an instant result and open the link to the customer details in a new tab as needed. My upgrades made the index page load faster, but effectively made the users wait seconds every single time for a response that used to be instant at the cost of a one time per day long wait.
There's a few different lessons to take from this about intent and design, user feedback, etc. but the one that really applies here is that sometimes it's just more friendly to let the user have all the data they need and allow them to interact with it "offline".
Of course if the system is a total mess then it might have been a lot of work, but what you describe is really more of a skill issue than a technical limitation.
You can easily see this when using WiFi aboard a flight, where latency is around 600 msec at minimum (most airlines use geostationary satellites; NGSO for airline use isn't quite there yet). There is so much stuff that happens serially in back-and-forth client-server communication in modern web apps. The developer sitting in SF with a sub-10 ms latency to their development instance on AWS doesn't notice this, but it's sure as heck noticeable when the round trip is 60x that. Obviously, some exchanges have to be serial, but there is a lot of room for optimization and batching that just gets left on the floor.
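A toy simulation of that effect: ten dependent round trips at a 600 ms RTT versus the same ten issued concurrently. Each "request" is just a sleep, so the numbers are illustrative only:

```python
# Toy simulation: N serial round trips vs. batching them, at satellite latency.
import asyncio
import time

RTT = 0.6  # ~600 ms, roughly geostationary-satellite latency

async def request(name):
    await asyncio.sleep(RTT)   # stand-in for one client-server round trip
    return name

async def serial(names):
    return [await request(n) for n in names]                   # one after another

async def batched(names):
    return await asyncio.gather(*(request(n) for n in names))  # all at once

async def main():
    names = [f"resource-{i}" for i in range(10)]
    for label, fn in [("serial", serial), ("batched", batched)]:
        t0 = time.perf_counter()
        await fn(names)
        print(f"{label}: {time.perf_counter() - t0:.1f}s")  # ~6.0s vs ~0.6s

asyncio.run(main())
```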
It's really useful to use some sort of network emulation tool like tc-netem as part of basic usability testing. Establish a few baseline cases (slow link, high packet loss, high latency, etc) and see how usable your service is. Fixing it so it's better in these cases will make it better for everyone else too.
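One possible way to script that, assuming Linux, root access, and that "eth0" is the interface you want to degrade (adjust to taste):

```python
# Sketch: applying/removing a lossy, slow, high-latency profile with tc-netem.
# Linux only, needs root; "eth0" is a placeholder for your real interface.
import subprocess

IFACE = "eth0"
PROFILE = ["delay", "300ms", "50ms", "loss", "2%", "rate", "2mbit"]

def degrade():
    subprocess.run(["tc", "qdisc", "add", "dev", IFACE, "root", "netem", *PROFILE],
                   check=True)

def restore():
    subprocess.run(["tc", "qdisc", "del", "dev", IFACE, "root", "netem"],
                   check=True)

if __name__ == "__main__":
    degrade()
    input("Network degraded; exercise your app, then press Enter to restore...")
    restore()
```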
This often fails in all sorts of ways:
* The client treats timeout as end-of-file, and thinks the resource is complete even though it isn't. This can be very difficult for the user to fix, except as a side-effect of other breakages.
* The client correctly detects the truncation, but either it or the server are incapable of range-based downloads and try to download the whole thing from scratch, which is likely to eventually fail again unless you're really lucky.
* Various problems with automatic refreshing.
* The client's only (working) option is "full page refresh", and that re-fetches all resources including those that should have been cached.
* There's some kind of evil proxy returning completely bogus content. Thankfully less common on the client end in a modern HTTPS world, but there are several ways this can still happen in various contexts.
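A sketch of a client that avoids the first two failure modes: it compares bytes on disk against Content-Length instead of trusting end-of-stream, and resumes with a single Range request. The URL, filename, and server behavior (honoring Range, reporting Content-Length) are assumptions:

```python
# Sketch: treat a short read as truncation (not EOF) and resume with Range.
import os
import requests

URL = "https://example.com/big-download.bin"   # hypothetical
DEST = "big-download.bin"

def download(url, dest, attempts=10):
    total = int(requests.head(url, timeout=10).headers["Content-Length"])
    for _ in range(attempts):
        have = os.path.getsize(dest) if os.path.exists(dest) else 0
        if have >= total:
            return  # verified complete; a short file is never mistaken for done
        headers = {"Range": f"bytes={have}-"} if have else {}
        try:
            with requests.get(url, headers=headers, stream=True, timeout=30) as r:
                r.raise_for_status()
                # 206 means the server honored the range; otherwise start over.
                mode = "ab" if r.status_code == 206 else "wb"
                with open(dest, mode) as f:
                    for chunk in r.iter_content(chunk_size=64 * 1024):
                        f.write(chunk)
        except requests.RequestException:
            continue  # connection dropped mid-transfer; loop and resume
    raise RuntimeError("could not finish download")

download(URL, DEST)
```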
- Depending on your product or use case, somewhere between a majority and a vast majority of your users will be using your product from a mobile device. Throughput can be high and latency low, but both are highly variable over time. You might be able to squeeze 30Mbps and 200ms pings for one request and then face 2Mbps and 4000ms pings seconds later.
- WiFi generally sucks for most people. The fact that they have a 100Mbps/20Mbps terrestrial link doesn't mean squat if they're eking out 3Mbps with eye-watering packet loss because they're in their attic office. The vast majority of your users are using wireless links (WiFi or cell) and are not in any way hardlined to the internet.
It's a nice feature, but it would be even nicer if you could pin some apps to prevent their offloading even if you haven't used them in ages.
That change would make it _viable_ for me at all; right now it's next to useless.
Currently iOS will offload apps that provide widgets (like Widgetsmith) even when I have multiple Widgetsmith widgets on my 1st and 2nd homescreens, I just never open the app (I don't need to, the widgets are all I use). One day the widgets will just be black and clicking on them does nothing. I have to search for Widgetsmith and then make the phone re-download it. So annoying.
Also annoying: you can get push notifications from offloaded apps, but tapping on the notification does _nothing_. No alert, no re-download, just nothing. Again, you have to connect the dots and redownload it yourself.
This "feature" is very badly implemented. If they just allowed me to pin things and added some better UX (and logic for the widget issue) it would be much better.
0: https://support.apple.com/guide/iphone/manage-storage-on-iph...
I don't look much into phones that don't promise a reasonable support life, but if I go look at Motorola, all these midrange phones don't even have storage size options. At least some of them accept microSD.
Client-side rendering with piecemeal API calls is definitely not the solution if you are having trouble getting packets from A to B. The more you spread the information across different requests, the more likely you are to lose packets, force arbitrary retries and otherwise jank up the UI.
From the perspective of the server, you could install some request timing middleware to detect that a client is in a really bad situation and actually do something about it. Perhaps a compromise could be to have the happy path as a websocketed react experience that falls back to a ultralight, one-shot SSR experience if the session gets flagged as having a bad connection.
Even if I SSR and inline all the packages/content, that overall response could be broken up into multiple TCP packets that could also be dropped (missing parts in the middle of your overall response).
How does using SSR account for this?
I have to deal with this problem when designing TCP/UDP game networking during the streaming of world data. Streaming a bunch of data (~300 KB) is similar to one big SSR render and send, because it has to be split up on the wire: an IP datagram tops out at ~64 KB, and in practice TCP segments are limited to the path MTU (roughly 1.5 KB on Ethernet).
Believing that one request maps to one packet is a frequent "gotcha" I have to point out to new network devs.
If there's 15 different components sending 25 different requests to different endpoints, some of which are triggered by activities like scrolling etc, then the user needs a consistent connection to have a good experience.
Packet loss in TCP doesn't fail the whole request. It just means some packets need to be resent which takes more time.
FSVO required. Images beyond a few bytes shouldn't be inlined for example since loading them would block the meat of the content after them.
It's grounds for endless debate because it's inherently a fuzzy answer, and everyone has their own limits. However the outcome naturally becomes an amalgamation of everyone's response. So perhaps a post like this leads to a few more slim websites.
Part of the problem is the acceptance of the term "long tail" as normal. It is not. It is a method of marginalizing people.
These are not numbers, these are people. Just because someone is on an older phone or a slower connection does not make them any less of a human being than someone on a new phone with the latest, fastest connection.
You either serve, or you don't. If your business model requires you to ignore 20% of potential customers because they're not on the latest tech, then your business model is broken and you shouldn't be in business.
The whole reason companies are allowed to incorporate is to give them certain legal and financial benefits in exchange for providing benefits (economic and other) to society. If your company can't hold up its end of the bargain, then please do go out of business.
If I squeeze you to be more precise, it becomes uncomfortable and untenable, as no matter what you are either marginalizing people or marginalizing yourself, your company, or everyone else. It's something where it is extremely easy to have moral high ground when you have zero stake yourself, but anyone who understands the nuance of the problem can see right through it.
Or, at least the business needs to recognize that their ending support for Y is literally cutting off potential customers, and affirmatively decide that's good for their business. Ask your company's sales team if they'd be willing to answer 10% of their inbound sales calls with "fuck off, customer" and hang up. I don't think any of them would! But these very same companies think nothing of ending support for 'old' phones or supporting only Chrome browser, or programming for a single OS platform, which is effectively doing the same thing: Telling potential customers to fuck off.
Just a warning about the screenshot he's referencing here: the slice of map that he shows is of the western half of the US, which includes a lot of BLM land and other federal property where literally no one lives [0], which makes the map look a lot sparser in rural areas than it is in practice for humans on the ground. If you look instead at the Midwest on this map you'll see pretty decent coverage even in most rural areas.
The weakest coverage for actually-inhabited rural areas seems to be the South and Appalachia.
[0] https://upload.wikimedia.org/wikipedia/commons/0/0f/US_feder...
Grateful for the blog w/ nice data tho TY
It's actually worse than this. Companies will claim they offer gigabit within a zip code if there's a single gigabit connection, but they will not actually offer gigabit lines at any other addresses in the zip code.
The rule I've come up with is one user action, one request, one response. By 'one response', I mean one HTTP response containing DOM data; if that response triggers further requests for CSS, images, fonts, or whatever, that's fine, but all the modifications to the DOM need to be in that first request.
An amazing thing.
I lived happily on dialup when I was a teenager, with just one major use case for more bandwidth.
The other issue that's under-considered is lower spec devices. Way more people use cheap Android phones than fancy last-five-years iPhones. Are you testing on those more common devices?
This is spot on for me. I live in a low-density community that got telcom early and the infrastructure has yet to be upgraded. So, despite being a relatively wealthy area, we suffer from poor service and have to choose between flaky high latency high bandwidth (Starlink) and flaky low latency low bandwidth (DSL). I’ve chosen the latter to this point. Point to point wireless isn’t an option because of the geography.
(if you don't have a favorite, try react.dev)
We're using this benchmark all the time on https://www.firefly-lang.org/ to try to keep it a perfect 100%.
Seems sensible to take a small convenience hit now to mitigate those risks.
The ol reliable plain HTML stuff usually works great though, even when you have to wait a bit for it to load.
I've been trying to convince them to try Starlink, but they're unwilling to pay for the $500+ equipment costs.
One of my neighbors is apparently using Starlink since I see a Starlink router show up in my Wi-Fi scan.
Many people have already said designing for iffy internet helps everyone: this is true for slimming your payload, but not necessarily for designing around dropped connections. On a plane or train, you might alternate between no internet and good internet, so you can just retry anything that failed when the connection is back, but a rural connection can be spotty all the time. And I think the calculus for devs isn't clearly positive when you have to design qualitatively new error handling pathways that many people will never use.
For example, cloning a git repo is non-resumable. Cloning a larger repo can be almost impossible since the probability the connection doesn't drop in the middle falls to zero. The sparse checkout feature has helped a lot here. Cargo also used to be very hard to use on rural internet until sparse registries.
And the LLM will remember all the rules for blind/deaf/etc...
Also I think until LLMs become reliable (which may be never), using them in the way you describe is a terrible idea. You don't want your UI to all of a sudden hallucinate something that screws it up.
As far as emitting internationalized interfaces - yes, it absolutely makes sense to do it this way. If you're asking for an address and the customer is in the US, the LLM can easily whip up a form for that kind of address. If you're somewhere else, it can do that too. There's no reason for bespoke interfaces that never get the upgrade because someone made it overly complicated for some reason.
Back in the day, AOP was almost a big thing (for a small subset of programmers). Perhaps what was missing was having a generalized LLM that allowed for the concern to be injected. Forgot your ALT tag? LLM. Not internationalized? LLM. Need an uncomplicated Lynx-compatible view? LLM.
Huh, worked fine for me: https://i.imgur.com/Y7lTOac.png
It is telling that tech giants make tools to test their software in poor networking conditions. It may not look like they care, until you try software by those who really don't care.
So if your market is a global one, there's a chance even a fortune 500 company could struggle to load your product in their HQ because of their terrible internet connection. And I suspect it's probably even worse in some South American/African/Asian countries in the developing world...
The NTIA or FCC just released an updated map a few days ago (part of the BEAD overhaul) that shows the locations currently covered by existing unlicensed fixed wireless.
Quick Google search didn't find a link but I have it buried in one of my work slack channels. I'll come back with the map data if somebody else doesn't.
The state of broadband is way, way worse than people think in the US.
Indirect Link: https://medium.com/spin-vt/impact-of-unlicensed-fixed-wirele...
It's hard to make a website that doesn't work reasonably well with that though. Even with all the messed up Javascript dependencies you might have.
I feel for those on older non-Starlink satellite links, e.g. islands in the Pacific that still rely on Inmarsat geostationary links: 492 kbit/s maximum (lucky if you get that!), 3-second latency, pricing by the kilobyte. Their lifestyle just doesn't use the internet much at all by necessity, but at those speeds, even when they're willing to pay the exorbitant cost, sites will just time out.
Starlink has been a revolution for these communities but it's still not everywhere yet.
Except I sometimes read articles on the subway and not all subway tunnels in my city have cell service. Or sometimes I read articles when I eat in some place that's located deep inside an old building with thick brick walls. Public wifi is also not guaranteed to be stable — I stayed in hotels where my room was too far from the AP so the speed was utter shit. Once, I loaded some Medium articles on my phone before boarding a plane, only to discover, after takeoff, that these articles don't make sense without images that didn't load.
Anyway. As a user, for these kinds of static pages, I expect the page to be fully loaded as soon as my browser hides the progress bar. Dear web developers, please do your best to meet this expectation.
Programmers: Let's design for crappy internet
Internet providers: Maybe it's not necessary
[0] https://danluu.com/web-bloat/
Loads of people are on "a slow link" or iffy internet who would otherwise have fast internet. Like... plane wifi! Or driving through less populated areas (or the UK outside of London) with spotty phone reception.
The article looks at broadband penetration in the US. Which is useful, but you need to plan for worst-case scenarios, not just statistically likely cases.
I have blazing fast internet at home, and that isn't helpful for the AAA app when I need to get roadside assistance.
I want the nytimes app to sync data for offline reading, locally caching literally all of the text from this week should be happening.
oh well, 250mbit is almost worth it.
^: when i say this, i wasn't averaging, it drops from 0:54-0:59, in essence, "5 seconds of every minute"
Your point reminded me of the NASA Mars rover deployed in 2021 with the little Ingenuity helicopter on board.
The helicopter had a bug that required a software update, which NASA had to upload over three network legs: the Deep Space Network to Mars, a UHF leg from Mars-orbiting robotic vehicles to the rover, and a ZigBee connection from the rover to the Ingenuity helicopter. A single message could take between 5 and 20 minutes to arrive...
Edit: I described this in an article back then:
https://robotsinplainenglish.com/e/2021-04-18-install.html
- Mobile first design
- Near unlimited high speed bandwidth
There's never been a case where both are blanket true.
Use the software that you make, in the same conditions that your users will use it in.
Most mobile apps are developed by people in offices with massive connections, or home offices with symmetric gigabit fiber or similar. The developers make sure the stuff works and then they're on to the next thing. The first time someone tries to use it on a spotty cellular connection is probably when the first user installs the update.
You don't have to work on a connection like that all the time, but you need to experience your app on that sort of connection, on a regular basis, if you care about your users' experience.
Of course, it's that last part that's the critical missing piece from most app development.
So yes, please assume that even your most adept power users will have crappy internet at least some of the time.