I like this at first glance. The idea of a random website probing arbitrary local IPs (or any IPs, for that matter) with HTTP requests is insane. I wouldn't care if it breaks some enterprise apps or integrations - enterprises could re-enable this "feature" via management tools, and normal users could configure it themselves: just show a popup saying "this website wants to control local devices - allow/deny".
buildfocus · 10h ago
This is a misunderstanding. Local network devices are protected from random websites by CORS, and have been for many years. It's not perfect, but it's generally quite effective.
The issue is that CORS gates access only on the consent of the target server. It must return headers that opt into receiving requests from the website.
This proposal aims to tighten that, so that even if the website and the network device both actively want to communicate, the user's permission is also explicitly requested. Historically we assumed server & website agreement was sufficient, but Facebook's recent tricks where websites secretly talked to apps on your phone have broken that assumption - websites might be in cahoots with local network servers to work against your interests.
xp84 · 10h ago
Doesn't CORS just restrict whether the webpage JS context gets to see the response of the target request? The request itself happens anyway, right?
So the attack vector that I can imagine is that JS on the browser can issue a specially crafted request to a vulnerable printer or whatever that triggers arbitrary code execution on that other device. That code might be sufficient to cause the printer to carry out your evil task, including making an outbound connection to the attacker's server. Of course, the webpage would not be able to discover whether it was successful, but that may not be important.
tombakt · 10h ago
No, a preflight (OPTIONS) request is sent by the browser before the request initiated by the application. I would be surprised if the client browser could control this OPTIONS request beyond just the URL. I am curious if anyone else has any input on this topic, though.
Maybe there is some side-channel timing that can be used to determine the existence of a device, but not so sure about actually crafting and delivering a malicious payload.
varenc · 4h ago
This tag:
<img src="http://192.168.1.1/router?reboot=1">
triggers a local network GET request without any CORS involvement.
Exactly. You can also trigger POST requests with forms (HTML forms can't send DELETE, but POST is plenty); this is called CSRF if the endpoint doesn't validate a token in the request. CORS only protects against unauthorized XHR requests. All decades-old OWASP basics, really.
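For illustration, a minimal sketch of the auto-submitting form variant (the router URL and field name are invented):

  <form action="http://192.168.1.1/settings" method="POST" id="f">
    <input type="hidden" name="dns_server" value="203.0.113.6">
  </form>
  <script>document.getElementById('f').submit();</script>

The browser happily sends this POST cross-origin; CORS only stops the page from reading the response.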
LegionMammal978 · 10h ago
The idea is, the malicious actor would use a 'simple request' that doesn't need a preflight (basically, a GET or POST request with form data or plain text), and manage to construct a payload that exploits the target device. But I have yet to see a realistic example of such a payload (the paper I read about the idea only vaguely pointed at the existence of polyglot payloads).
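Concretely, something like this sketch goes out with no preflight at all (device address and path invented):

  // text/plain is a "simple" content type, so the browser sends the
  // request immediately; only reading the response is blocked.
  fetch('http://192.168.1.20/update', {
    method: 'POST',
    headers: { 'Content-Type': 'text/plain' },
    body: 'whatever-payload-the-device-accepts',
  });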
MajesticHobo2 · 9h ago
There doesn't need to be any kind of "polyglot payload". Local network services and devices that accept only simple HTTP requests are extremely common. The request will go through and alter state, etc.; you just won't be able to read the response from the browser.
EGreg · 8h ago
Exactly. The people answering must not be aware that "simple" requests don't require a preflight.
freeone3000 · 9h ago
Oh, you can only send arbitrary text or form submissions. That’s SO MUCH.
It can send a JSON-RPC request to your Bitcoin node and empty your wallet
rafram · 5h ago
You’re forgetting { mode: 'no-cors' }, which makes the response opaque (no way to read the data) but completely bypasses the CORS preflight request and header checks.
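A sketch (target URL invented):

  // The request is sent and any side effect on the device still happens;
  // the page just gets back an opaque response it cannot read.
  fetch('http://192.168.1.1/set?dns=203.0.113.6', { mode: 'no-cors' });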
layer8 · 8h ago
There is a limited, but potentially effective, attack surface via URL parameters.
rerdavies · 8h ago
I can confirm that local websites that don't implement CORS via the OPTIONS request cannot be browsed with mainstream browsers. Does nothing to prevent non-browser applications running on the local network from accessing your website.
As far as I can tell, the only thing this proposal does that CORS does not already do is provide some level of enterprise configuration control to guard against the scenario where your users are using compromised internet sites that can ping around your internal network for agents running on compromised desktops. Maybe? I don't get it.
If somebody would fix the "no https for local connections" issue, then IoT websites could use authenticated logins to fix both problems. Non-HTTPS websites also have no access to browser crypto APIs, so roll-your-own auth (the horror) isn't an option either. Frustrating!
dgoldstein0 · 1h ago
I don't believe this is true? As others have pointed out, preflight OPTIONS requests only happen for non-simple requests. CORS response headers are still required to read a cross-origin response, but that still leaves a huge window for a malicious site to send side-effectful requests to local network devices running badly implemented web servers.
nbadg · 10h ago
Or simply perform a timing attack as a way of exploring the local network, though I'm not sure if the browser implementation immediately returns after the request is made (ex fetch API is called) but before the response is received. Presumably it doesn't, which would expose it to timing attacks as a way of exploring the network.
dgoldstein0 · 1h ago
Almost every JS API for making requests is asynchronous, so they do return after the request is made. The exception is synchronous XHR calls, but I'm not sure if those are still supported.
... Anyhow, I think it doesn't matter, because you can listen for the error/failure of most async requests. CORS errors are indistinguishable from network errors - the browser tells the JS it got status code 0 with no further information - but the timing of that could support some sort of inference? Hard to say what that would be, though. Maybe if you knew the target webserver was slow but would respond to certain requests, a slower failed local request could mean it actually reached a target device.
That said, why not just fire off simple http requests with your intended payload? Abusing the csrf vulnerabilities of local network devices seems far easier than trying to make something out of a timing attack here.
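For reference, the timing probe people usually describe is roughly this sketch (the interpretation of the elapsed time is guesswork):

  async function probe(host) {
    const t0 = performance.now();
    try {
      await fetch(`http://${host}/`, { mode: 'no-cors' });
    } catch (e) {
      // the error type is useless (everything looks like a network error)...
    }
    return performance.now() - t0; // ...but the elapsed time may not be
  }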
rnicholus · 6h ago
CORS doesn’t protect you from anything. Quite the opposite: it _allows_ cross origin communication (provided you follow the spec). The same origin policy is what protects you.
friendzis · 53m ago
> The issue is that CORS gates access only on the consent of the target server. It must return headers that opt into receiving requests from the website.
False. CORS only gates non-simple requests (via options), simple requests are sent regardless of CORS config, there is no gating whatsoever.
IshKebab · 9h ago
I think this can be circumvented by DNS rebinding, though your requests won't have the authentication cookies for the target, so you would still need some kind of exploit (or a completely unprotected target).
MBCook · 6h ago
How? The browser would still have to resolve it to a final IP right?
hsbauauvhabzb · 10h ago
CORS prevents the site from accessing the response body. In some scenarios a website could, for example, blindly attempt to authenticate to your router and modify settings by guessing your router's brand/model and password.
Aeolun · 8h ago
How would Facebook do that? Scan all likely local ranges for what could be your phone, with a web server running on the phone? That seems more like a problem of the phone allowing an app to start something like that and keep it running in the background.
londons_explore · 2h ago
WebRTC allows you to find the local ranges.
Typically there are only 256 IPs, so a scan of them all is almost instant.
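A sketch of the scanning half, assuming the /24 prefix was already recovered via WebRTC (the timeout and timing heuristic are guesswork):

  async function scanLan(prefix) { // e.g. 'http://192.168.1'
    const hits = [];
    await Promise.all([...Array(256).keys()].map(async (i) => {
      const host = `${prefix}.${i}`;
      const t0 = performance.now();
      try {
        await fetch(`${host}/`, { mode: 'no-cors', signal: AbortSignal.timeout(1000) });
        hits.push(host); // opaque response: something answered
      } catch {
        // a fast refusal suggests a live host; a timeout suggests nothing there
        if (performance.now() - t0 < 200) hits.push(host);
      }
    }));
    return hits;
  }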
ameliaquining · 9h ago
Is this kind of attack actually in scope for this proposal? The explainer doesn't mention it.
kmeisthax · 10h ago
THE MYTH OF "CONSENSUAL" REQUESTS
Client: I consent
Server: I consent
User: I DON'T!
ISN'T THERE SOMEBODY YOU FORGOT TO ASK?
cwillu · 2h ago
Does anyone remember when the user-agent was an agent of the user?
jm4 · 6h ago
This sounds crazy to me. Why should websites ever have access to the local network? That presents an entirely new threat model for which we don’t have a solution. Is there even a use case for this for which there isn’t already a better solution?
loaph · 4h ago
I've used https://pairdrop.net/ before to share files between devices on the same LAN. It obviously wouldn't have to be a website, but it's pretty convenient since all my devices I wanted to share files on already have a browser.
charcircuit · 3h ago
>for which we don’t have a solution
It's called ZTA, Zero Trust Architecture. Devices shouldn't assume the LAN is secure.
lucideer · 11h ago
> normal users could configure it themselves, just show a popup "this website wants to control local devices - allow/deny".
MacOS currently does this (per app, not per site) & most users just click yes without a second thought. Doing it per site might create a little more apprehension, but I imagine not much.
mastazi · 8h ago
Do we have any evidence that most users just click yes?
My parents who are non-technical click no by default to everything, sometimes they ask for my assistance when something doesn't work and often it's because they denied some permission that is essential for an app to work e.g. maybe they denied access to the microphone to an audio call app.
Unless we have statistics, I don't think we can make assumptions.
technion · 6h ago
The number of "malware" infections I've responded to over the years that involved browser push notifications on Windows desktops is completely absurd. Chrome and Edge clearly ask for permission to enable a browser push.
The moment a user gets this permission request, as far as I can tell, they will hit approve 100% of the time. We have one office where the staff have complained that it's impossible to look at astrology websites without committing to desktop popups selling McAfee. Which implies those staff, despite having been trained to hit "no", believe declining is impossible.
(yes, we can disable with a GPO, which I heavily promote, but that org has political problems).
lucideer · 1h ago
I have no statistics, but I wouldn't consider older parents the typical case here. My parents never click yes on anything, but my young colleagues in non-engineering roles in my office do. And I'd say even a decent % of the engineering colleagues do too - especially the vibe coders. And they all spend a lot more time on their computers than my parents do.
Aeolun · 8h ago
As a counter example, I think all these dialogs are annoying as hell and click yes to almost everything. If I’m installing the app I have pre-vetted it to ensure it’s marginally trustworthy.
paxys · 9h ago
People accept permission prompts from apps because they consciously downloaded the app and generally have an idea about the developer and what the app does. If a social media app asks for permission to access your photos, it's easy to understand why; same with a music streamer wanting to connect to your smart speaker.
A random website someone linked me to wanting to access my local network is a very different case. I'm absolutely not giving network or location or camera or any other sort of access to websites except in very extreme circumstances.
poincaredisk · 8h ago
"Please accept the [tech word salad] popup to verify your identity"
Maybe this won't fool you, but it would trick 90% of internet users. (And even if it was 20% instead of 90%, that's still way too much.)
grokkedit · 11h ago
The problem is: without allowing it, web UIs like Synology's won't work, since they require your browser to connect to the local network... as it is, it's not great
planb · 10h ago
Why? I’d guess requests from a local network site to itself (maybe even to others on the same network) will be allowed.
zbuttram · 9h ago
With the proposal in the OP, I would think so yes. But the MacOS setting mentioned directly above is blanket per-app at the OS level.
jay_kyburz · 10h ago
This proposal is for websites outside your network contacting inside your network. I assume local IPs will still work.
Marsymars · 10h ago
Note that the proposal also covers loopbacks, so domain names for local access would also still work.
mystified5016 · 10h ago
I can't believe that anyone still thinks a popup permission modal offers any type of security. Windows UAC has shown quite definitively that users will always click through any modal in their way without thought or comprehension.
Besides that, approximately zero laypersons will have even the slightest clue what this permission means, the risks involved, or why they might want to prevent it. All they know is that the website they want is not working, and the website tells them to enable this or that permission. They will all blindly enable it every single time.
ameliaquining · 9h ago
I don't think anyone's under the impression that this is a perfect solution. But it's better than nothing, and the options are this, nothing, or a security barrier that can't be bypassed with a permission prompt. And it was determined that the latter would break too many existing sites that have legitimate (i.e., doing something the end user actively wants) reason to talk to local devices.
xp84 · 10h ago
This is so true. The modern Mac is a sea of Allow/Don't Allow prompts, mixed with the slightly more infantilizing alternative of "Block" / "Open System Preferences", where you have to prove you know what you're doing by manually browsing for the app and adding it to the list of apps granted whatever permission.
They're just two different approaches with the same flaw: people with no clue how tech works cannot completely protect themselves from every possible attacker while also having sophisticated networked features. Nobody has provided a decent alternative, other than some kind of fully bubble-wrapped limited account using Group Policies to ban all those perms from even being asked for.
Gigachad · 26m ago
A better option would be to put Mark Zuckerberg in prison for deploying malware to a massive number of people.
donnachangstein · 8h ago
> The modern Mac is a sea of Allow/Don't Allow prompts
Remember when they used to mock this as part of their marketing?
Windows Vista would spawn a permissions prompt when users did something as innocuous as creating a shortcut on their desktop.
Microsoft deserved to be mocked for that implementation.
Gigachad · 25m ago
MacOS shows a permission dialog when I plug my AirPods in to charge. I have no idea what I'm even giving permission for, but it pops up every time.
AStonesThrow · 5h ago
I once encountered malware on my roommate’s Windows 98 system. It was a worm designed to rewrite every image file as a VBS script that would replicate and re-infect every possible file whenever it was clicked or executed. It hid the VBS extensions and masqueraded as the original images.
Creation of a shortcut on Windows is not necessarily innocuous. It was a common first vector for dropping malware, as users were accustomed to installing software that did the same thing. A Windows shortcut can hide an arbitrary pathname, arbitrary command-line arguments, a custom icon, and more; and these can be modified at any time.
So whether it was a mistake for UAC to be overzealous or obstructionist, or Microsoft was already being mocked for poor security, perhaps they weren’t wrong to raise awareness about such maneuvers.
GeekyBear · 5h ago
A user creating a shortcut manually is not something that requires a permissions prompt.
If you want to teach users to ignore security prompts, then completely pointless nagging is how you do it.
socalgal2 · 6h ago
I wish they (Apple/Microsoft/Google/...) would do similar things for USB and Bluetooth.
Lately, every app I install wants Bluetooth access to scan all my Bluetooth devices. I don't want that. At most, I want the app to have to declare in its manifest some specific device IDs (a short list) that it is allowed to connect to, and have the OS limit its connections to only those devices. For example, the Bose app should only be able to see Bose devices, nothing else. The CVS (pharmacy) app should only be able to connect to CVS devices, whatever those are. All I know is the app asked for permission. I denied it.
I might even prefer it if the app had to register the device IDs and the user were then prompted, the same way camera/GPS access is prompted. Via the OS, it might see a device that the CVS app registered for in its manifest. The OS would pop up "CVS app would like to connect to device ABC? Just this once / only when the app is running / always" (similar to the way iOS handles location).
By ID, I mean some prefix that a company registers for its devices: bose.xxx; the app's manifest says it wants to connect to "bose.*" and the OS filters.
Similarly for USB, and maybe local network devices. Come up with an ID scheme and have the OS prevent apps from connecting to anything not matching that ID. Effectively, don't let apps browse the network, USB, or Bluetooth.
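A sketch of what such a manifest entry could look like (entirely invented syntax, not any real platform's):

  {
    "device-permissions": {
      "bluetooth": ["bose.*"],   // only devices under the registered bose. prefix
      "usb": ["bose.qc45"],      // one specific registered device ID
      "local-network": []        // no LAN browsing at all
    }
  }

The OS would enumerate devices and let the app see only the matching ones.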
3eb7988a1663 · 4h ago
I am still holding out hope that eventually at least Apple will offer fake permission grants to applications. Oh, app XYZ "needs" to see my contact list to proceed? Well it gets a randomized fake list, indistinguishable from the real one. Similar with GPS.
I have been told that WhatsApp does not let you name contacts without sharing your address book back to Facebook.
nothrabannosir · 3h ago
In iOS you can share a subset of your contacts. This is functionally equivalent and works as you described for WhatsApp.
totetsu · 5h ago
Like the github 3rd party application integration. "ABC would like to see your repositories, which ones do you want to share?"
paxys · 9h ago
It's crazy to me that this has always been the default behavior for web browsers. A public website being able to silently access your entire filesystem would be an absurd security hole. Yet all local network services are considered fair game for XHR, and security is left to the server itself. If you are a developer and run your company's webapp on your dev machine for testing (with loose or non-existent security defaults), facebook.com or google.com or literally anyone else could be accessing it right now. Heck, think of everything people deploy unauthed on their home network because they trust their router's firewall. Does every one of them have the correct CORS configuration?
thaumasiotes · 2h ago
> Does every one of them have the correct CORS configuration?
I would guess it's closer to 0% than 0.1%.
pacifika · 12h ago
Internet Explorer solved this with its zoning system, right?
Ironically, Chrome partially supported and utilized IE security zones on Windows, though it was not well documented.
pacifika · 11h ago
Oh yeah forgot about that, amazing.
nailer · 8h ago
Honestly I just assumed a modern equivalent existed. That it doesn’t is ridiculous. Local network should be a special permission like the camera or microphone.
sroussey · 12h ago
I guess this would help stop Meta's sneaky identifier sharing, where native apps and websites with their SDK embedded communicate surreptitiously through localhost, particularly on Android.
Do note that since the removal of NPAPI plugins years ago, locally-installed software that intends to be used by one or more public websites has to run an HTTP server on localhost.
It would be really annoying if this use case was made into an unreasonable hassle or killed entirely. (Alternatively, browser developers could've offered a real alternative, but it's a bit late for that now.)
michaelt · 11h ago
Doesn't most software just register a protocol handler with the OS? Then a website can hand the browser a zoommtg:// link, which the browser opens with Zoom?
Things like Jupyter Notebooks will presumably be unaffected by this, as they're not doing any cross-origin requests.
And likewise, when a command line tool wants you to log in with OAuth2 and returns you to a localhost URL, it's a simple redirect, not a cross-origin request, so it should likewise be allowed?
ronsor · 10h ago
That works if you want to launch an application from a website, but it doesn't work if you want to actively communicate with an application from a website.
fn-mote · 10h ago
This needs more detail to make it clear what you are wishing for that will not happen.
It seems like you're thinking of a specific application, or at least use-case. Can you elaborate?
Once you're launching an application, it seems like the application can negotiate with the external site directly if it wants.
xp84 · 10h ago
#1 use case would be a password manager. It would be best if the browser-plugin part could ping, say, the 1Password native app, which runs locally on your PC, and say "Yo, I need a password for google.com" - then the native app springs into action, prompts for biometrics, locates the password or lets the user choose, then returns it directly to the browser for filling.
Sure you can make a fully cloud-reliant PW manager, which has to have your key stored in the browser and fetch your vault from the server, but a lot of us like having that information never have to leave our computers.
spiffyk · 9h ago
Browser extensions play by very different rules than websites already. The proposal is for the latter and I doubt it is going to affect the former, other than MAYBE an extra permanent permission.
cAtte_ · 9h ago
you missed the point. password managers are one of the many use cases for this feature; that they just so happen to be mostly implemented as extensions does not mean that the feature is only useful for extensions
A common use case, whether for 3D printers, switches, routers, or NAS devices is that you've got a centrally hosted management UI that then sends requests directly to your local devices.
This allows you to use a single centrally hosted website as user interface, without the control traffic leaving your network. e.g. Plex uses this.
michaelt · 9h ago
I don't think this proposal will stop you visiting the management UI for devices like switches and NASes on the local network. You'll be able to visit http://192.168.0.1 and it'll work just fine?
This is just about blocking cross-origin requests from other websites. I probably don't want every ad network iframe being able to talk to my router's admin UI.
kuschku · 9h ago
That's not what I'm talking about.
A common example is this:
1. I visit ui.manufacturer.tld
2. I click "add device" and enter 192.168.0.230, repeating this for my other local devices.
3. The website ui.manufacturer.tld now shows me a dashboard with aggregate metrics from all my switches and routers, which it collects by fetch(...) ing data from all of them.
The manufacturer's site is just a static page. It stores the list of devices, and the credentials to connect to them, in localStorage.
None of the data ever leaves my network, but I can just bookmark ui.manufacturer.tld and control all of my devices at once.
This is a relatively neat approach providing the same comfort as cloud control, without the privacy nightmare.
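In code, the whole pattern is nothing more exotic than this sketch (device-list shape and endpoint invented; each device opts in via its own CORS headers, which is exactly the server-consent model the proposal wants to put a user prompt on top of):

  // Runs on ui.manufacturer.tld, which is just a static page.
  const devices = JSON.parse(localStorage.getItem('devices') ?? '[]');
  const stats = await Promise.all(
    devices.map((d) =>
      fetch(`http://${d.ip}/api/stats`, {
        headers: { 'Authorization': `Bearer ${d.token}` },
      }).then((r) => r.json()) // aggregate metrics, all fetched locally
    )
  );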
hypercube33 · 9h ago
Windows Admin Center, but only local, which I rather hate
IshKebab · 9h ago
It would be amazing if that method of communicating with a local app was killed entirely, because it's been a very very common source of security vulnerabilities.
> locally-installed software that intends to be used by one or more public websites has to run an HTTP server on localhost
if that software runs with a pull approach, instead of a push one, the server becomes unnecessary
bonus: then you won't have websites grossly probing local networks that aren't theirs (ew)
skybrian · 11h ago
While this will help to block many websites that have no business making local connections at all, it's still very coarse-grained.
Most websites that need this permission only need to access one local server. Granting them access to everything violates the principle of least privilege. Most users don't know what's running on localhost or on their local network, so they won't understand the risk.
paxys · 10h ago
> Most users don't know what's running on localhost or on their local network, so they won't understand the risk.
Yes, which is why they also won't understand when the browser asks whether they'd like to allow the site to visit http://localhost:3146 vs http://localhost:8089. A sensible permission message ("allow this site to access resources on your local network") is better than technical mumbo jumbo that will just make them click "yes" in confusion.
xp84 · 10h ago
Either way they'll click "yes" as long as the attacker site properly primes them for it.
For instance, on the phishing site they clicked on from an email, they'll first be prompted like:
"Chase need to verify your Local Network identity to keep your account details safe. Please ensure that you click "Yes" on the following screen to confirm your identity and access account."
Yes, that's meaningless gibberish but most people would say:
• "Not sure what that means..."
• "I DO want to access my account, though."
derefr · 9h ago
In an ideal world, the browser could act as an mDNS client, discovering local services, so that it could then show the pretty name of the relevant service in the security prompt.
In the world we live in, of course, almost nothing on your average LAN has an associated mDNS service advertisement.
skybrian · 4h ago
On a phone at least, it should be "do you want to allow website A to connect to app B."
(It's harder to do for the rest of the local network, though.)
nine_k · 10h ago
A comprehensive implementation would be a firewall. Which CIDRs, which ports, etc.
I wish there were an API to build such a firewall, e.g. as part of a browser extension, but also a simple default UI allowing the user to grant access to a particular machine (e.g. the router), to the LAN, to a VPN (based on the routing table), or to "private networks" in general, in the sense Windows ascribes to that. Also access to localhost, separately. The site could ask for one of these categories explicitly.
qwertox · 37m ago
Proposing this in 2025. While probably knowing about this problem since Chrome was released (2008).
Why not treat any local access as if it were an access to a microphone?
G_o_D · 4h ago
CORS doesn't stop POST requests, nor fetch with 'no-cors' supplied in JavaScript. It's just that you can't read the response; that doesn't mean the request isn't sent by the browser.
Then again, a local app can run a proxy server that adds CORS headers to the proxied request, and then you can access any site via the JS fetch/XMLHttpRequest interface. Even an extension is able to modify headers to bypass CORS.
Bypassing CORS is just a matter of editing headers; what's really hard or impossible to bypass is CSP rules.
The Facebook app itself runs such a CORS proxy server, and even without it, a normal HTTP or WebSocket server is enough to send metrics.
Chrome already has a flag to prevent localhost access, but as said, WebSockets can still be used.
Completely banning localhost is detrimental:
many users use self-hosted bookmarking, note-taking, and password-manager-like solutions that rely on a local server.
rerdavies · 9h ago
I worry that there are problems with IPv6. Can anyone explain to me whether there actually is a way to determine whether an IPv6 address is site-local? If not, the proposal is going to have problems on IPv6-only networks.
I have struggled with this issue in the past. I have an IoT application whose web server wants to reject any requests from a non-local address. After failing to find a way to distinguish IPv6 local addresses, I ended up redirecting IPv6 requests to the local IPv4 address. And that was the end of that.
I feel like I would be in a better position to raise concerns if I could confirm that my understanding is correct: that there is no practical way for an application to determine whether an IPv6 address is link- or site-local.
I did experiment with IPv6 "link local" addresses, but these seem to be something else altogether (for use by routers rather than general applications), and they don't seem to work for regular application use.
There is some wiggle room provided by including .local addresses as local servers. But implementation of .local domains seems to be inconsistent across OSs at present. Raspberry Pi OS, for example, will do mDNS resolution of "someaddress" but not of "someaddress.local"; Ubuntu 24.04 will resolve "someaddress.local", but not "someaddress". And neither will resolve "someaddress.local." (which I think was recommended at one point, but is now deprecated and non-functional). Which does seem like an issue worth raising.
And it frustrates the HECK out of me that nobody will allow use of privately issued certs for local network addresses. The "no https for local addresses" thing needs to be fixed.
gerdesj · 7h ago
IPv6 still has the concept of "routable". You just have to decide what site-local means in terms of the routing table.
In old school IPv4 you would normally assign octet two to a site and octet three to a VLAN. Oh and you start with 10.
With IPv6 you have a lot more options.
All IPv6 devices have link local addresses - that's the LAN or local VLAN - a bit like APIPA.
Then you start on .local - that's Apple and DNS and the like and nothing to do with IP addresses. That's name to address.
You can do Lets Encrypt (ACME) for "local network addresses" (I assume you mean RFC 1918 addresses: 10/8, 172.16/12, 192.168/16) - you need to look into DNS-01 and perhaps DNS CNAME. It does require quite some effort.
There is a very good set of reasons why TLS certs are a bit of a bugger to get working effectively these days. There are solutions freely available but they are also quite hard to implement. At least they are free. I remember the days when even packet capture required opening your wallet.
You might look into acme.sh if Certbot fails to work for you. You also might need to bolt down IP addressing in general, IPv4 vs IPv6 and DNS and mDNS (and Bonjour) as concepts - you seem a little hazy on that lot.
Bon chance mate
donnachangstein · 8h ago
> Can anyone explain to me if there is any way to determine whether an inbound IPv6 address is "local"?
No, because it's the antithesis of IPv6 which is supposed to be globally routable. The concept isn't supposed to exist.
Not to mention Google can't even agree on the meaning of "local" - the article states they redefined "local" to mean "private" halfway through brainstorming this garbage.
Creating a nonstandard, arbitrary security boundary based on CIDR subnets as an HTTP extension is completely bonkers.
As for your application, you're going about it all wrong. Just assume your application is public-facing and design your security with that in mind. Too many applications make this mistake and design saloon-door security into their "local only" application which results in overreaction such as the insanity that is the topic of discussion here.
".local" is reserved for mDNS and is in the RFC, though this is frequently and widely ignored.
ryanisnan · 8h ago
It's very useful to have this additional information in something like a network address. I agree, you shouldn't rely on it, but IPv6 hasn't clicked with me yet, and the whole "globally routable" concept is one of the reasons. I hear that, and think, no, I don't agree.
donnachangstein · 8h ago
Globally routable doesn't mean you don't have firewalls in between filtering and blocking traffic. You can be globally routable but drop all incoming traffic at what you define as a perimeter. E.g. the WAN interface of a typical home network.
The concept is frequently misunderstood in that IPv4 consumer SOHO "routers" often combine a NAT and routing function with a firewall, but the functions are separate.
rerdavies · 2h ago
It is widely understood that my SOHO router provides NAT for IPv4, and routing+firewall (but no NAT) for IPv6. And it provides absolutely no configurability for the IPv6 firewall (which would be extremely difficult anyway) because all of the IPv6 addresses allocated to devices on my home network are impermanent and short-lived.
ryanisnan · 8h ago
That makes sense. I do love the idea of living in a world without NAT.
fiddlerwoaroof · 31m ago
I don’t: NAT may have been a hack at first, but it’s my favorite feature provided by routers and why I disable ipv6 on my local network
rerdavies · 2h ago
@donnachangstein:
The device is an IoT guitar pedal that runs on a Raspberry Pi. In performance, on stage, a web UI runs on a phone or tablet over a hotspot connection on the Pi, which is NOT internet connected (since there's no expectation of a Wi-Fi router or internet access at a public venue). Or the Pi runs on a home Wi-Fi network, using a browser-hosted UI on a laptop or desktop. Or, I suppose, over an away-from-home Wi-Fi connection at a studio or rehearsal space.
It is not reasonable to expect my users to purchase domain names and certs for their $60 guitar pedal, which are not going to work anyway if they are playing away from their home network. Nor is ACME provisioning an option, because the device may be in use but unconnected to the internet for months at a time if users are using the Pi hotspot at home.
I can't use password authentication to get access to the Pi web server, because I can't use HTTPS to conceal the password, and browsers disable access to JavaScript crypto APIs on non-HTTPS pages (not that I'd really trust myself to write JavaScript code to obtain auth tokens from the Pi server anyway), so doing auth over an HTTP connection doesn't really strike me as a serious option either.
Nor is it reasonable to expect my non-technical users to spend hours configuring their networks. It's an IoT device that should be just drop-and-play (maybe with a one-time device setup on the Pi).
There is absolutely NO way I am going to expose the server to the open internet without HTTPS and password authentication. The server provides a complex API over which effects are configured and controlled - way too much surface area to let anyone on the internet poke around in. So it uses IPv4 isolation, which is the best I can figure out given the circumstances. It's not like I haven't given the problem serious consideration. I just don't see a solution.
The use case is not hugely different from an IoT toothbrush. But standards organizations have chosen to leave both my (hypothetical) toothbrush and my application utterly defenseless when it comes to security. Is it any surprise that IoT toothbrushes have security problems?
How would YOU see https working on a device like that?
> ".local" is reserved for mDNS and is in the RFC, though this is frequently and widely ignored.
Yes. That was my point. It is currently widely ignored.
AStonesThrow · 8h ago
> can't even agree on the meaning of "local"
Well, who can agree on this? Local network, private network, intranet, Tailscale and VPN, Tor? IPv6 ULA, NAT/CGNAT, SOCKS, transparent proxy? What resources are "local" to me and what resources are "remote"?
This is quite a thorny and sometimes philosophical question. Web developers are working at the OSI Layer 6-7 / TCP/IP Application Layer.
Now even cookies and things like CSRF were trying to differentiate "servers" and "origins" and "resources" along the lines of the DNS hierarchy. But this has been fraught with complication, because DNS was not intended to delineate such things, and can't do so cleanly 100% of the time.
Now these proposals are trying to reach even lower in the OSI model - Layer 3, Layer 2. If you're asking "what is on my LAN" or "what is a private network", that is not something that HTTPS or web services are supposed to know. Are you going to ask them to delve into your routing table or test the network interfaces? HTTPS was never supposed to know about your netmask or your next-hop router.
So this is only one reason that there is no elegant solution for the problem. And it has been foundational to the way the web was designed: "given a uniform locator, find this resource wherever it may be, whenever I request it." That was a simpler proposition when the Web was used to publish interesting and encyclopedic information, rather than deliver applications and access sensitive systems.
nickcw · 10h ago
This has the potential to break rclone's oauth mechanism as it relies on setting the redirect URL to localhost so when the oauth is done rclone (which is running on your computer) gets called.
I guess if the permissions dialog is sensibly worded then the user will allow it.
I think this is probably a sensible proposal but I'm sure it will break stuff people are relying on.
0xCMP · 9h ago
IIUC this should not break redirects. This only affects: (1) fetch/XMLHttpRequests, (2) resources linked to AND loaded on a page (e.g. images, JS, CSS, etc.)
As noted in another comment, this doesn't work unless the responding server provides proper CORS headers allowing the content to be loaded by the browser in that context: so for any request to work, the server is either wide open (cors: *) or cooperating with the requesting code (cors: website.co). The changes prevent communication without user authorization.
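In other words, for the page to actually read a cross-origin response, the local server has to answer with something like:

  Access-Control-Allow-Origin: https://website.co

(or the wide-open *); otherwise the browser withholds the response body from the page.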
AdmiralAsshat · 11h ago
uBlock / uMatrix does this by default, I believe.
I often see sites like PayPal trying to probe 127.0.0.1. For my "security", I'm sure...
potholereseller · 10h ago
It appears to not have been enabled by default on my instance of uBlock; it seems a specific filter list is used to implement this [0]; that filter was un-checked; I have no idea why. The contents of that filter list are here [1]; notice that there are exceptions for certain services, so be sure to read through the exceptions before enabling it.
[0] Filter Lists -> Privacy -> Block Outsider Intrusion into Lan
The web is currently just “controlled code execution” on your device. This will never work if not done properly. We need a real “web 3.0” where web apps can run natively and containerized, but done correctly, where they are properly sandboxed. This will bring performance and security.
eternityforest · 2h ago
I really hope this gets implemented, and more importantly, I really hope they have the ability to access an HTTP local site from an HTTPS domain.
There are so many excellent home automation and media/entertainment use cases for something like this.
profmonocle · 11h ago
Assuming that RFC1918 addresses mean "local" network is wrong. It means "private". Many large enterprises use RFC1918 for private, internal web sites.
One internal site I spend hours a day using has a 10.x.x.x IP address. The servers for that site are on the other side of the country and are many network hops away. It's a big company, our corporate network is very very large.
A better definition of "local IP" would be whether the IP is in the same subnet as the client, i.e. look up the client's own IP and subnet mask and determine if a packet to a given IP would need to be routed through the default gateway.
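Something like this sketch of the test (Node.js for illustration; a browser would need the same logic at a lower layer):

  const os = require('os');

  const ipv4ToInt = (ip) =>
    ip.split('.').reduce((acc, o) => (acc << 8) + Number(o), 0) >>> 0;

  // True if `target` is inside the subnet of any non-internal local interface.
  function isOnLocalSubnet(target) {
    const t = ipv4ToInt(target);
    return Object.values(os.networkInterfaces()).flat().some((iface) => {
      // family is 'IPv4' on current Node (briefly the number 4 on some versions)
      if ((iface.family !== 'IPv4' && iface.family !== 4) || iface.internal) return false;
      const mask = ipv4ToInt(iface.netmask);
      return (ipv4ToInt(iface.address) & mask) === (t & mask);
    });
  }

Under this definition, my 10.x.x.x example above would correctly count as non-local, since it sits many hops away and outside my interface's subnet mask.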
JdeBP · 4h ago
Many years ago, before it was dropped, IP version 6 had a concept of "site local" addresses, which (if it had applied to version 4) would have encompassed the corporate intranet addresses that you are talking about. Routed within the corporate intranet; but not routed over corporate borders.
Think of this proposal's definition of "local" (always a tricky adjective in networking, and reportedly the proposers here have bikeshedded it extensively) as encompassing both Local Area Network addresses and non-LAN "site local" addresses.
aaronmdjones · 2h ago
fd00::/8 (within fc00::/7) is still reserved for this purpose (site-local IPv6 addressing).
fc00::/8 (a network block for a registry of organisation-specific assignments for site-local use) is the idea that was abandoned.
Roughly speaking, the following are analogs:
169.254/16 -> fe80::/64 (within fe80::/10)
10/8, 172.16/12, 192.168/16 -> a randomly-generated network (within fd00::/8)
For example, a service I maintain that consists of several machines in a partial WireGuard mesh uses fda2:daf7:a7d4:c4fb::/64 for its peers. The recommendation is no larger than a /48, so a /64 is fine (and I only need the one network, anyway).
fc00::/7 is not globally routable.
rerdavies · 2h ago
So in my case, I guess I need to blame the unconfigurable cable router my ISP provided me with? Since there's no way to provide reservations for IPv6 addresses. :-/
kccqzy · 11h ago
The article spends a lot of effort defining the words "local" and "private" here. It then says:
> Note that local -> local is not a local network request
So your use case won't be affected.
ale42 · 10h ago
The computer I use at work (and not only mine, many many of them) has a public IP address. Many internal services are on 10.0.0.0/8. How is this being taken into account?
numpad0 · 9h ago
10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16 are all private addresses per RFC 1918 and the documents superseding it (RFC 5735?). If it's something like 66.249.73.128/27 or 164.13.12.34/12, those are "global" IPs.
Yes, that's the point: many of our work PCs have globally routable public IPs from something like 128.130.0.0/15 (not this actual block, but something similar), and many internal services are on 10.0.0.0/8. I'm not sure I get exactly how the proposal addresses this. How does it know that 128.130.0.0/15 is actually internal and should be protected from content loaded from an external site?
lilyball · 9h ago
Your computer's own IP address is completely irrelevant. What matters is the site hostname and the IP address it resolves to.
AStonesThrow · 4h ago
People believe that "my computer" or "my smartphone" has an Internet address, but this is a simplification of how it's really working.
The reality is that each network interface has at least one Internet address, and these should usually all be different.
An ordinary computer at home could be plugged into Ethernet and active on WiFi at the same time. The Ethernet interface may have an IPv4 address and a set of IPv6 addresses, and belong to the home LAN. The WiFi adapter and interface may have a different IPv4 address, and belong to the same network, or some other network. The latter is called "multi-homing".
If you visit a site that reveals your "public" IP address(es), you may find that your public, routable IPv4 and/or IPv6 addresses differ from the ones actually assigned to your interfaces.
In order to be compliant with TCP/IP standards, your device always needs to respond on a "loopback" address in 127.0.0.0/8, and typically this is assigned to a "loopback" interface.
A network router does not identify with a singular IP address, but could answer to dozens, when many interface cards are installed. Linux will gladly add "alias" IPv4 addresses to most interface devices, and you'll see SLAAC or DHCPv6 working when there's a link-local and perhaps multiple routable IPv6 addresses on each interface.
The GP says that their work computer has a [public] routable IP address. But the same computer could have another interface, or even the same interface has additional addresses assigned to it, making it a member of that private 10.0.0.0/8 intranet. This detail may or may not be relevant to the services they're connecting to, in terms of authorization or presentation. It may be relevant to the network operators, but not to the end-user.
So as a rule of thumb: your device needs at least one IP address to connect to the Internet, but that address is associated with an interface rather than your device itself, and in a functional system, there are multiple addresses being used for different purposes, or held in reserve, and multiple interfaces that grant the device membership on at least one network.
jaywee · 9h ago
Ideally, in an organization this should be a centrally pushed group policy defining CIDRs.
Like, at home, I have 10/8 and public IPv6 addresses.
kccqzy · 10h ago
As far as I understand that doesn't matter. What matters is the frame's origin and the request.
xp84 · 9h ago
Is it a gross generalization to say that if you're visiting a site whose name resolves to a private IP address, it's a part of the same organizational entity as your computer is?
The proposal here would consider that site local and thus allowed to talk to local. What are the implications? Your employer whose VPN you're on, or whose physical facility you're located in, can get some access to the LAN where you are.
In the case where you're a remote worker and the LAN is your private home, I bet that the employer already has the ability to scan your LAN anyway, since most employers who are allowing you onto their VPN do so only from computers they own, manage, and control completely.
AshamedCaptain · 10h ago
I do not understand. Doesn't same-origin prevent all of these issues? Why on earth would you extend protection to resources based on IP address ranges? It seems like the most dubious criterion of all.
maple3142 · 1h ago
I think the problem is that some local servers are not really designed to be as secure as a public server. For example, a local server with a stupid unauthenticated endpoint like "GET /exec?cmd=rm+-rf+/*" is obviously exploitable, and same-origin does not prevent that.
fn-mote · 10h ago
I think you're mistaken about this.
Use case 1 in the document and the discussion made it clear to me.
AshamedCaptain · 8h ago
Browsers allow launching HTTP requests to localhost in the same way they allow my-malicious-website.com to launch HTTP requests to, say, mail.google.com. They can _request_ a resource, but that's about it - everything else, even many things you would expect to be able to do with the downloaded resource, is blocked by the same-origin policy. [1] Heck, we have a million problems already where file:/// websites cannot access resources from http://localhost, and vice versa.
So what's the attack vector exactly? Why would it be able to attack a local device but not your Gmail account (with your browser happily sending your auth cookies) or file:///etc/passwd?
The only attack I can imagine is that _the mere fact_ of a webserver existing on your local IP is a disclosure of information for someone, but... what's the attack scenario here again? The only thing they learn is that you run a webserver, and maybe they can check whether you serve something at a specified location.
Does this even allow identifying the router model you use? Because I can think of a bazillion better ways to do that - including simply assuming it's the default router of the specific ISP at that address.
> [Same-origin policy] prevents a malicious website on the Internet from running JS in a browser to read data from [...] a company intranet (which is protected from direct access by the attacker by not having a public IP address) and relaying that data to the attacker.
AnthonyMouse · 3h ago
This is specifically in response to the recent Facebook chicanery where their app was listening on localhost and spitting out a unique tracking ID to anything that connects, allowing arbitrary web pages to get the tracking ID and correspondingly identify the user visiting the page.
But this is trying to solve the problem in the wrong place. The problem isn't that the browser is making the connection, it's that the app betraying the user is running on the user's device. The Facebook app is malware. The premise of app store curation is that they get banned for this, right? Make everyone who wants to use Facebook use the web page now.
b0a04gl · 2h ago
this thing's leaking. localhost ain't private if random sites can hit it and get responses. devices still exposing ports like it's 2003. prompts don't help, people just click through till it goes away. cors not doing much, it's just noise now. issue's been sitting there forever, everyone patches on top but none of these local services even check who's knocking. just answers. every time.
A browser can't tell if a site is on the local network. Ambiguous addresses may not be on the local network and conversely a local network may use global addresses especially with v6.
zajio1am · 10h ago
This seems like a silly solution, considering we are in the middle of IPv6 transition, where local networks use public addresses.
rerdavies · 2h ago
Whatever happened to IPv6 site-local and link-local address ranges (ranges specifically defined as not crossing router or WAN boundaries)? They were in the original IPv6 standards, but don't seem to be implemented or supported. Or at least they aren't implemented or supported by my completely unconfigurable home cable router provided by my ISP.
MBCook · 6h ago
So because IPv6 exists we shouldn’t even try?
It’s insane to me that random internet sites can try to poke at my network or local system for any purpose without me knowing and approving it.
With all we do for security these days, this is such a massive hole it defies belief. Ever since I first saw an enterprise product that just expected end users to run a local utility (really an embedded web server) for their website to talk to, I've been amazed this hasn't been shut down.
jeroenhd · 10h ago
Even IPv6 has local devices. Determining whether that's a /64 or a /56 network may need some work, but the concept isn't all that different. Plus, you have ::1 and fe80::, of course.
mbreese · 9h ago
Even in this case, it could be useful to limit the access websites have to local servers within your subnet (/64, etc), which might be a better way to define the “local” network.
(And then corporate/enterprise managed Chrome installs could have specific subnets added to the allow list)
Hnrobert42 · 2h ago
Ironic, given that on my Mac, Chrome always asks to find other devices on my network but Firefox never does.
foota · 12h ago
The alternative proposal sounds much nicer, but unfortunately was paused due to concerns about devices not being able to support it.
I guess once this is added maybe the proposed device opt in mechanism could be used for applications to cooperatively support access without a permission prompt?
thesdev · 10h ago
Off-topic: Is the placement of the apostrophe right in the title? Should it be "a users' local network" (current version) or "a user's local network"?
IshKebab · 10h ago
It should be "from accessing a user's local network", or "from accessing users' local networks".
AStonesThrow · 12h ago
Chris Siebenmann weighs in with thoughts on:
Browers[sic] can't feasibly stop web pages from talking to private (local) IP addresses (2019)
The split-horizon DNS model mentioned in that article is, to me, insane. Your DNS responses should not change based on what network you are connected to. It breaks so many things. For one, caching breaks, because DNS caching is simplistic: responses are cached only with a TTL, and there's no way to tell your OS to associate a cached DNS response with a particular network.
I understand why some companies want this, but doing it on the DNS level is a massive hack.
If I were the decision maker I would break that use case. (Chrome probably wouldn't though.)
parliament32 · 10h ago
> Your DNS responses should not change based on what network you are connected to.
GeoDNS and similar are very broadly used by services you definitely use every day. Your DNS responses change all the time depending on what network you're connecting from.
Further: why would I want my private hosts to be resolvable outside my networks?
Of course DNS responses should change depending on what network you're on.
kccqzy · 10h ago
> but if you're inside our network perimeter and you look up their name, you get a private IP address and you have to use this IP address to talk to them
In the linked article using the wrong DNS results in inaccessibility. GeoDNS is merely a performance concern. Big difference.
> why would I want my private hosts
Inaccessibility is different. We are talking about accessible hosts requiring different IP addresses to be accessed in different networks.
dwattttt · 4h ago
If you have two interfaces connected to two separate networks, you can absolutely have another host connected to the same two networks. That host will have a different IP for each of its interfaces, you could reach it on either, and DNS on each network should resolve to the IP it's reachable at on that network.
kuschku · 9h ago
I'm surprised you've never seen this before.
Especially for universities it's very common to have the same hostname resolve to different servers, and provide different results, depending on whether you're inside the university network or not.
Some sites may require login if you're accessing them from the internet, but are freely accessible from the intranet.
Others may provide read-write access from inside, but limited read-only access from the outside.
Similar situations with split-horizon DNS are also common in corporate intranets or for people hosting Plex servers.
Ultimately all these issues are caused by NAT and would disappear if we switched to IPv6, but that would also circumvent the OP proposal.
kccqzy · 8h ago
No I haven't seen this before. I have seen however the behavior where login is required from the Internet but not on the university network; I had assumed this is based on checking the source IP of the request.
Similarly the use case of read-write access from inside, but limited read-only access from the outside is also achievable by checking the source IP.
phkahler · 8h ago
This should not be possible in the first place. There is no legitimate reason for it. Having users grant "consent" is just a way to make it seem more OK, not to stop it.
auxiliarymoose · 7h ago
There are definitely legitimate reasons—for example, a browser-based CAD system communicating with a 3D mouse.
otherayden · 8h ago
This seems like such a no-brainer, I’m shocked this isn’t already something sites need explicit permission to do
parliament32 · 10h ago
Won't this break every local-device oauth flow?
gostsamo · 12h ago
What is so hard about blocking apps on Android from listening on random ports without permission?
jeroenhd · 9h ago
The same thing that makes blocking ports on iOS and macOS so hard: there's barely any firewall on these devices by default, and the ones users may find cause more problems than users will ever think they solve.
Listening on a specific port is one of the most basic things software can possibly do. What's next, blocking apps from reading files?
Plus, this is also about blocking your phone's browser from accessing your printer, your router, or that docker container you're running without a password.
elric · 1h ago
That doesn't seem right. Can't speak to macOS, but on Android every application is sandboxed. Restricting its capabilities is trivial. Android apps certainly ARE blocked from reading files, except for some files in its storage and files the user grants it access to.
Adding two Android permissions would fix this entire class of exploits: "run local network service", and "access local network services" (maybe with a whitelist).
zb3 · 11h ago
It's not only about android, it's about exploiting local services too..
neuroelectron · 7h ago
The CIA isn't going to like this. I bet that Google monopoly case suddenly reaches a new resolution.
naikrovek · 7h ago
Why can browsers do the kinds of things they do at all?
Why does a web browser need USB or Bluetooth support? They don't.
Browsers should not be the universal platform. They’ve become the universal attack vector.
auxiliarymoose · 7h ago
With WebUSB, you can program a microcontroller without needing to install local software. With Web Bluetooth, you can wirelessly capture data from + send commands to that microcontroller.
As a developer, these standards prevent you from needing to maintain separate implementations for Windows/macOS/Linux/Android.
As a user, they let you grant and revoke sandbox permissions in a granular way, including fully removing the web app from your computer.
Browsers provide a great cross-platform sandbox and make it much easier to develop secure software across all platforms.
WebUSB and Web Bluetooth are opt-in when the site requests a connection/permission, as opposed to unlimited access by default for native apps. And if you don't want to use them, you can choose a browser that doesn't implement those standards.
What other platform (outside of web browsers) is a good alternative for securely developing cross-platform software that interacts with hardware?
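For the skeptical, the opt-in looks like this sketch (the vendor ID is hypothetical; requestDevice must run in a user-gesture handler and opens a device chooser, so there is no silent access):

  document.querySelector('#connect').addEventListener('click', async () => {
    const device = await navigator.usb.requestDevice({
      filters: [{ vendorId: 0x2341 }], // only devices matching the filter are offered
    });
    await device.open();
    await device.selectConfiguration(1);
    await device.claimInterface(0);
    // Endpoint numbers are device-specific; 1 is just an example.
    await device.transferOut(1, new TextEncoder().encode('ping'));
  });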
naikrovek · 5h ago
I’m ok with needing non-browser software for those things.
> Browsers provide a great cross-platform sandbox and make it much easier to develop secure software across all platforms.
Sure, until advertising companies find ways around and through those sandboxes, because browser authors want browsers to be capable of more in the name of a cross-platform solution. The more a browser can do, the more surface area the sandbox has. (An advertising company makes the most popular browser, by the way.)
> What other platform (outside of web browsers) is a good alternative for securely developing cross-platform software that interacts with hardware?
There isn’t one, other than maybe video game engines, but it doesn’t matter. OS vendors need to work to make cross-platform software possible; it’s their fault we need a cross-platform solution at all. Every OS is a construct, and they were constructed to be different for arbitrary reasons.
A good app-permission model in the browser is much more likely to happen, but I don’t see that really happening, either. “Too inconvenient for users [and our own in-house advertisers/malware authors]” will be the reason.
macOS handles permissions pretty well, but it could do better. If something wants local network permission, the user gets prompted. If the user says no, those network requests fail. Same with filesystem access. Linux will never have anything like this, nor will Windows, but it's what security looks like, probably.
Users will say yes to those prompts ultimately, because as soon as users have the ability to say “no” on all platforms, sites will simply gate site functionality behind the granting of those permissions because the authors of those sites want that data so badly.
The only thing that is really going to stop behavior like this is law, and that is NEVER going to happen in the US.
So, short of laws, browsers themselves must stop doing stupid crap like allowing local network access from sites that aren’t on the local network, and nonsense stuff like WebUSB. We need to give up on the idea that anyone can be safe on a platform when we want that platform to be able to do anything. Browsers must have boundaries.
Operating systems should be the police, probably, and not browsers. Web stuff is already slow as hell, and browsers should be less capable, not more capable for both security reasons and speed reasons.
gnarbarian · 9h ago
I don't like the implications of this. Say you want to host a game that has a LAN play component; that would be blocked.
Pxtl · 4h ago
Honestly I think cross-site requests were a mistake. Tracking cookies, hacks, XSS attacks, etc.
My relationship is with your site. If you want to outsource that to some other domain, do that on your servers, not in my browser.
elric · 1h ago
The mistake was putting CORS on the server side. It should have been part of the browser. "Facebook.com wants to access foo.example.com: y/n?"
But then we would have had to educate users, and ad peddlers would have lost revenue.
AStonesThrow · 4h ago
Cross-site requests have been built into the design of the WWW since the beginning. The whole idea of hyperlinking from one place to another, and amalgamating media from multiple sites into a single page, is the essence of the World Wide Web that Tim Berners-Lee conceived at CERN, based on the HyperCard stacks and the Gopher and WAIS services that preceded it.
Of course it was only later that cookies and scripting and low-trust networks were introduced.
The WWW was conceived as more of a "desktop publishing" metaphor, where pages could be formatted and multimedia presentations could be made and served to the public. It was later that the browser was harnessed as a cross-platform application delivery front-end.
Also, many sites do carefully try to guard against "linking out" or letting the user escape their walled gardens without a warning or disclaimer. As much as they may rely on third-party analytics and ad servers, most web masters want the users to remain on their site, interacting with the same site, without following an external link that would end their engagement or web session.
elric · 1h ago
CORS != hyperlinks. CORS is about random websites in your browser accessing other domains without your say-so. Websites doing stuff behind your back does feel antithetical to Tim Berners-Lee's ideals...
Pxtl · 3h ago
I'm aware of that, but obviously there's a huge difference between the user clicking a link and navigating to a page on another domain, and the site making that request on the user's behalf for a blob of JS.
cwilby · 8h ago
Is it just me or am I not seeing any example that isn't pure theory?
And if it is just me, fine I'll jump in - they should also make it so that users have to approve local network access three times. I worry about the theoretical security implications that come after they only approve local network access once.
owebmaster · 10h ago
I propose restricting android apps, not websites.
jeroenhd · 9h ago
Android apps need UDP port binding to function. You can't do QUIC without UDP. Of course you can (should) restrict localhost bound ports to the namespaces of individual apps, but there is no easy solution to this problem at the moment.
If you rely on users having to click "yes", then you're just making phones harder to use because everyone still using Facebook or Instagram will just click whatever buttons make the app work.
On the other hand, I have yet to come up with a good reason why arbitrary websites need to set up direct connections to devices within the local network.
There's the IPv6 argument against the proposed measures, which requires work to determine whether an address is local or global, but that address space is also much more difficult to enumerate than the IPv4 space that some websites try to scan. That doesn't mean IPv4 addresses shouldn't be protected, either. Even with an IPv6-shaped hole, blocking local networks (both IPv4 and local IPv6) by default makes sense for websites originating from outside.
IE did something very similar to this decades ago. They also had a system for displaying details about websites' privacy policies and data sharing. It's almost disheartening to see we're trying to come up with solutions to these problems again.
numpad0 · 9h ago
Personally I had completely forgotten that anyone and anything can do this right now.
TL;DR, IIUC: right now, random websites can try accessing content on local IPs. You can blind-load e.g. http://192.168.0.1/cgi-bin/login.cgi from JavaScript, iterating through a gigantic malicious list of known useful URLs, then grep and send back whatever you want to share with advertisers, or try POSTing backdoors to the printer's update page. No, we don't need that.
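To illustrate, a minimal sketch of that kind of blind probe, assuming made-up device URLs - with "no-cors" the page never reads a response, but the requests still fire:

    // Hypothetical blind probe against well-known device endpoints.
    // "no-cors" responses are opaque - the page can't read them - but the
    // GETs still reach the devices, and success/failure can leak whether
    // something is listening.
    const knownEndpoints = [
      "http://192.168.0.1/cgi-bin/login.cgi",
      "http://192.168.0.254/setup.html",
    ];
    for (const url of knownEndpoints) {
      fetch(url, { mode: "no-cors" })
        .then(() => console.log("something answered:", url))
        .catch(() => console.log("nothing there:", url));
    }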
Of course, OTOH, many webapps today use localhost access to pass tokens and to talk to cooperating apps, but you only need access to 127.0.0.0/8 for that which is harder to abuse, so that range can be default exempted.
I disagree. I know it’s done, but I don’t think that makes it safe or smart.
Require the user to OK it and require the server to send a header with the one _exact_ port it will access. Require that the local server _must_ use CORS and allow that server.
No website not loaded from localhost should ever be allowed to just hit random local/private IPs and ports without explicit permission.
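As a sketch of what that device-side opt-in could look like - the origin is hypothetical, and the extra header comes from Chrome's Private Network Access draft:

    // Minimal Node.js sketch of a local server opting in to exactly one web
    // origin (hypothetical). "Access-Control-Allow-Private-Network" is the
    // preflight header from Chrome's Private Network Access draft; the rest
    // is plain CORS.
    import { createServer } from "node:http";

    createServer((req, res) => {
      res.setHeader("Access-Control-Allow-Origin", "https://ui.example.com");
      if (req.method === "OPTIONS") {
        res.setHeader("Access-Control-Allow-Private-Network", "true");
        res.writeHead(204);
        res.end();
        return;
      }
      res.end(JSON.stringify({ ok: true }));
    }).listen(8080, "127.0.0.1");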
xyst · 8h ago
Advertising firms hate this.
hulitu · 11h ago
> A proposal to restrict sites from accessing a users' local network
A proposal to treat web browsers as malware? Why would a web browser connect to a socket/the internet?
loumf · 11h ago
The proposal is directed at the websites in the browser (using JS, embedded images or whatever), not the code that implements the browser.
hello_computer · 11h ago
Just the fact that this comes from Google is a hard pass for me. They sell so many AdWords scams that they clearly do not give a damn about security. "Security" from Google is just another one of their Trojan horses.
fn-mote · 10h ago
Don't post shallow dismissals. The same company runs Project Zero, which has a major positive security impact.
Project Zero is zero compared to the millions of little old ladies around the world getting scammed through AdWords. The only security big G cares about is its own. They have the tools to laser in on and punish the subtlest wrongthink on YouTube, yet it's just too tall an order to focus the same laser on tech-support scammers...
themikesanto · 11h ago
Google loves wreaking havoc on web standards. Is there really anything anyone can do about it at this point? The number of us using alternative browsers is a drop in the bucket compared to Chrome's market share.
charcircuit · 3h ago
Google open-sources the implementation, which any other browser is free to use.
zelon88 · 12h ago
I understand the idea behind it and am still kinda chewing on the scope of it all. It will probably break some enterprise applications and cause some help desk or group policy/profile headaches for some.
It would be nice to know when a site is probing the local network. But by the same token, here is Google once again putting barriers on self sufficiency and using them to promote their PaaS goals.
They'll gladly narc on your self-hosted application doing what it's supposed to do, but what about the 23 separate calls to Google CDN, ads, fonts, etc. that every website has your browser make?
I tend to believe this particular functionality is no longer of any use to Google, which is why they want to deprecate it to raise the barrier of entry for others.
iforgotpassword · 12h ago
Idk, I like the idea of my browser warning me when a random website I visit tries to talk to my network. If there's a legitimate reason, I can still click yes. This is orthogonal to any ads and data collection.
Henchman21 · 12h ago
I have this today from macOS. To me it feels more appropriate to have the OS attempt to secure running applications.
happyopossum · 12h ago
No you don't - you get a single permission prompt for the entire browser. You definitely don't get any per-site permission options from the OS.
Henchman21 · 11h ago
Ah I misunderstood, thank you
Henchman21 · 12h ago
I agree that any newly proposed standards for the web coming from Google should be met with a skeptical eye — they aren’t good stewards IMO and are usually self-serving.
I’d be interested in hearing what the folks at Ladybird think of this proposal.
https://news.ycombinator.com/item?id=44169115
As far as I can tell, the only thing this proposal does that CORS does not already do is provide some level of enterprise configuration control to guard against the scenario where your users are using compromised internet sites that can ping around your internal network for agents running on compromised desktops. Maybe? I don't get it.
If somebody would fix the "no HTTPS for local connections" issue, then IoT websites could use authenticated logins to fix both problems. Non-HTTPS websites also have no access to browser crypto APIs, so roll-your-own auth (the horror) isn't an option either. Frustrating!
... Anyhow, I think it doesn't matter, because you can listen for the error/failure of most async requests. CORS errors are equivalent to network errors - the browser tells the JS it got status code 0 with no further information - but the timing of that could enable some sort of inference? Hard to say what that would be, though. Maybe if you knew the target webserver was slow but would respond to certain requests, a slower failed local request could mean it actually reached a target device.
That said, why not just fire off simple http requests with your intended payload? Abusing the csrf vulnerabilities of local network devices seems far easier than trying to make something out of a timing attack here.
False. CORS only gates non-simple requests (via OPTIONS preflights); simple requests are sent regardless of CORS config. There is no gating whatsoever.
Typically there are only 256 IPs, so a scan of them all is almost instant.
Client: I consent
Server: I consent
User: I DON'T!
ISN'T THERE SOMEBODY YOU FORGOT TO ASK?
It's called ZTA, Zero Trust Architecture. Devices shouldn't assume the LAN is secure.
macOS currently does this (per app, not per site) & most users just click yes without a second thought. Doing it per site might create a little more apprehension, but I imagine not much.
My parents who are non-technical click no by default to everything, sometimes they ask for my assistance when something doesn't work and often it's because they denied some permission that is essential for an app to work e.g. maybe they denied access to the microphone to an audio call app.
Unless we have statistics, I don't think we can make assumptions.
The moment a user gets this permission request, as far as I can tell they will hit approve 100% of the time. We have one office where the staff have complained that it's impossible to look at astrology websites without committing to desktop popups selling McAfee - which implies those staff, despite having been trained to hit "no", believe avoiding the popups is impossible.
(yes, we can disable with a GPO, which I heavily promote, but that org has political problems).
A random website someone linked me to wanting to access my local network is a very different case. I'm absolutely not giving network or location or camera or any other sort of access to websites except in very extreme circumstances.
Maybe this won't fool you, but it would trick 90% of internet users. (And even if it was 20% instead of 90%, that's still way too much.)
Besides that, approximately zero laypersons will have even the slightest clue what this permission means, the risks involved, or why they might want to prevent it. All they know is that the website they want is not working, and the website tells them to enable this or that permission. They will all blindly enable it every single time.
They're just two different approaches with the same flaw: People with no clue how tech works cannot completely protect themselves from any possible attacker, while also having sophisticated networked features. Nobody has provided a decent alternative other than some kind of fully bubble-wrapped limited account using Group Policies, to ban all those perms from even being asked for.
Remember when they used to mock this as part of their marketing?
https://www.youtube.com/watch?v=DUPxkzV1RTc
Microsoft deserved to be mocked for that implementation.
Creation of a shortcut on Windows is not necessarily innocuous. It was a common first vector to drop malware as users were accustomed to installing software that did the same thing. A Windows shortcut can hide an arbitrary pathname, arbitrary command-line arguments, a custom icon, and more; these can be modified at any time.
So whether it was a mistake for UAC to be overzealous or obstructionist, or Microsoft was already being mocked for poor security, perhaps they weren’t wrong to raise awareness about such maneuvers.
If you want to teach users to ignore security prompts, then completely pointless nagging is how you do it.
Lately, every app I install wants Bluetooth access to scan all my Bluetooth devices. I don't want that. At most, I want the app to have to declare in its manifest some specific device IDs (a short list) that the app is allowed to connect to, and have the OS limit its connections to only those devices. For example, the Bose app should only be able to see Bose devices, nothing else. The CVS (pharmacy) app should only be able to connect to CVS devices, whatever those are. All I know is the app asked for permission. I denied it.
I might even prefer it if the app had to register the device IDs and then the user were prompted, the same way camera/GPS access is prompted. Via the OS, it might see a device that the CVS app registered for in its manifest. The OS would pop up "CVS app would like to connect to device ABC? Just this once / only when the app is running / always" (similar to the way iOS handles location).
By ID, I mean some prefix that a company registers for its devices, e.g. bose.xxx; the app's manifest says it wants to connect to "bose.*" and the OS filters.
Similarly for USB and maybe local network devices. Come up with an ID scheme, and have the OS prevent apps from connecting to anything without that ID. Effectively, don't let apps browse the network, USB, or Bluetooth.
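Web Bluetooth already gestures at this model in the browser: a site must declare filters up front, and the chooser only ever shows matching devices. A rough sketch (name prefix illustrative):

    // The chooser only lists devices whose name starts with "Bose"; the
    // site cannot enumerate anything else on the radio.
    async function pickHeadphones() {
      const device = await navigator.bluetooth.requestDevice({
        filters: [{ namePrefix: "Bose" }],
        optionalServices: ["battery_service"],
      });
      return device.gatt?.connect();
    }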
I have been told that WhatsApp does not let you name contacts without sharing your address book back to Facebook.
I would guess it's closer to 0% than 0.1%.
https://learn.microsoft.com/en-us/previous-versions/troubles...
[0] https://www.theregister.com/2025/06/03/meta_pauses_android_t...
It would be really annoying if this use case was made into an unreasonable hassle or killed entirely. (Alternatively, browser developers could've offered a real alternative, but it's a bit late for that now.)
Things like Jupyter Notebooks will presumably be unaffected by this, as they're not doing any cross-origin requests.
And likewise, when a command-line tool wants you to log in with OAuth2 and returns you to a localhost URL, that's a simple redirect, not a cross-origin request, so it should likewise be allowed?
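For what it's worth, a minimal sketch of that loopback flow (RFC 8252 style, details simplified) - the browser performs a top-level navigation to localhost, not a fetch:

    // CLI side of the loopback redirect: listen on localhost and receive
    // the browser's top-level navigation carrying the authorization code.
    import { createServer } from "node:http";

    const server = createServer((req, res) => {
      const code = new URL(req.url ?? "/", "http://127.0.0.1:8765")
        .searchParams.get("code");
      res.end("Login complete - you can close this tab.");
      server.close();
      console.log("authorization code:", code); // exchanged for tokens out-of-band
    });
    server.listen(8765, "127.0.0.1");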
It seems like you're thinking of a specific application, or at least use-case. Can you elaborate?
Once you're launching an application, it seems like the application can negotiate with the external site directly if it wants.
Sure, you can make a fully cloud-reliant password manager, which has to have your key stored in the browser and fetch your vault from the server, but a lot of us like that information never having to leave our computers.
This allows you to use a single centrally hosted website as user interface, without the control traffic leaving your network. e.g. Plex uses this.
This is just about blocking cross-origin requests from other websites. I probably don't want every ad network iframe being able to talk to my router's admin UI.
A common example is this:
1. I visit ui.manufacturer.tld
2. I click "add device" and enter 192.168.0.230, repeating this for my other local devices.
3. The website ui.manufacturer.tld now shows me a dashboard with aggregate metrics from all my switches and routers, which it collects by fetch(...) ing data from all of them.
The manufacturer's site is just a static page. It stores the list of devices, and the credentials to connect to them, in localStorage.
None of the data ever leaves my network, but I can just bookmark ui.manufacturer.tld and control all of my devices at once.
This is a relatively neat approach providing the same comfort as cloud control, without the privacy nightmare.
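A minimal sketch of the pattern, assuming a hypothetical metrics endpoint and storage key:

    // Static dashboard page: the device list stays client-side, and metrics
    // are fetched straight from each device on the LAN. Each device must
    // allow this origin via CORS.
    type Device = { name: string; host: string };

    const devices: Device[] = JSON.parse(localStorage.getItem("devices") ?? "[]");

    async function refreshDashboard(): Promise<void> {
      for (const d of devices) {
        const res = await fetch(`http://${d.host}/api/metrics`); // hypothetical endpoint
        if (res.ok) console.log(d.name, await res.json());       // stand-in for real UI
      }
    }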
If that software runs with a pull approach instead of a push one, the server becomes unnecessary.
Bonus: then you won't have websites grossly probing local networks that aren't theirs (ew).
Most websites that need this permission only need to access one local server. Granting them access to everything violates the principle of least privilege. Most users don't know what's running on localhost or on their local network, so they won't understand the risk.
Yes, which is why they also won't understand when the browser asks if you'd like to allow the site to visit http://localhost:3146 vs http://localhost:8089. A sensible permission message ("allow this site to access resources on your local network") is better than technical mumbo jumbo which will make them just click "yes" in confusion.
For instance, on the phishing site they clicked on from an email, they'll first be prompted like:
"Chase need to verify your Local Network identity to keep your account details safe. Please ensure that you click "Yes" on the following screen to confirm your identity and access account."
Yes, that's meaningless gibberish but most people would say:
• "Not sure what that means..."
• "I DO want to access my account, though."
In the world we live in, of course, almost nothing on your average LAN has an associated mDNS service advertisement.
(It's harder to do for the rest of the local network, though.)
I wish there were an API to build such a firewall, e.g. as a part of a browser extension, but also a simple default UI allowing to give access to a particular machine (e.g. router), to the LAN, to a VPN, based on the routing table, or to "private networks" in general, in the sense Windows ascribes to that. Also access to localhost separately. The site could ask one of these categories explicitly.
Why not treat any local access as if it were an access to a microphone?
Then again, a local app can run a proxy server that adds CORS headers to the proxied request, and then you can access any site via the JS fetch/XMLHttpRequest interface; even an extension is able to modify headers to bypass CORS.
Bypassing CORS is just a matter of editing headers; what's really hard or impossible to bypass is CSP rules.
The Facebook app itself is running such a CORS proxy server, and even without one, a normal HTTP or WebSocket server is enough to send metrics.
Chrome already has a flag to prevent localhost access, but as said, WebSockets can still be used.
Completely banning localhost would be detrimental:
many users rely on self-hosted bookmarking, note-taking, and password-manager solutions that depend on a local server.
I have struggled with this issue in the past. I have an IoT application whose webserver wants to reject any requests from a non-local address. After failing to find a way to distinguish IPv6 local addresses, I ended up redirecting IPv6 requests to the local IPv4 address. And that was the end of that.
I feel like I would be in a better position to raise concerns if I could confirm that my understanding is correct: that there is no practical way for an application to determine whether an IPv6 address is link- or site-local.
I did experiment with IPv6 "link local" addresses, but these seem to be something else altogether (for use by routers rather than general applications), and they don't seem to work for regular application use.
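The well-known prefix classes are at least cheap to check; the catch, as above, is that on-link IPv6 peers usually present global addresses, so a sketch like this still can't answer "is this request from my site":

    // Classify an already-parsed, lowercased IPv6 address by well-known
    // prefix. Link-local = fe80::/10, ULA = fc00::/7.
    // Note: a "global" result does NOT mean the peer is off-site.
    function classifyIPv6(addr: string): "loopback" | "link-local" | "ula" | "global" {
      if (addr === "::1") return "loopback";
      if (/^fe[89ab]/.test(addr)) return "link-local"; // fe80::/10
      if (/^f[cd]/.test(addr)) return "ula";           // fc00::/7
      return "global";
    }

    // classifyIPv6("fd12:3456:789a::1") === "ula"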
There is some wiggle room provided by treating .local addresses as local servers. But implementation of .local domains seems to be inconsistent across OSes at present. Raspberry Pi OS, for example, will do mDNS resolution of "someaddress" but not of "someaddress.local"; Ubuntu 24.04 will resolve "someaddress.local", but not "someaddress". And neither will resolve "someaddress.local." (which I think was recommended at one point, but is now deprecated and non-functional). That does seem like an issue worth raising.
And it frustrates the HECK out of me that nobody will allow use of privately issued certs for local network addresses. The "no https for local addresses" thing needs to be fixed.
In old school IPv4 you would normally assign octet two to a site and octet three to a VLAN. Oh and you start with 10.
With IPv6 you have a lot more options.
All IPv6 devices have link local addresses - that's the LAN or local VLAN - a bit like APIPA.
Then you start on .local - that's Apple and DNS and the like and nothing to do with IP addresses. That's name to address.
You can do Lets Encrypt (ACME) for "local network addresses" (I assume you mean RFC 1918 addresses: 10/8, 172.16/12, 192.168/16) - you need to look into DNS-01 and perhaps DNS CNAME. It does require quite some effort.
There is a very good set of reasons why TLS certs are a bit of a bugger to get working effectively these days. There are solutions freely available but they are also quite hard to implement. At least they are free. I remember the days when even packet capture required opening your wallet.
You might look into acme.sh if Certbot fails to work for you. You also might need to bolt down IP addressing in general, IPv4 vs IPv6 and DNS and mDNS (and Bonjour) as concepts - you seem a little hazy on that lot.
Good luck, mate
No, because it's the antithesis of IPv6 which is supposed to be globally routable. The concept isn't supposed to exist.
Not to mention Google can't even agree on the meaning of "local" - the article states they completely changed the meaning of "local" to be a redefinition of "private" halfway through brainstorming this garbage.
Creating a nonstandard, arbitrary security boundary based on CIDR subnets as an HTTP extension is completely bonkers.
As for your application, you're going about it all wrong. Just assume your application is public-facing and design your security with that in mind. Too many applications make this mistake and design saloon-door security into their "local only" application which results in overreaction such as the insanity that is the topic of discussion here.
".local" is reserved for mDNS and is in the RFC, though this is frequently and widely ignored.
The concept is frequently misunderstood in that IPv4 consumer SOHO "routers" often combine a NAT and routing function with a firewall, but the functions are separate.
The device is an IoT guitar pedal that runs on a Raspberry Pi. In performance, on stage, a web UI runs on a phone or tablet over a hotspot connection to the Pi, which is NOT internet-connected (since there's no expectation of a Wi-Fi router or internet access at a public venue). Or the Pi runs on a home Wi-Fi network, with a browser-hosted UI on a laptop or desktop. Or, I suppose, over an away-from-home Wi-Fi connection at a studio or rehearsal space.
It is not reasonable to expect my users to purchase domain names and certs for their $60 guitar pedal, which are not going to work anyway, if they are playing away from their home network. Nor is ACME provisioning an option because the device may be in use but unconnected to the internet for months at a time if users are using the Pi Hotspot at home.
I can't use password authentication to get access to the Pi web server, because I can't use HTTPS to conceal the password, and browsers disable access to JavaScript crypto APIs on non-HTTPS pages (not that I'd really trust myself to write JavaScript code to obtain auth tokens from the Pi server anyway), so doing auth over an HTTP connection doesn't really strike me as a serious option either.
Nor is it reasonable to expect my non-technical users to spend hours configuring their networks. It's an IoT device that should be just drop and play (maybe with a one-time device setup that takes place on the Pi).
There is absolutely NO way I am going to expose the server to the open internet without HTTPS and password authentication. The server provides a complex API to the client, over which effects are configured and controlled - way too much surface area to let anyone on the internet poke around in. So it uses IPv4 isolation, which is the best I can figure out given the circumstances. It's not like I haven't given the problem serious consideration. I just don't see a solution.
The use case is not hugely different from an IoT toothbrush. But standards organizations have chosen to leave both my (hypothetical) toothbrush and my application utterly defenseless when it comes to security. Is it any surprise that IoT toothbrushes have security problems?
How would YOU see https working on a device like that?
> ".local" is reserved for mDNS and is in the RFC, though this is frequently and widely ignored.
Yes. That was my point. It is currently widely ignored.
Well, who can agree on this? Local network, private network, intranet, Tailscale and VPN, Tor? IPv6 ULA, NAT/CGNAT, SOCKS, transparent proxy? What resources are "local" to me and what resources are "remote"?
This is quite a thorny and sometimes philosophical question. Web developers are working at the OSI Layer 6-7 / TCP/IP Application Layer.
https://en.wikipedia.org/wiki/OSI_model#Comparison_with_TCP/...
Now even cookies and things like CSRF were trying to differentiate "servers" and "origins" and "resources" along the lines of the DNS hierarchy. But this has been fraught with complication, because DNS was not intended to delineate such things, and can't do so cleanly 100% of the time.
Now these proposals are trying to reach even lower in the OSI model - Layer 3, Layer 2. If you're asking "what is on my LAN" or "what is a private network", that is not something that HTTPS or web services are supposed to know. Are you going to ask them to delve into your routing table or test the network interfaces? HTTPS was never supposed to know about your netmask or your next-hop router.
So this is only one reason that there is no elegant solution for the problem. And it has been foundational to the way the web was designed: "given a uniform locator, find this resource wherever it may be, whenever I request it." That was a simpler proposition when the Web was used to publish interesting and encyclopedic information, rather than deliver applications and access sensitive systems.
I guess if the permissions dialog is sensibly worded then the user will allow it.
I think this is probably a sensible proposal but I'm sure it will break stuff people are relying on.
As noted in another comment, this doesn't work unless the responding server provides proper CORS headers allowing the content to be loaded by the browser in that context: for any request to work, the server is either wide open (cors: *) or cooperating with the requesting code (cors: website.co). The changes prevent communication without user authorization.
I often see sites like Paypal trying to probe 127.0.0.1. For my "security", I'm sure...
[0] Filter Lists -> Privacy -> Block Outsider Intrusion into Lan
[1] <https://github.com/uBlockOrigin/uAssets/blob/master/filters/...>
There are so many excellent home automation and media/entertainment use cases for something like this.
One internal site I spend hours a day using has a 10.x.x.x IP address. The servers for that site are on the other side of the country and are many network hops away. It's a big company, our corporate network is very very large.
A better definition of "local IP" would be whether the IP is in the same subnet as the client, i.e. look up the client's own IP and subnet mask and determine if a packet to a given IP would need to be routed through the default gateway.
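A sketch of that check for IPv4 (simplified; note that a web page can't actually learn the client's own address and netmask, so this logic would have to live in the browser itself):

    // "Local" per this definition: the target is reachable without the
    // default gateway, i.e. client and target share the masked prefix.
    function toUint32(ip: string): number {
      return ip.split(".").reduce((acc, octet) => (acc << 8) | Number(octet), 0) >>> 0;
    }

    function sameSubnet(clientIp: string, targetIp: string, netmask: string): boolean {
      const mask = toUint32(netmask);
      return (toUint32(clientIp) & mask) === (toUint32(targetIp) & mask);
    }

    // sameSubnet("10.1.2.3", "10.200.0.1", "255.0.0.0")        === true
    // sameSubnet("192.168.1.37", "192.168.2.1", "255.255.255.0") === false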
Think of this proposal's definition of "local" (always a tricky adjective in networking, and reportedly the proposers here have bikeshedded it extensively) as encompassing both Local Area Network addresses and non-LAN "site local" addresses.
fc00::/8 (a network block for a registry of organisation-specific assignments for site-local use) is the idea that was abandoned.
Roughly speaking, the following are analogs:
169.254/16 -> fe80::/64 (within fe80::/10)
10/8, 172.16/12, 192.168/16 -> a randomly-generated network (within fd00::/8)
For example, a service I maintain that consists of several machines in a partial WireGuard mesh uses fda2:daf7:a7d4:c4fb::/64 for its peers. The recommendation is no larger than a /48, so a /64 is fine (and I only need the one network, anyway).
fc00::/7 is not globally routable.
> Note that local -> local is not a local network request
So your use case won't be affected.
1: https://www.rfc-editor.org/rfc/rfc1918
2: https://www.rfc-editor.org/rfc/rfc5735
3: https://en.wikipedia.org/wiki/Private_network
The reality is that each network interface has at least one Internet address, and these should usually all be different.
An ordinary computer at home could be plugged into Ethernet and active on WiFi at the same time. The Ethernet interface may have an IPv4 address and a set of IPv6 addresses, and belong to their home LAN. The WiFi adapter and interface may have a different IPv4 address, and belongs to the same network, or some other network. The latter is called "multi-homing".
If you visit a site that reveals your "public" IP address(es), you may find that your public, routable IPv4 and/or IPv6 addresses differ from the ones actually assigned to your interfaces.
In order to be compliant with TCP/IP standards, your device always needs to respond on a "loopback" address in 127.0.0.0/8, and typically this is assigned to a "loopback" interface.
A network router does not identify with a singular IP address, but could answer to dozens, when many interface cards are installed. Linux will gladly add "alias" IPv4 addresses to most interface devices, and you'll see SLAAC or DHCPv6 working when there's a link-local and perhaps multiple routable IPv6 addresses on each interface.
The GP says that their work computer has a [public] routable IP address. But the same computer could have another interface, or even the same interface has additional addresses assigned to it, making it a member of that private 10.0.0.0/8 intranet. This detail may or may not be relevant to the services they're connecting to, in terms of authorization or presentation. It may be relevant to the network operators, but not to the end-user.
So as a rule of thumb: your device needs at least one IP address to connect to the Internet, but that address is associated with an interface rather than your device itself, and in a functional system, there are multiple addresses being used for different purposes, or held in reserve, and multiple interfaces that grant the device membership on at least one network.
Like, at home, I have 10/8 and public IPv6 addresses.
The proposal here would consider that site local and thus allowed to talk to local. What are the implications? Your employer whose VPN you're on, or whose physical facility you're located in, can get some access to the LAN where you are.
In the case where you're a remote worker and the LAN is your private home, I bet that the employer already has the ability to scan your LAN anyway, since most employers who are allowing you onto their VPN do so only from computers they own, manage, and control completely.
Use case 1 in the document and the discussion made it clear to me.
So what's the attack vector exactly? Why would it be able to attack a local device but not attack your Gmail account (with your browser happily sending your auth cookies) or file:///etc/passwd?
The only attack I can imagine is that _the mere fact_ of a webserver existing on your local IP is a disclosure of information for someone, but ... what's the attack scenario here again? The only thing they know is you run a webserver, and maybe they can check if you serve something at a specified location.
Does this even allow identifying the router model you use? Because I can think of a bazillion better ways to do that - including simply assuming it's the default router of the specific ISP serving that address.
[1] https://developer.mozilla.org/en-US/docs/Web/Security/Same-o...
In fact, [1] literally says
> [Same-origin policy] prevents a malicious website on the Internet from running JS in a browser to read data from [...] a company intranet (which is protected from direct access by the attacker by not having a public IP address) and relaying that data to the attacker.
But this is trying to solve the problem in the wrong place. The problem isn't that the browser is making the connection, it's that the app betraying the user is running on the user's device. The Facebook app is malware. The premise of app store curation is that they get banned for this, right? Make everyone who wants to use Facebook use the web page now.
similar thread: https://news.ycombinator.com/item?id=44179276
It’s insane to me that random internet sites can try to poke at my network or local system for any purpose without me knowing and approving it.
With all we do for security these days this is such a massive hole it defies belief. Ever since I first saw an enterprise thing that just expected end users to run a local utility (really embedded web server) for their website to talk to I’ve been amazed this hasn’t been shut down.
(And then corporate/enterprise managed Chrome installs could have specific subnets added to the allow list)
I guess once this is added maybe the proposed device opt in mechanism could be used for applications to cooperatively support access without a permission prompt?
Browers[sic] can't feasibly stop web pages from talking to private (local) IP addresses (2019)
https://utcc.utoronto.ca/~cks/space/blog/web/BrowsersAndLoca...
I understand why some companies want this, but doing it on the DNS level is a massive hack.
If I were the decision maker I would break that use case. (Chrome probably wouldn't though.)
GeoDNS and similar are very broadly used by services you definitely use every day. Your DNS responses change all the time depending on what network you're connecting from.
Further: why would I want my private hosts to be resolvable outside my networks?
Of course DNS responses should change depending on what network you're on.
In the linked article using the wrong DNS results in inaccessibility. GeoDNS is merely a performance concern. Big difference.
> why would I want my private hosts
Inaccessibility is different. We are talking about accessible hosts requiring different IP addresses to be accessed in different networks.
Especially for universities it's very common to have the same hostname resolve to different servers, and provide different results, depending on whether you're inside the university network or not.
Some sites may require login if you're accessing them from the internet, but are freely accessible from the intranet.
Others may provide read-write access from inside, but limited read-only access from the outside.
Similar situations with split-horizon DNS are also common in corporate intranets or for people hosting Plex servers.
Ultimately all these issues are caused by NAT and would disappear if we switched to IPv6, but that would also circumvent the OP proposal.
Disabling this, as proposed, does not affect your ability to open http://192.168.0.1/login.html, as that's just another "web" site. If JS on http://myNAS.local/search-local.html wants to access http://myLaptop.local:8000/myNasDesktopAppRemotingApi, only then do you have to click some buttons to allow it.
Edit: uBlock Origin has filter for it[1]; was unchecked in mine.
1: https://news.ycombinator.com/item?id=44184799