So it's being pushed because it'll be easier for a few big players in industry. Everybody else suffers.
It's a decision by Certificate Authorities, the ones that sell TLS certificate services, and web browser vendors. The CAs benefit from increased demand for their product, while the browser vendors benefit because the added management overhead raises the minimum threshold needed to be competitive.
There are security benefits, yes. But as someone who works in infrastructure management, including on 25- or 30-year-old systems in some cases, it's very difficult not to find this frustrating. I need the tools I will have in 10 years to still be able to manage systems that were implemented 15 years ago. That's reality.
Doubtless people here have connected to their router's web interface using the gateway IP address and been annoyed that the web browser complains so much about either insecure HTTP or an unverified TLS certificate. The Internet is an important part of computer security, but it's not the only part of computer security.
I wish technical groups would invest some time in real solutions for long-term, limited access systems which operate for decades at a time without 24/7 access to the Internet. Part of the reason infrastructure feels like running Java v1.3 on Windows 98 is because it's so widely ignored.
It is continuously frustrating to me to see the arrogant dismissiveness which people in charge of such technical groups display towards the real world usage of their systems. It's some classic ivory tower "we know better than you" stuff, and it needs to stop. In the real world, things are messy and don't conform to the tidy ideas that the Chrome team at Google has. But there's nothing forcing them to wake up and face reality, so they keep making things harder and harder for the rest of us in their pursuit of dogmatic goals.
It astounds me that there's no non-invasive local solution for going to my router's (or whatever other appliance's) web page without my browser throwing warnings and calling it evil. Truly a fuck-up (purposeful or not) by all involved in creating the standards. We need local TLS without the hoops.
Simplest possible, least invasive, most secure thing I can think of: QR code on the router with the CA cert of the router. Open cert manager app on laptop/phone, scan QR code, import CA cert. Comms are now secure (assuming nobody replaced the sticker).
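To make the sticker idea concrete, here's a minimal sketch of what such a QR payload could look like. The `CACERT:v1` scheme is entirely made up for illustration (no such standard exists, which is the point of the complaint); the idea is just that the cert travels with its own fingerprint so the importing app can show the fingerprint for comparison against the sticker.

```python
import base64
import hashlib

def qr_payload_for_ca(der_bytes: bytes) -> str:
    """Build a (hypothetical) QR payload carrying the router's CA cert.

    The "CACERT:v1" scheme name is invented for illustration only.
    """
    b64 = base64.b64encode(der_bytes).decode("ascii")
    fp = hashlib.sha256(der_bytes).hexdigest()
    # The fingerprint travels alongside the cert so the importing app
    # can display it for out-of-band comparison with the sticker.
    return f"CACERT:v1;fp=sha256:{fp};cert={b64}"

def parse_qr_payload(payload: str) -> bytes:
    """Recover the DER cert and check it matches the embedded fingerprint."""
    if not payload.startswith("CACERT:v1;"):
        raise ValueError("not a CACERT payload")
    fields = dict(f.split("=", 1)
                  for f in payload[len("CACERT:v1;"):].split(";"))
    der = base64.b64decode(fields["cert"])
    if hashlib.sha256(der).hexdigest() != fields["fp"].removeprefix("sha256:"):
        raise ValueError("fingerprint mismatch")
    return der
```

A typical DER-encoded cert is 1-2 KB, which fits comfortably in a byte-mode QR code (max ~2.9 KB).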
The crazy thing? There are already two WiFi QR code standards, but neither includes the CA cert. There's a "Wi-Fi Easy Connect" standard that is intended to secure the network for an enterprise, and there's a random Java QR code library that made up its own standard for just encoding an access point and WPA shared key (and Android and iOS both adopted it, so now it's a de-facto standard).
End-user security wasn't a consideration for either of them. With the former they only cared about protecting the enterprise network, and with the latter they just wanted to make it easier to get onto a non-Enterprise network. The user still has to fend for themselves once they're on the network.
This is a terrible solution. Now you require an Internet connection and a (non-abandoned) third-party service to configure a LAN device. Not to mention the countless industrial devices whose operators would typically have no chance to see a QR code.
The solution I just mentioned specifically avoids an internet connection or third parties. It's a self-signed cert you add to your computer's CA registry. 100% offline and independent of anything but your own computer and the router. The QR code doesn't require an internet connection. And the first standard I mentioned was designed for industrial devices.
Not only would that set a questionable precedent if users learn to casually add new trust roots, it would also need support for new certificate extensions to limit validity to that device only. It's far from obvious that would be a net gain for Internet security in general.
It might be easier to extend the URL format with support for certificate fingerprints. It would only require support in web browsers, which are updated much faster than operating systems. It could also be made in a backwards compatible way, for example by extending the username syntax. That way old browsers would continue to show the warning and new browsers would accept the self signed URL format in a secure way.
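A rough sketch of that idea, assuming an invented `sha256$<hexdigest>` convention in the username position of the URL (nothing like this is standardized). An old browser would treat the username as opaque and still warn; a new browser could extract the pin and compare it against the DER certificate the server presents:

```python
import hashlib
from urllib.parse import urlsplit

def parse_pinned_url(url: str):
    """Split a hypothetical pinned URL such as
    https://sha256$<hexdigest>@192.168.1.1/ into (host, expected_fp).
    The 'sha256$' username syntax is invented for illustration.
    """
    parts = urlsplit(url)
    user = parts.username or ""
    if not user.startswith("sha256$"):
        raise ValueError("no certificate pin in URL")
    return parts.hostname, user[len("sha256$"):].lower()

def cert_matches_pin(der_cert: bytes, expected_fp: str) -> bool:
    """Compare the presented certificate (DER bytes, e.g. from
    ssl.SSLSocket.getpeercert(binary_form=True)) against the pin."""
    return hashlib.sha256(der_cert).hexdigest() == expected_fp
```

The router could then print the full URL, pin included, on its sticker instead of a bare IP address.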
Have you seen the state of typical consumer router firmware? Security hasn't been a serious concern for a decade plus.
They only stopped using global default passwords because people were being visibly compromised on the scale of millions at a time.
Your router should use ACME with a your-slug.network.home domain name (a communal suffix would be nice, but more realistically some vendor-specific domain suffix that you could CNAME), and then you should access it via that, locally. Ideally your router should run split-brain DNS for your network. If you want, you can check a box and make everything available globally via DNS-SD.
Wouldn't that allow the router to MITM all encrypted data that goes through it?
If it were a CA cert yes. It could instead be a self-signed server (non-CA) cert, that couldn't be used for requests to anything else.
All my personal and professional feelings aside (they are mixed) it would be fascinating to consider a subnet based TLS scheme. Usually I have to bang on doors to manage certs at the load balancer level anyway.
I’ve actually put a decent amount of thought into this. I envision a Raspberry Pi-sized device with a simple front-panel UI. This serves as your home CA. It bootstraps itself with a generated key and root cert and presents on the network using a self-issued cert signed by the bootstrapped CA. It also shows the root key fingerprint on the front panel. On your computer, you go to its web UI and accept the risk, but you also verify the fingerprint of the cert issuer against what’s displayed on the front panel. Once you do that, you can download and install your newly trusted root. Do this on all your machines that want to trust the CA. There’s your root of trust.
Now for issuing certs to devices like your router, there’s a registration process where the device generates a key and requests a cert from the CA, presenting its public key. It requests a cert with a local name like “router.local”. No cert is issued but the CA displays a message on its front panel asking if you want to associate router.local with the displayed pubkey fingerprint. Once you confirm, the device can obtain and auto renew the cert indefinitely using that same public key.
Now on your computer, you can hit local https endpoints by name and get TLS with no warnings. In an ideal world you’d get devices to adopt a little friendly UX for choosing their network name and showing the pubkey to the user, as well as discovering the CA (maybe integrate with dhcp), but to start off you’d definitely have to do some weird hacks.
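The enrollment flow described above can be modeled as a small state machine: nothing is issued until the owner confirms the (name, pubkey fingerprint) pair on the front panel, and renewals with the same key skip the confirmation. This is a toy sketch with real X.509 signing stubbed out:

```python
import hashlib
import secrets

class HomeCA:
    """Toy model of the enrollment flow: a device requests a cert for a
    local name, and nothing is issued until the owner confirms the
    (name, pubkey fingerprint) pair shown on the CA's front panel.
    Actual certificate signing is stubbed out."""

    def __init__(self):
        self.pending = {}   # name -> pubkey fingerprint awaiting confirmation
        self.approved = {}  # name -> pinned pubkey fingerprint

    def request_cert(self, name: str, pubkey: bytes):
        fp = hashlib.sha256(pubkey).hexdigest()
        if self.approved.get(name) == fp:
            # Renewal with the same key: auto-approved indefinitely.
            return self._issue(name, fp)
        self.pending[name] = fp
        print(f"front panel: associate {name} with {fp[:16]}...?")
        return None  # nothing issued until the owner confirms

    def confirm(self, name: str):
        self.approved[name] = self.pending.pop(name)

    def _issue(self, name: str, fp: str):
        # Stand-in for signing and returning a real certificate.
        return {"cn": name, "key_fp": fp, "serial": secrets.token_hex(8)}
```

A key rotation would simply drop the device back into the pending state, forcing a fresh front-panel confirmation.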
This is an incredibly convoluted scenario for a use case with a near-zero chance of a MITM attack. Security ops is cancer.
Please tell me this is satire.
What can I say, I am a PKI nerd, and I think the state of local networking is significantly harmed by consumer devices needing to speak HTTP (because modern browsers make local HTTPS very difficult to use). This is less about increasing security and more about increasing usability without destroying security by coaching people to bypass cert checks. And as home networks inevitably become more and more crowded with devices, I think it will be beneficial to be able to strongly identify those devices from the network side without resorting to keeping some kind of inventory database, which nobody is going to do.
It also helps that I know exactly how easy it is to build this type of infrastructure because I have built it professionally twice.
Why should your browser trust the router's self-signed certificate? After you verify that it is the correct cert you can configure Firefox or your OS to trust it.
Because local routers by definition control the (proposed?) .internal TLD, while nobody controls the .local mDNS/Zeroconf one, so the router or any local network device should arguably be trusted at the TLS level automatically.
Training users to click the scary “trust this self-signed certificate once/always” button won’t end well.
Honestly, I'd just like web browsers to not complain when you're connecting to an IP on the same subnet by entering https://10.0.0.1/ or similar.
Yes, it's possible that the system is compromised and it's redirecting all traffic to a local proxy and that it's also malicious.
It's still absurd to think that the web browser needs to make the user jump through the same hoops because of that exceptional case, while having the same user experience as if you just connected to https://bankofamerica.com/ and the TLS cert isn't trusted. The program should be smarter than that, even if it's a "local network only" mode.
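As a rough illustration of what a "local network only" mode could key on, here's the kind of same-subnet test a browser might apply before relaxing its warnings. The /24 default is an assumption for the sketch; a real implementation would read the interface's actual netmask:

```python
import ipaddress

def same_subnet(client_ip: str, server_ip: str, prefixlen: int = 24) -> bool:
    """Sketch of a 'local network only' test: are both endpoints in the
    same subnet? A real browser would use the interface's configured
    netmask rather than assuming /24."""
    net = ipaddress.ip_network(f"{client_ip}/{prefixlen}", strict=False)
    return ipaddress.ip_address(server_ip) in net
```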
Certificates protect against man in the middle attacks and those are a thing on local networks.
I wonder what this would look like: for things like routers, you could display a private root in something like a QR code in the documentation, then have some kind of protocol for trusting that root only when connecting to the router, and have the router continuously rotate the keys it presents.
Yeah, what they'll do is put a QR code on the bottom, and it'll direct you to the app store where they want you to pay them $5 so they can permanently connect to your router and gather data from it. Oh, and they'll let you set up your WiFi password, I guess.
That's their "solution".
I wonder if a separate CA would be useful for non-public-internet TLS certificates. Imagine a certificate that won't expire for 25 years issued by it.
Such a certificate should not be trusted for domain verification purposes, even though it should match the domain. Instead it should be trusted for encryption / stream-integrity purposes. It should be accepted on IPs outside of publicly routable space, like 192.168.0.0/16, or link-local IPv6 addresses. It should be possible to issue it for TLDs like .local. It should result in the usual invalid-certificate warning if served off a public internet address.
In other words, it should be handled a bit like a self-signed certificate, only without the hassle of adding your handcrafted CA to every browser / OS.
Of course it would only make sense if a major browser would trust this special CA in its browser by default. That is, Google is in a position to introduce it. I wonder if they may have any incentive though. (To say nothing of Apple.)
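The address-scope rule proposed above is easy to state precisely. A sketch, assuming the browser would check the connected peer's address before honoring such a long-lived local cert:

```python
import ipaddress

def local_only_cert_acceptable(addr: str) -> bool:
    """Sketch of the proposed rule: a long-lived 'local CA' cert is
    acceptable only when served from non-publicly-routable address
    space (RFC 1918, link-local, loopback); anywhere else it gets the
    usual invalid-certificate warning."""
    ip = ipaddress.ip_address(addr)
    return ip.is_private or ip.is_link_local or ip.is_loopback
```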
But what would be the value of such a certificate over a self-signed one? For example, if the ACME Router Corp uses this special CA to issue a certificate for acmerouter.local and then preloads it on all of its routers, it will sooner or later be extracted by someone.
So in a way, a certificate the device generates and self-signs would actually be better, since at least the private key stays on the device and isn’t shared.
The value: you open such a URL with a bog-standard, just-installed browser, and the browser does not complain about the certificate being suspicious.
The private key of course stays within the device, or anywhere the certificate is generated. The idea is that the CA from which the certificate is derived is already trusted by the browser, in a special way.
Compromise one device, extract the private key, have a "trusted for a very long time" cert that identifies like devices of that type, sneak it into a target network for man in the middle shenanigans.
If someone does that, you've already been pwned. In reality you'd limit the CA to be domain-scoped. I don't know why domain-scoped CAs aren't a thing.
> If someone does that you’ve already been pwned
Then why use encryption at all when your threat model for encrypted communication can't handle a malicious actor on the network?
Because there are various things in HTML and JS that require https.
(Though getting the browser to just assume HTTP to local domains is secure, like it already does for http://localhost, would solve that.)
Yes
Old cruft dying there for decades
That's the reality and that's an issue unrelated to TLS
Running unmanaged compute at home (or elsewhere ..) is the issue here.
If web browsers adopted DANE, we could bypass CAs and still have TLS.
A domain-validated secure key exchange would indeed be a massive step up in security, compared to the mess that is the web PKI. But it wouldn't help with the issue at hand here: home router bootstrap. It's hard to give these devices a valid domain name out of the box. Most obvious ways have problems with either security or user-friendliness.
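For context, DANE (RFC 6698) pins certificates via TLSA records in DNSSEC-signed DNS. The matching step itself is trivial; a sketch handling only selector 0 (full certificate) with matching type 1 (SHA-256):

```python
import hashlib

def tlsa_matches(der_cert: bytes, tlsa_assoc_hex: str,
                 selector: int = 0, mtype: int = 1) -> bool:
    """Check a presented cert (DER bytes) against a DANE TLSA record's
    association data. This sketch handles only selector 0 (full cert)
    with matching type 1 (SHA-256). Real DANE also requires a
    DNSSEC-validated lookup of the _443._tcp.<name> TLSA RRset, which
    the Python stdlib cannot perform."""
    if selector != 0 or mtype != 1:
        raise NotImplementedError("sketch handles full-cert/SHA-256 only")
    return hashlib.sha256(der_cert).hexdigest() == tlsa_assoc_hex.lower()
```

The hard part is not the comparison but the bootstrap: a router fresh out of the box has no DNSSEC-signed name for a TLSA record to live under.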
Frankly, unless 25- and 30-year-old systems are being continually updated to adhere to newer TLS standards, they are not getting many benefits from TLS.
Practically, the solution is virtual machines with the compatible software you'll need to manage those older devices 10 years in the future, or run a secure proxy for them.
Internet routers are definitely one of the worst offenders because originating a root of trust between disparate devices is actually a hard problem, especially over a public channel like wifi. Generally, I'd say the correct answer to this is that wifi router manufacturers need to maintain secure infrastructure for enrolling their devices. If manufacturers can't bother to maintain this kind of infrastructure then they almost certainly won't be providing security updates in firmware either, so they're a poor choice for an Internet router.
It is reasonable for the WebPKI of 2025 to assume that the Internet encompasses the entire scope of its problem.
Or, equivalently, it's being pushed because customers of "big players", of which there are a great many, are exposed to security risk by the status quo that the change mitigates.
It makes the system more reliable and more secure for everyone.
I think that's a big win.
The root reason is that revocation is broken, and we need to do better to get the security properties we demand of the Web PKI.
> It makes the system more reliable
It might in theory, but I suspect it's going to make things very, very unreliable for quite a while before it (hopefully) gets better. I think a double-digit fraction of our infrastructure outages are probably already due to expired certificates.
And because of that it may well tip a whole class of uses back to completely insecure connections because TLS is just "too hard". So I am not sure if it will achieve the "more secure" bit either.
It makes systems more reliable and secure for operators who can leverage automation, for whatever reason. By the same token, it adds a lot of barriers for things like embedded devices, learners, etc., where automating certificate renewal might not be possible.
Putting a manually generated cert on an embedded device is inherently insecure, unless you have complete physical control over the device.
And as mentioned in other comments, the revocation system doesn't really work, and reducing the validity time of certs reduces the risks there.
Unfortunately, there isn't really a good solution for many embedded and local-network cases. I think ideally there would be an easy way to add a CA that is trusted only for a specific domain or local IP address; then the device could generate its own certs from a local CA. And/or a way to add trust for a self-signed cert with a longer lifetime.
This is a bad definition of security, I think. But you could come up with variations here that would be good enough for most home network use cases. IMO, being able to control the certificate on the device is a crucial consumer right.
Real question: What is the correct way to handle certs on embedded devices? I never thought about it before I read this comment.
There are many embedded devices for which TLS is simply not feasible. For remote sensing, when you are relying on battery power and need to maximise device battery life, then the power budget is critical. Telemetry is the biggest drain on the power budget, so anything that means spending more time with the RF system powered up should be avoided. TLS falls into this category.
Yes, but the question is about devices that can reasonably run TLS.
The answer is local acme with your router issuing certs for your ULA prefix or “home zone” domain.
And rather than fixing the issues with revocation, it's being shuffled off to the users.
Good example of enshittification
"Suffer" is a strong word for those of us who've been using things like Let's Encrypt for years now without issue.
Unless I misunderstood, GP mentions that the problem stems from WebPKI's central role in server identity management. Think of these cert lifetimes as forcefully being signed out after 47 days of being signed in.
> easier for a few big players in industry
Not necessarily. As OP mentions, more certs mean bigger CT logs, and more frequent renewals mean more load. Like everything else, this is a trade-off. Unfortunately for you & me, as customers of cert authorities, 47 days is now the agreed cut-off (not 42).