As a bit of an old-school sysadmin, I find client-side SSL authentication remains both my favourite and least favourite authentication method.
Favourite because I like the decentralised model, secrets in the perimeter instead of the core, and it was also fantastic for M2M comms before OAuth etc came on the scene.
Least favourite because the client-side implementation was always ugly and unpleasant. In particular the shared key store between the Windows world (incl. IE) and Chrome, and the absence of that with Firefox. Explaining to a user how to get set up with client certificates was such a pain.
I still use this today in professional life for at least 2 B2B relationships that spring to mind. Both of them are with Telcos. I also use it for a couple of older OpenVPN setups.
I think client side SSL still holds a strong place in the toolbelt of options for doing small scale micro service models. It handles many authentication and identity challenges "for free".
EasyRSA is a great go-to tool to help you get started with it.
Author here. "Secrets in the perimeter instead of the core" is an interesting point. To my mind it can just as equally be a disadvantage - I'd usually consider the core more trusted and less exposed.
When deploying client certificate auth with HTTP load balancers, etc. you basically have to have the LB add an X-Client-Certificate: ... header and then trust that it's telling the truth. This means a LB compromise basically compromises everything, since it can just lie, whereas something like (for example) AWS's HMAC-based signature system for its APIs (where the substance of a request is signed using a shared secret unknown to the LBs) wouldn't be.
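A minimal sketch of that trust pattern (header name and framing are hypothetical, not any particular LB's convention): the backend simply believes whatever identity the LB injected, so anything that can set that header can impersonate anyone.

```python
# Sketch of a backend that trusts an LB-injected identity header.
# The LB terminated TLS, verified the client certificate, and passed
# the subject along; the backend has no way to verify the claim itself.

def handle_request(headers: dict) -> str:
    subject = headers.get("X-Client-Certificate")
    if subject is None:
        return "401 no client certificate presented"
    # A compromised LB can put any subject it likes here.
    return f"200 hello, {subject}"
```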
So I actually find this a compelling argument against client certificates in practice. It's a curious example of how the security of client certificates might be worse in practice, even though theoretically superior on the client-to-server leg, because of the backend implications they create.
However, that LB code is written once and generic for all users, so it can be thoroughly checked and audited. How often are LBs hacked? Not so often. Whereas when servers handle their own auth they have to roll their own every single time and it's inevitable that a lot of them will get it wrong (see the article on auto maker web apps getting hacked and letting people take control of cars, usually due to bad auth impls).
With a non-client-cert based authentication system, one can devise a common authentication scheme (AWS's v4 signatures being a good example), but then delegate implementation of that scheme to a centralised service (or a standardised library), rather than being reimplemented in every application service. But that service needn't be a public, user-visible service.
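As a toy sketch of that kind of scheme (loosely in the spirit of AWS's v4 signatures, but much simplified and not compatible with it): the client signs the substance of the request with a shared secret, and only the final verifier, not any LB in between, needs that secret.

```python
import hashlib
import hmac

def sign_request(secret: bytes, method: str, path: str, body: bytes) -> str:
    """Sign the parts of the request that must not be forged."""
    msg = b"\n".join([method.encode(), path.encode(), hashlib.sha256(body).digest()])
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()

def verify_request(secret: bytes, method: str, path: str, body: bytes, signature: str) -> bool:
    """Recompute the signature server-side and compare in constant time."""
    expected = sign_request(secret, method, path, body)
    return hmac.compare_digest(expected, signature)
```

An intermediary that never holds the secret can route the request but cannot forge a valid signature for a tampered body.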
I think there's a distinction to be made here between whether an application should roll its own authentication (answer: no), whether an organisation should roll its own authentication (answer: probably not), and whether, among the unified authentication systems which do or could exist, client certificates are a particularly good one (my view: probably not). The questions of who designed an authentication scheme, and of when and where that scheme is actually enforced (on what machine, in what codebase), are, except in the case of client certificates, largely orthogonal.
(I just wrote up my thoughts on this in more detail and will publish that blog post sometime.)
There's a couple of things that I think are worth digging into with this post and the grandparent:
- If a LB is a place where all traffic is going through, it does become a higher value target. Saying that they are not hacked feels very anecdotal.
- It is possible to have a LB which forwards TLS connections without MitM'ing them. There are tradeoffs, but this is entirely possible. I had a writeup of my own on it over here: https://er4hn.info/blog/2023.02.18-tls-load-balancer/
hlandau, would love to read your article on comparing different auth schemes when you have that written up.
> To my mind it can just as equally be a disadvantage - I'd usually consider the core more trusted and less exposed.
If a bad actor takes a little sneak peek at your database, they can brute-force hashed passwords or cookies and impersonate users. They can’t brute-force public keys.
It has nothing to do with the location of the data, because that is the same either way: the user has a secret credential, the server has a verification function.
> This means a LB compromise basically compromises everything
Think about what a compromise of the LB means in a world where the encryption (TLS session) is not end-to-end. The LB can now steal all the content and lie. So, same thing.
If connection security is a priority, you need the TLS session to be from client to destination server with no MITMs. This is where mutual TLS authentication is ideal, since any would-be MITMs can't fake being the client.
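In Python's ssl module, for instance, requiring the client leg of mutual TLS is a one-line setting on the server context (the certificate-path parameters here are placeholders, optional so the sketch stands alone):

```python
import ssl

def make_mtls_server_context(certfile=None, keyfile=None, cafile=None) -> ssl.SSLContext:
    """Build a server-side TLS context that rejects clients without a valid certificate."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    if certfile:
        ctx.load_cert_chain(certfile, keyfile)  # the server's own identity
    if cafile:
        ctx.load_verify_locations(cafile)       # CA that issued the client certs
    ctx.verify_mode = ssl.CERT_REQUIRED         # handshake fails without a client cert
    return ctx
```

Because the handshake itself proves possession of the client's private key, a would-be MITM without that key cannot complete the connection.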
This is why allowing TLS MITMs such as cloudflare is such a terrible idea if confidentiality matters at all.
This assumes that a compromise of confidentiality is just as undesirable as the ability to forge requests which change state. As I see it, it's going to be fairly common to have applications where one of these things is much less desirable than the other. If confidentiality is less important than ensuring requests can't be forged, client certificates may be a bad option. If confidentiality (including against enterprise MitM) is far more important than ensuring requests can't be forged, they may well be a better option. So it's a question of tradeoffs.
Of course there's also always the option to combine both and require both a client certificate and some sort of application-layer signature, for the best of both worlds at the cost of greater complexity.
I agree that the Cloudflare trend is a disaster though (as I wrote in my article about Cloudflare), people literally opting into having their traffic MitM'd.
For better or worse, a lot of services are built around layer 7 http load balancing, which means the balancer (or something in front of it) needs to unwrap the TLS before it gets to origins. Once you start sending /foo to one group of servers and /bar to another, you're stuck with a load balancer that sees all the content.
Stickiness also plays a role here. If the LB does not terminate the TLS connection, it needs to route all requests in that TLS session to the same "sticky" host.
HTTP being stateless, the LB can in theory distribute those requests to distinct hosts.
It's a tradeoff in how you route requests. If they all come from the same host, using the same source port, they are probably all related, and it's not a bad idea to have them all go to the same server node to process.
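A toy illustration of that kind of stickiness, assuming the LB only sees the TCP layer (backend addresses are hypothetical): hash the connection's source address and port so everything on that connection lands on the same backend.

```python
import hashlib

BACKENDS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # hypothetical pool

def pick_backend(src_ip: str, src_port: int, backends=BACKENDS) -> str:
    """Deterministically map a client connection to one backend."""
    key = f"{src_ip}:{src_port}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return backends[digest % len(backends)]
```

The same (IP, port) pair always hashes to the same node, so the LB never needs to look inside the TLS stream to keep the session together.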
In general I believe that having LBs decrypt HTTPS to HTTP for better routing is an anti-pattern. It makes the LB a high value target in a network. I wrote up a blog post in more detail about how to LB w/ TLS over here: https://er4hn.info/blog/2023.02.18-tls-load-balancer/
> In particular the shared key store between Windows world (Inc IE) and Chrome and absence of that with Firefox.
I don’t know about client certificates, but roots you can definitely convince Firefox ≥ 49 to pull from Windows if you set security.enterprise_roots.enabled=true in about:config[1]. (Intermediates you definitely can’t.) The caveat is that the roots will be pulled into NSS as a plain list of certs, not queried via the native Crypto API, so any accompanying info you might be using—like the undocumented externally-imposed name constraints[2]—will end up ignored.
Ah, apparently Firefox ≥ 75 knows how to pull client certs from the system, while ≥ 90 will even do it by default[3].