SSL Session Caching (in nginx)
Posted: Tue, 28 June 2011 | 17 Comments
Given that I was responsible for putting together the system architecture for Github when they moved away from their previous services provider to Anchor Systems, I keep an eye on what's going on there even though I've gotten out of professional IT.
Recently(ish), Ryan Tomayko blogged about some experiments in using Amazon CloudFront as a CDN for static assets. While discussing as-yet unsolved performance issues, he quoted a tweet from Shopify's CEO (who I would have thought would have known better):
Protip to make SSL fast: terminate in your load balancer, that makes SSL session cookies actually work.
No, no, NO -- a million times NOOOOOO!
Load balancers are, by their very nature, performance chokepoints. You can't horizontally scale your load balancer, because if you do (by, say, putting another load balancer in front of it -- don't laugh, I've seen it done) you've just re-introduced the very problem that putting SSL on the load balancer was supposed to solve (that is, getting SSL session caching to work).
The only thing you accomplish by putting your SSL sessions onto the load balancer is to bring the moment your load balancer melts down under excessive CPU usage that much closer. I dunno, maybe Shopify doesn't actually do that much traffic or something, but I certainly wouldn't be signing up for that headache if it were my infrastructure.
(Sidenote: yes, I know that you can get "SSL hardware accelerators", that put the basic crypto operations into dedicated silicon, but they're rarely cheap, and you can't horizontally scale those, either. They're effectively a specialised and outrageously expensive form of "scaling by bigger hardware" -- which is a valid method of scaling, but one with fairly terminal limitations that certainly wouldn't make it practical for Github).
The problem you're trying to solve by centralising SSL operations is this: SSL includes a mechanism whereby a client making a new TCP connection can say "here's the crypto details from last time I connected to you", and the server can just pull out of its cache all of the per-connection data that would otherwise take a long time (several TCP round trips and a smallish chunk of CPU time) to set up. This can make SSL connections a lot snappier (in Ryan's opinion, "We spend roughly as much time handshaking as we do generating the average dynamic request."), so it's a definite win if you can do it (and yes, you can do it).
Webservers don't typically do this caching by default, but setting up a simple SSL session cache for a single nginx or Apache webserver is pretty trivial. Testing it is simple, too: just install gnutls-bin and run:
gnutls-cli -V -r HOSTNAME | grep 'Session ID'
If all three Session IDs are the same, then you've got SSL session caching running. (If you're wondering why I'm using gnutls-cli rather than openssl s_client, it's because openssl is living in the pre-IPv6 dark ages, and I've got IPv6-only services running I wanted to test).
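For reference, the single-machine nginx setup really is just a couple of directives in your SSL-enabled server (or http) block -- a minimal sketch, with the cache size and timeout picked more or less arbitrarily:

    ssl_session_cache   shared:SSL:10m;
    ssl_session_timeout 10m;

(Apache's equivalent is mod_ssl's SSLSessionCache directive.)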
So that's great for your little single-server hosting setup, but per-machine caches aren't much use when you're running behind a load balancer, Github style, because each SSL-enabled TCP connection can very well end up on a different server, and a per-machine cache won't help you very much. (If you think the answer to this problem is "session affinity", you're setting yourself up for a whole other world of pain and suffering.)
Here's the kicker, though -- the SSL session is just a small, opaque chunk of bytes (OpenSSL appears to store it internally as an ASN.1 string, but that's an irrelevant implementation detail). An SSL session cache is just a mapping of the SSL session ID (a string) to the SSL session data (another string). We know how to do "shared key-value caches" -- we have the technology. There won't be too many large-scale sites out there that aren't using memcached (or something practically equivalent).
So, rather than stuff around trying to run a load balancer with enough grunt to handle all your SSL communication for all time, you can continue to horizontally scale the whole site with as many backend webservers as you need. All that's needed is to teach your SSL-using server to talk to a shared key-value cache.
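In case you're wondering what "teaching" the server actually involves: OpenSSL lets an application register external session cache callbacks, and those callbacks just have to shuffle serialised sessions in and out of whatever store you fancy. Below is a rough sketch of that mechanism wired up to libmemcached. To be clear, this is illustrative only, not my nginx patch -- the function names, server address, and TTL are invented for the example, and error handling is mostly omitted:

    /* Illustrative only: hook OpenSSL's external session cache callbacks
     * up to memcached with libmemcached. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <openssl/ssl.h>
    #include <libmemcached/memcached.h>

    static memcached_st *memc;   /* set up in setup_shared_session_cache() */

    /* memcached keys have to be printable, so hex-encode the binary session ID */
    static void id_to_key(const unsigned char *id, unsigned int len, char *key)
    {
        unsigned int i;
        for (i = 0; i < len; i++)
            sprintf(key + 2 * i, "%02x", id[i]);
    }

    /* Called by OpenSSL whenever it creates a new session: serialise and store it */
    static int new_session_cb(SSL *ssl, SSL_SESSION *sess)
    {
        unsigned int id_len;
        const unsigned char *id = SSL_SESSION_get_id(sess, &id_len);
        char key[2 * 32 + 1];
        unsigned char buf[4096], *p = buf;
        int len = i2d_SSL_SESSION(sess, NULL);   /* how big is the DER encoding? */

        if (len <= 0 || len > (int)sizeof(buf))
            return 0;
        i2d_SSL_SESSION(sess, &p);
        id_to_key(id, id_len, key);
        memcached_set(memc, key, 2 * id_len, (char *)buf, len, 300 /* TTL */, 0);
        return 0;   /* 0 == "we didn't keep a reference to sess" */
    }

    /* Called when a client asks to resume a session we don't have locally
     * (the id argument gained a const in later OpenSSL versions) */
    static SSL_SESSION *get_session_cb(SSL *ssl, unsigned char *id, int id_len, int *copy)
    {
        char key[2 * 32 + 1];
        size_t val_len;
        uint32_t flags;
        memcached_return_t rc;
        SSL_SESSION *sess = NULL;
        char *val;

        id_to_key(id, (unsigned int)id_len, key);
        val = memcached_get(memc, key, 2 * id_len, &val_len, &flags, &rc);
        if (val != NULL) {
            const unsigned char *p = (const unsigned char *)val;
            sess = d2i_SSL_SESSION(NULL, &p, (long)val_len);
            free(val);
        }
        *copy = 0;
        return sess;   /* NULL means "sorry, do a full handshake" */
    }

    /* Called when a session is invalidated */
    static void remove_session_cb(SSL_CTX *ctx, SSL_SESSION *sess)
    {
        unsigned int id_len;
        const unsigned char *id = SSL_SESSION_get_id(sess, &id_len);
        char key[2 * 32 + 1];

        id_to_key(id, id_len, key);
        memcached_delete(memc, key, 2 * id_len, 0);
    }

    /* Wire it all up once, at server startup */
    void setup_shared_session_cache(SSL_CTX *ctx)
    {
        memc = memcached_create(NULL);
        memcached_server_add(memc, "127.0.0.1", 11211);   /* your shared cache */

        SSL_CTX_set_session_cache_mode(ctx,
            SSL_SESS_CACHE_SERVER | SSL_SESS_CACHE_NO_INTERNAL);
        SSL_CTX_sess_set_new_cb(ctx, new_session_cb);
        SSL_CTX_sess_set_get_cb(ctx, get_session_cb);
        SSL_CTX_sess_set_remove_cb(ctx, remove_session_cb);
    }

The SSL_SESS_CACHE_NO_INTERNAL flag is the important bit: it stops OpenSSL keeping its own per-process copy of the cache, so every resumption genuinely goes through the shared store.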
Far be it from me to claim any sort of originality in this idea, though -- patches for Apache's mod_ssl to do this have been floating around for a while, and according to the release notes, the 2.4 series will have it all built-in. However, Github uses nginx, to great effect, and nobody's done the work to put this feature into nginx (that I could find, anyway).
Until now.
I present, for the edification of all, a series of patches to nginx 0.8.x to provide memcached-enabled SSL session caching. Porting them to other versions of nginx hopefully won't be too arduous, and if you ask nicely I might give it a shot. If you don't use memcached (gasp!), my work should at least be a template to allow you to hook into whatever system you prefer. Sharding the memcached calls wouldn't be hard either, but I'm not a huge fan of that when other options are available.
I had originally hoped to be able to say "it works at Github!" as a testimonial, but for some reason it hasn't been deployed there yet (go forth and pester!), so instead I'm just releasing it out there for people to try out on a "well, I don't know that it doesn't work" basis. If it breaks anything, it's not my fault; feel free to supply patches if you want to fix anything I've missed.
17 Comments
From: Tobias
2011-07-02 03:08
Just FYI: mod_gnutls (for Apache) has been able to use memcached for ages ;)
From: Matt Palmer
2011-07-02 07:58
Hi Tobias,
Thanks for that. I wasn’t even aware that mod_gnutls existed. Useful info for anyone who’s using Apache.
From: Lukas
2011-07-03 04:01
“You can’t horizontally scale your load balancer”
You can scale out the load balancer using ClusterIP. Of course, you then need a solution like the one you discussed in this blog post to share the SSL session data between the load balancers.
Out of curiosity, why aren’t you in professional IT anymore, and what do you do now? Drive trains?
From: Matt Palmer
2011-07-03 07:09
Hi Lukas,
Thanks for your comment. I’ve looked briefly at ClusterIP in the past, although I’ve never run it. The main thing that terrifies me about ClusterIP is that you need to completely reconfigure the cluster every time a node fails, or you add or remove a node for maintenance. This seems to me to be a showstopper, because presumably the hashing algorithm isn’t going to be able to know to keep routing existing connections to existing servers. Having the entire cluster “hiccup” every time you add new capacity isn’t something I’d like to try and sell to the business.
A centralised load balancer running IPVS, on the other hand, handles backend node failure or removal for maintenance by only restarting those connections which were on the failed node (for maintenance, you can gracefully remove the node from service to avoid dropping any connections). On load balancer failure, existing connection state can be maintained using the Server State Sync Daemon. Clients will see a small blip in throughput/new connection servicing while the load balancer fails over, but no existing connections should drop and this only happens if one particular machine fails – the currently active load balancer. I see that as a far more useful mechanism.
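For the curious, the moving parts there are just a few ipvsadm invocations -- here's a sketch with made-up addresses (192.0.2.1 as the service IP, 10.0.0.11 as one backend):

    # define the virtual service and add a real server (direct routing)
    ipvsadm -A -t 192.0.2.1:443 -s wlc
    ipvsadm -a -t 192.0.2.1:443 -r 10.0.0.11:443 -g
    # graceful removal for maintenance: weight 0 stops new connections,
    # and the existing ones drain away on their own
    ipvsadm -e -t 192.0.2.1:443 -r 10.0.0.11:443 -w 0
    # connection state sync between the active and standby balancers
    ipvsadm --start-daemon master --mcast-interface eth0    # on the active LB
    ipvsadm --start-daemon backup --mcast-interface eth0    # on the standby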
Also, I’m a little worried about the capacity issues that sending all (incoming) packets to all nodes poses. Switches that don’t handle link-level multicast “properly” (and that would be “most of them”, in my experience) are going to flood all the machines on the broadcast domain. It’s like going back to having hubs everywhere. I think I’d be really twitchy about relying on that network model in production as traffic scales.
Finally, to clear up a misconception in Flavio’s talk that you linked to, you don’t have to purchase any additional hardware to do IPVS load balancing. The real servers can double as load balancers, with failover and all the other good things in life. I don’t like to run things like that (separation of concerns and all that) but I’ve done it and it works well enough.
I got out of IT because I decided I needed a change, and yes, I went to become a train driver for Sydney’s suburban rail operator, Cityrail. They’re a bit down on people spilling the goss about their operations, so I don’t blog about it (much), but suffice it to say, I wanted a change, and gosh darn it I got a big one.
From: Florian
2011-07-05 17:54
Thanks for this post, very interesting, gonna have to think about it a bit though :)
On a different note, the link from Planet Debian ended in http://hezmatt.org/~mpalmer/blog/2011/06/28/ssl-session-caching-in-nginx - 404.
Then I fat-fingered the URL, and / is also not helpful, but you probably knew that one :)
From: Sameh
2011-08-23 19:17
Load balancing is a simple task which is not CPU consuming. A little network I/O and that’s it. At least if you don’t do computing tasks on them; and you shouldn’t, since if you don’t terminate SSL there, you cannot manipulate the stream anyway.
That’s why I do not agree with the statement “Load balancers are, by their very nature, performance chokepoints.”.
On the other hand, having SSL terminated on the LB will allow modifying requests/responses, which is a feature a lot of people cannot do without.
From: Matt Palmer
2011-08-23 20:36
Hi Sameh,
Thanks for your comment. I think we’re actually in violent agreement – my statement regarding load balancers as a chokepoint was meant only in the sense that you really need to think about every operation that your load balancer does, and whether it needs to be done there, because if you overload it, you can’t just scale it out like you do with your webservers. I completely agree that a good architecture should have the load balancer doing as little as possible, for exactly this reason.
As far as needing to modify requests/responses on the load balancer, I think that’s just down to poor architecture or other design decisions (like the never-to-be-sufficiently-damned “sticky sessions”).
From: Scott
2011-08-30 06:39
“You can’t horizontally scale your load balancer …”
how about a DNS round-robin A record and some arbitrary number of load balancers and/or datacenters? while a single LB may not scale horizontally, you can in fact scale out horizontally using LBs by using more than one, and pinning established sessions to an existing LB (via DNS TTL and optionally session cookies as well). the problem of sessions ending up on a different server behind the LB than originally served the content can be avoided by using a backing store (like redis or memcached) for session data, or sharding, and by avoiding re-encryption on the backend.
I largely agree with your post (and found it to be enlightening!), but I think the blanket characterization that offloading SSL to the load balancer doesn’t scale horizontally wasn’t entirely accurate.
cheers, /sf
From: George
2012-07-17 10:42
“You can’t horizontally scale your load balancer …”
Not entirely true.
I use 4 balancers - nginx ssl -> haproxy -> server farm and can add more easily by load-balancing ahead of them.
My firewalls support load balancing and I balance the SSL round robin to the balancers. This type of setup can be done with many commercial firewalls - and even open source. Take a look at pfSense - which has load balancing plug-ins. The boxes I use for SSL termination are 4-core CPUs (3.0 GHz) with 4 GB RAM. Never had an issue - even at 20-25 Mbit sustained SSL throughput.
Used Stunnel initially and it worked well for a while, but the forward-for patch for haproxy effectively disables session caching. Nginx does not suffer from this. I have never had the problem you have with servers jumping in mid-session. As long as the load balancer can see the IP of the client request, then persist the connection to the back-end server based on the IP (or better yet a cookie), you are good to go.
From: Matt Palmer
2012-07-17 11:01
George,
From what I can tell of your system from your description, you don’t have horizontally scaled load balancers – you’ve got horizontally scaled SSL terminators. You’ve implemented session affinity to avoid the need for proper SSL session caching, which, as I’ve said before, is a disaster waiting to happen.
From: Cody A. Ray
2013-07-21 14:55
I’m very interested in the memcached patch for nginx. Any chance you have updated (or could update) this for, say, 1.0.5?
From: Matt Palmer
2013-07-21 15:24
Hi Cody,
I’ve not updated my SSL caching patch for anything newer than 0.8. Happy to take a shot at forward-porting it, though. Any reason you can’t use 1.4? That’d be what I’d prefer to target, for simple longevity reasons.
From: Kal Sze
2013-09-05 20:23
gnutls-cli -r
has a shortcoming in that it disconnects the first connection, and then reconnects. This strategy can only confirm that the SSL session is cached and can be reused sequentially. It doesn’t tell whether the cached SSL session can be used by multiple forked child server processes concurrently. Would you know if there is a simple way to test concurrent use of the cached SSL session from the command line?
From: Andreas 'av0v' Specht
2013-09-16 05:33
I’ve tried to port the patch to Nginx 1.4.1, but it looks like it does not work with libmemcached 1.0.14. Also, your patch with a 0.8 nginx does not work on my system (nothing is stored into memcached, and the first request takes nearly an endless amount of time).
Any ideas or plans for getting a working 1.4.x patch? :-)
From: Matt Palmer
2013-09-16 08:30
Hi Andreas, I haven’t got any specific plans to port the patch to nginx 1.4.x, but if and when I get some spare time (hahahaha), I might take a stab at it.
As far as the 0.8 patch timing out, my guess would be firewalling more than code failure – long delays are likely to be TCP timeouts. A tcpdump or strace would probably provide more useful diagnostic information for you.
From: Matt Palmer
2013-10-02 14:34
Hi Kal, I’m afraid I don’t know of any standard tool that will check that a given SSL session can be used concurrently. It wouldn’t be hard to do with a couple of lines of $SCRIPTING_LANGUAGE.
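For what it’s worth, something along these lines would do the job, if you can stomach openssl s_client for the test (grab a session, then replay it from several clients in parallel):

    openssl s_client -connect HOSTNAME:443 -sess_out /tmp/sess.pem < /dev/null > /dev/null 2>&1
    for i in 1 2 3 4; do
        (openssl s_client -connect HOSTNAME:443 -sess_in /tmp/sess.pem < /dev/null 2> /dev/null \
            | grep -E '^(New|Reused)') &
    done
    wait

If the shared cache is coping with concurrent lookups, every one of those should report "Reused".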
From: Frederic Sagnes
2014-05-05 22:09
FWIW, here are the commands for OpenSSL:
$ openssl s_client -connect github.com:443 -sess_out session.pem | grep "Cipher is"
$ openssl s_client -connect github.com:443 -sess_in session.pem | grep "Cipher is"
If the second command says “Reused” instead of “New” you’re good.