Caching: Varnish or Nginx?

TL;DR: Varnish lacks support for SSL and SPDY. Nginx handles both just fine, and has a very fast cache backed by either memcached or disk storage (ramdisk). Both can serve stale cache if your backend is down. But Nginx cannot write to the memcached storage directly – that has to be done by the application. Nginx also cannot purge its own cache without you compiling your own package.

Updated April 15, 2013

Varnish will not implement SSL anytime soon. As the author, Poul-Henning Kamp, puts it: «[…] making it a huge waste of time and effort to even think about it».

This means that if I want to use Varnish but need SSL, I need an SSL proxy in front. The most popular option at the moment seems to be Nginx. But Nginx is not just a reverse proxy; it also has a caching engine that, when backed by memcached, is blazing fast and on par with the speed of Varnish.
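A minimal sketch of such an Nginx setup, reading responses straight out of memcached and only hitting the backend on a miss. The addresses, ports and key format are illustrative assumptions, not from the original post:

```nginx
server {
    listen 80;

    location / {
        # Try memcached first; the memcached module returns 404 on a
        # miss, which we redirect to the backend.
        set            $memcached_key "$scheme$host$request_uri";
        memcached_pass 127.0.0.1:11211;
        error_page     404 502 504 = @backend;
    }

    location @backend {
        # Cache miss: let the application render the page (and, as
        # discussed below, the application must also store it).
        proxy_pass http://127.0.0.1:8080;
    }
}
```

Note that nothing in this config ever writes to memcached – that limitation is exactly what the rest of the post is about.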

When it comes to web performance, SPDY is the new cool kid on the block. I’m not going into detail on SPDY here, but a requirement of SPDY is SSL*. Poul-Henning Kamp has also been very clear that he is not in favor of SPDY at all.

So if I want to use SPDY, essentially I have to make all requests go through Nginx – which has a fast caching engine. So why am I still talking about Varnish?

Varnish has something called grace mode which keeps objects in cache even after their TTL has expired. If my backend is down, problematic or returns a 503 during planned maintenance, Varnish can keep serving the stale resources until my backend is healthy again.
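In Varnish 3 VCL (current at the time of writing), grace mode can be sketched roughly like this – the durations are illustrative, not recommendations:

```vcl
sub vcl_recv {
    if (req.backend.healthy) {
        # Backend is fine: accept only slightly stale objects.
        set req.grace = 30s;
    } else {
        # Backend is down: accept objects up to 6 hours past TTL.
        set req.grace = 6h;
    }
}

sub vcl_fetch {
    # Keep objects in cache for 6 hours after their TTL expires,
    # so they are available to serve in grace mode.
    set beresp.grace = 6h;
}
```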

As Kaspars Dambis pointed out to me, Nginx has the same possibility to serve stale cache when the backend is problematic.
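The Nginx equivalent is the `proxy_cache_use_stale` directive. A sketch, assuming a cache zone named `my_zone` has already been defined with `proxy_cache_path` (zone name and upstream are placeholders):

```nginx
location / {
    proxy_cache my_zone;

    # Serve a stale cached copy when the backend errors out, times
    # out, or answers with one of these status codes.
    proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;

    proxy_pass http://backend;
}
```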

I’ve been a huge fan of Varnish in the past, but right now it looks like Nginx also might be a good candidate. Except for one, IMHO, huge issue:

Nginx doesn’t have any means of writing to memcached – it simply cannot put anything in the cache. This means that the logic of your caching layer leaks into your application layer: each of your application endpoints will have to implement a way of storing to memcached using the same key logic that Nginx uses to retrieve the cache. I’m seeing maintenance mayhem here. If your application is distributed across multiple nodes, you will find yourself in knee-high maintenance shit pretty fast.
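To make the duplication concrete, here is a sketch of the application side, assuming Nginx builds its key as `set $memcached_key "$scheme$host$request_uri";`. The function name and the key format are illustrative assumptions:

```python
def memcached_key(scheme: str, host: str, request_uri: str) -> str:
    """Mirror of the (hypothetical) Nginx directive:
        set $memcached_key "$scheme$host$request_uri";
    Every endpoint that writes to memcached must rebuild this exact key."""
    return f"{scheme}{host}{request_uri}"

# Change the key format in nginx.conf and forget to update one endpoint,
# and that endpoint's responses silently stop being served from cache.
key = memcached_key("https", "example.com", "/articles/42?page=1")
# e.g. client.set(key, rendered_html, expire=300)  # pymemcache or similar
```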

If you only have one application, with one endpoint, Nginx caching might be usable for you – just remember that your cache logic lives in two places and has to be kept synchronized. I think I’ll still stick to Varnish for now, making my stack like this:
Nginx proxy (SSL & SPDY) -> Varnish -> Nginx/Apache
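The front Nginx in that stack is then just an SSL/SPDY terminator. A sketch, using the `spdy` listen parameter available in Nginx 1.4 at the time; certificate paths, hostname and the Varnish port (6081 is Varnish’s default) are illustrative:

```nginx
server {
    listen 443 ssl spdy;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    location / {
        # Hand everything to Varnish; it does all the caching.
        proxy_pass http://127.0.0.1:6081;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```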

Now add PHP-FPM and MySQL to that, and you’ll have a helluva stack to debug when something unexpected happens, but at least it’s pretty clean.

* It is in theory technically possible to use SPDY without SSL, but that does not hold up in any real-world scenario.

3 Comments

    1. Even more so since Varnish 4, which handles the thundering herd very elegantly, came out. The same concept can be used to refresh the content e.g. every 10 seconds, but delivering stale content up to a week old. Always fast, always updated.

Comments are closed.