I generally subscribe to the attitude that premature optimizations are evil, but I strongly believe that a robust caching strategy should evolve alongside the rest of the system. Waiting too long makes it hard to cleanly and thoughtfully add caching. Besides, in my experience, a considered caching strategy generally means I worry less about performance in other areas – especially data access and data modelling. In other words, I can build those complex parts for maintainability, as opposed to having to worry about the cost of each individual query.
.NET developers are pretty cache-savvy – thanks in large part to the powerful System.Web.Caching namespace and ASP.NET’s simple-to-use OutputCaching capabilities. For that reason, and the fact that it tends to be very application specific, I don’t want to go over how to decide what to cache, how to deal with sync issues, updates and so on. Instead, I specifically want to talk about Memcached.
You’re probably already familiar with Memcached – it’s a highly efficient distributed caching system. It’s used heavily by all the big web 2.0 players (in May 2007 it was revealed that Facebook relies on 200 dedicated Memcached servers, each a 16GB quad-core machine). Interest in Memcached from the .NET community has been relatively low, although over the last year more and more people have started talking about it. Frankly, if you’re doing anything that requires horizontal scaling, you’re seriously shooting yourself in the foot by overlooking it. It runs on Windows – although we run it on Linux, and there’s really no reason you shouldn’t learn to do that too!
Fundamentally, there are two problems with the built-in cache. First, it’s limited to the memory of a single machine, which happens to be shared with the rest of your application domain. Secondly, if you have two servers, each with their own in-memory cache, users are likely to see very weird sync issues. Memcached isn’t as fast as in-memory caching, but it will scale to a virtually unlimited amount of memory. There isn’t any redundancy or failover – the memory is simply spread across multiple servers.
The best part is that it literally takes seconds to get up and running. First, download a Windows build onto your development machine here (look for the Win32 binary of memcached). Unzip the package somewhere; I put mine in
c:\program files\memcached\. Next, from the command line, run
memcached -d install. This installs memcached as a service. You can run
memcached -h to see more command-line options. Finally, start the service (I also changed my startup type to manual, but that’s completely up to you).
The next step is to install the client library. I suggest Enyim Memcached from CodePlex. The project comes with a sample configuration file, which you should be able to easily incorporate into your web.config or app.config. While developing, configure only one server, 127.0.0.1, on port 11211 (the default). You’ll also need to add a reference to the two DLLs.
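To give you a feel for the shape of that configuration, here’s a minimal sketch. It’s modelled on the sample file that ships with Enyim Memcached – the section and type names below may vary between versions, so check them against the sample in the package rather than copying this blindly.

```xml
<configuration>
  <configSections>
    <sectionGroup name="enyim.com">
      <!-- Registers Enyim's config handler; verify the type name against your version -->
      <section name="memcached"
               type="Enyim.Caching.Configuration.MemcachedClientSection, Enyim.Caching" />
    </sectionGroup>
  </configSections>
  <enyim.com>
    <memcached>
      <servers>
        <!-- A single local server is all you need while developing -->
        <add address="127.0.0.1" port="11211" />
      </servers>
    </memcached>
  </enyim.com>
</configuration>
```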
Aside from that, you basically program against a simple API. You create an instance of
MemcachedClient (it’s thread-safe, so you can use a singleton, or re-create it each time since it’s inexpensive to create) and call methods such as Store, Get and
Remove, much as you would on the normal cache object. As I’ve blogged about before (here and here), I’m a fan of hiding all of this behind an interface to ease mocking and swapping.
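As a rough sketch of that interface approach: the names ICacheProvider and MemcachedCacheProvider below are my own invention, and the wrapper simply delegates to Enyim’s client – swap in whatever shape fits your project.

```csharp
using System;
using Enyim.Caching;
using Enyim.Caching.Memcached;

// A hypothetical cache abstraction - your code depends on this,
// not on MemcachedClient directly, so it's easy to mock or swap.
public interface ICacheProvider
{
    T Get<T>(string key);
    void Store(string key, object value, DateTime expiresAt);
    void Remove(string key);
}

public class MemcachedCacheProvider : ICacheProvider
{
    // MemcachedClient is thread-safe, so a single shared instance is fine
    private static readonly MemcachedClient _client = new MemcachedClient();

    public T Get<T>(string key)
    {
        return _client.Get<T>(key);
    }

    public void Store(string key, object value, DateTime expiresAt)
    {
        _client.Store(StoreMode.Set, key, value, expiresAt);
    }

    public void Remove(string key)
    {
        _client.Remove(key);
    }
}
```

In tests you’d hand your code a fake ICacheProvider; in production you wire up the Memcached-backed one.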
Here’s an example:
MemcachedClient client = new MemcachedClient();
client.Store(StoreMode.Set, "Startup", DateTime.Now, DateTime.Now.AddMinutes(20));
DateTime startup = client.Get<DateTime>("Startup");