Scale Cheaply – Memcached

I generally subscribe to the attitude that premature optimizations are evil, but I strongly believe that a robust caching strategy should evolve alongside the rest of the system. Waiting too long makes it hard to cleanly and thoughtfully add caching. Besides, in my experience, a considered caching strategy generally means I worry less about performance in other areas – especially data access and data modelling. In other words, I can build those complex parts for maintainability, as opposed to having to worry about the cost of each individual query.

.NET developers are pretty cache-savvy – thanks in large part to the powerful System.Web.Caching namespace and ASP.NET’s simple-to-use output caching capabilities. For that reason, and because caching decisions tend to be very application-specific, I don’t want to go over how to decide what to cache, or how to deal with sync issues, updates and so on. Instead, I specifically want to talk about Memcached.

You’re probably already familiar with Memcached – it’s a highly efficient distributed caching system, used heavily by all the big Web 2.0 players (in May 2007 it was revealed that Facebook relies on 200 dedicated quad-core Memcached servers with 16GB of memory each). Interest in Memcached from the .NET community has been relatively low, although over the last year more and more people have started talking about it. Frankly, if you’re doing anything that requires horizontal scaling, you’re seriously shooting yourself in the foot by overlooking it. It runs on Windows – although we run it on Linux, and there’s really no reason you can’t learn to do that too!

Fundamentally, there are two problems with the built-in cache. First, it’s limited to the memory of a single machine – memory which happens to be shared with the rest of your application domain. Second, if you have two servers, each with its own in-memory cache, users are likely to see very weird syncing issues. Memcached isn’t as fast as in-process caching, but it will scale to a virtually unlimited amount of memory. There isn’t any redundancy or failover – just memory spread across multiple servers.

The best part is that it literally takes seconds to get it up and running. First, download a Windows build onto your development machine here (look for the win32 binary of memcached). Unzip the package somewhere – I put mine in c:\program files\memcached\. Next, from the command line, run memcached -d install. This will install memcached as a Windows service. You can run memcached -h for more command-line options. You’ll need to start the service (I also changed my startup type to manual, but that’s completely up to you).
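Put together, and assuming memcached.exe is on your PATH (you can also start the service from the Services console instead of `net start`), the install sequence looks roughly like this:

```
memcached -d install    :: register memcached as a Windows service
net start memcached     :: start the service
memcached -h            :: list the remaining command-line options
```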

The next step is to install a client library. I suggest Enyim Memcached from CodePlex. The project comes with a sample configuration file, which you should be able to easily incorporate into your web.config or app.config. While developing, configure only one server, on port 11211 (the default). You’ll also need to add a reference to the two DLLs.
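For reference, the resulting configuration ends up looking something like the sketch below – the section and type names here are from memory of Enyim’s sample config, so double-check them against the file that ships with the project:

```xml
<configuration>
  <configSections>
    <sectionGroup name="enyim.com">
      <section name="memcached"
               type="Enyim.Caching.Configuration.MemcachedClientSection, Enyim.Caching" />
    </sectionGroup>
  </configSections>
  <enyim.com>
    <memcached>
      <servers>
        <!-- one local server while developing; add more entries in production -->
        <add address="127.0.0.1" port="11211" />
      </servers>
    </memcached>
  </enyim.com>
</configuration>
```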

Aside from that, you basically program against a simple API. You create an instance of MemcachedClient (it’s thread-safe, so you can use a singleton – or re-create it each time, since it’s inexpensive to construct) and call Store, Get or Remove (or a few other useful methods) just as you would with the normal cache object. As I’ve blogged about before (here and here), I’m a fan of hiding all of this behind an interface to ease mocking and swapping.

Here’s an example:

MemcachedClient client = new MemcachedClient();
client.Store(StoreMode.Set, "Startup", DateTime.Now, DateTime.Now.AddMinutes(20));
DateTime startup = client.Get<DateTime>("Startup");
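If you want to hide the client behind an interface as mentioned above, a minimal sketch might look like the following (ICacheProvider and MemcachedProvider are illustrative names of my own, not types from the Enyim library):

```csharp
// A thin abstraction over the cache, so tests can mock it
// and the underlying provider can be swapped out later.
public interface ICacheProvider
{
    T Get<T>(string key);
    void Store(string key, object value, DateTime expiresAt);
    void Remove(string key);
}

public class MemcachedProvider : ICacheProvider
{
    // MemcachedClient is thread-safe, so a single shared instance is fine.
    private static readonly MemcachedClient _client = new MemcachedClient();

    public T Get<T>(string key)
    {
        return _client.Get<T>(key);
    }

    public void Store(string key, object value, DateTime expiresAt)
    {
        _client.Store(StoreMode.Set, key, value, expiresAt);
    }

    public void Remove(string key)
    {
        _client.Remove(key);
    }
}
```

The rest of the application then depends only on ICacheProvider, which keeps the Memcached-specific details in one place.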


6 Responses to Scale Cheaply – Memcached

  1. karl says:

I don’t think you can. But I must say, the SqlDependency thing has always been at odds with DDD in my mind. I agree it’s convenient and powerful, and possibly even the right approach for some applications (such as reporting).

However, rather than letting a change in data be the trigger for clearing the cache, I really think this ought to be caused by a given behaviour. In other words, it shouldn’t be because a column within the UserAddresses table was changed that the cache is cleared, but rather because user.Save() was called.

This is traditionally how other frameworks address dependency issues. They provide callbacks, such as AfterSave, which allow you to erase the cache (or, like Rails, they provide Sweepers which observe domain objects for changes, providing greater abstraction).

The other nice thing about this is that you’ve taken direct control over your caching strategy and aren’t tied to a database provider or a caching provider.

    It will be interesting to see if Microsoft’s memcached (Velocity) supports it though…

  2. DotnetShadow says:

    Hi there,

I was wondering if you could give an example of how you would achieve what SqlDependency does. For example, have a DB trigger that flushes the cache, or something?

    Regards DotnetShadow

  3. Greg Beech says:

Another free option, if you don’t want to rely on open source software (and there are many reasons why you might not, such as the lack of any support contract), is to re-host the ASP.NET cache in a Windows service as a distributed cache. It’ll take you about 30 minutes to set up:

    We run off a (not very) enhanced variant of the cache in this article.

  4. Carl,

Check out Cacheonix if you want to scale reliably and inexpensively :)

Cacheonix is in beta right now and we give a free license for each new bug you find.

  5. bradk says:

Running this on 64-bit CentOS servers with the Enyim client rocks. We run 4 servers with 16GB of cache each. Cacti has a nice monitoring plugin as well, so you can see what’s happening inside the cache – total items, evictions, etc.

  6. simone_b says:

Definitely – memcached is the standard for high-traffic websites that need to scale out. We’re using it very successfully on Windows with the client developed by Enyim. It’s fast and reliable.