Windows Azure Caching 101

In Windows Azure there are three options for caching: the Shared Caching Service, In-Role Caching, and the Cache Service.

  1. “Shared Caching Service”: this was caching on a shared cluster, accessed using a secret key. It is a multi-tenant offering that enforces throttling behavior, which many Windows Azure customers did not like. It is being retired sometime around August 2014. Incidentally, it does not even appear in the current HTML portal, so many people are unaware that this mechanism exists.
  2. “In-Role Cache”

This is an offering where you can specify that a portion of your web role or worker role memory be used for caching purposes.

(Figure: web role used for co-located caching)

(Figure: worker role used for dedicated caching)

You can configure this in a cloud service project in Visual Studio:

(Figure: Visual Studio settings for caching)

In the web role properties, you can select the Caching section and turn on caching by checking the “Enable Caching” check box.

You can specify what percentage of the web role's memory to devote to the cache when using the “co-located role” model. If you select the dedicated role model (the second graphic above), the role is used exclusively for caching; note that dedicated caching is only supported on worker roles and cannot be configured on web roles. Billing for in-role caching is the same as regular compute web/worker role billing, and it is available in small (1.75 GB), medium (3.5 GB), large (7 GB) and extra-large (14 GB) sizes. Although the compute roles have the stated memory, please note that some of it is consumed by the OS.
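As a rough illustration of the sizing arithmetic for a co-located role, here is a minimal Python sketch. The OS/runtime overhead figure is an assumption for the example, not an official number:

```python
# Approximate memory (GB) per role size, as listed above.
ROLE_MEMORY_GB = {"small": 1.75, "medium": 3.5, "large": 7.0, "extra-large": 14.0}

def estimated_cache_gb(role_size, cache_percentage, os_overhead_gb=0.5):
    """Estimate co-located cache capacity: a percentage of what remains
    after a (hypothetical) OS/runtime overhead is subtracted."""
    usable = ROLE_MEMORY_GB[role_size] - os_overhead_gb
    return usable * cache_percentage / 100.0

# e.g. a medium role giving 30% of its memory to the co-located cache
print(round(estimated_cache_gb("medium", 30), 2))  # ~0.9 GB
```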

3. “Cache Service”

Cache Service is the latest offering and brings the best of both worlds. While in-role caching is available only from within the cloud service, the Cache Service exposes the cached data on a public endpoint, accessed with a secret key. It has a few other really nice aspects, such as high availability for the cached data: the data is replicated within a cluster from a failover perspective, so both the service and the data itself are highly available. Unlike the original Shared Caching Service it is not multi-tenant, so there is no throttling. There are also a few advanced features, such as notifications to the client when the cache changes, which make it a genuinely reliable and advanced cache offering.

Note on memcached:

memcached is a high-performance, distributed memory object caching system, generic in nature, but originally intended for use in speeding up dynamic web applications by alleviating database load. The system uses a client–server architecture. The servers maintain a key–value associative array; the clients populate this array and query it. Keys are up to 250 bytes long and values can be at most 1 megabyte in size. Clients use client-side libraries to contact the servers which, by default, expose their service at port 11211. Each client knows all servers; the servers do not communicate with each other.

If you have existing applications that use memcached, you can readily use them in Windows Azure. Windows Azure Caching supports almost every API that other memcached implementations support. As of this writing, memcached support works with the in-role caching mechanism; in the future it may be offered with the Cache Service too.
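Because the protocol is standard memcached, an existing client can simply be pointed at the role's memcached-compatible endpoint. Below is a minimal sketch using the python-memcached library; the host name is a placeholder, and it assumes the role has been configured to expose a memcached listener:

```python
import memcache  # python-memcached; install with "pip install python-memcached"

# Placeholder endpoint: substitute the host/port where your role exposes
# its memcached-compatible listener (11211 is the default port noted above).
client = memcache.Client(["myrole.cloudapp.net:11211"])

# Standard memcached operations apply: keys up to 250 bytes, values up to 1 MB.
client.set("greeting", "hello from the cache", time=300)  # 5-minute expiry
print(client.get("greeting"))
```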
