I'm creating a cloud service with a worker role that runs some heavy processing in the background, and I would like a Redis instance running locally on that worker.
What I want to do is set up the worker role project so that the Redis instance is installed and configured when the worker is deployed.
The Redis database would be cleared on every job startup.
I've looked at the MSOpenTech Redis for Windows NuGet package, but I'm unsure how I would get it working on the worker role instance. Is there a smart way to set it up, or would it be done via command-line calls?
Thanks.
To install any software on a worker role instance, you'd need to set it up as a startup task.
You reference startup tasks in your ServiceDefinition.csdef file, in the <Startup> element, pointing to a command file that installs whatever software you want (such as Redis).
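As a rough illustration, a startup task entry might look something like the sketch below. The role name, the file name InstallRedis.cmd, and the endpoint are placeholders I've chosen for the example, not anything prescribed by the platform:

```xml
<!-- ServiceDefinition.csdef (sketch) -->
<ServiceDefinition name="MyCloudService"
                   xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WorkerRole name="MyWorkerRole" vmsize="Small">
    <Startup>
      <!-- taskType="background" keeps redis-server running for the life of the instance;
           executionContext="elevated" is needed if the script installs anything -->
      <Task commandLine="InstallRedis.cmd" executionContext="elevated" taskType="background" />
    </Startup>
    <Endpoints>
      <!-- internal endpoint so only role instances can reach Redis on 6379 -->
      <InternalEndpoint name="Redis" protocol="tcp" port="6379" />
    </Endpoints>
  </WorkerRole>
</ServiceDefinition>
```

```cmd
REM InstallRedis.cmd (sketch) - assumes redis-server.exe and redis.conf are included
REM in the role package ("Copy to Output Directory"), so they sit next to this script.
cd /d "%~dp0"
redis-server.exe redis.conf
```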
I haven't tried installing Redis in a worker role instance, so I can't comment on whether it will succeed. You'll also need to worry about opening the right ports (whether external- or internal-facing) and about scaling (e.g. what happens when you scale to two worker role instances, both running Redis?). My answer is specific to how you install software on a role instance.
More info on startup task setup is here.
I'm not expecting to get this marked as an answer, but I just wanted to add that this is a really bad approach for a real-world deployment.
I can understand why you might want to do this from a learning perspective; however, in a production environment it's a really bad idea, for several reasons:
- You cannot guarantee when a Worker Role will be restarted by the Azure Fabric Controller (and you're not guaranteed to get the underlying VM back in the same state it was in before it went down), so you could end up re-populating the cache simply because the role was restarted.
- In a real-world implementation of Redis, you would run multiple nodes within a cluster so you benefit from a) the ability to automatically split your dataset among multiple nodes and b) the ability to continue operating when a subset of the nodes is experiencing failures; running within a Worker Role gives you none of this. You also run the risk of multiple Redis instances (unaware of each other) appearing every time you scale out your Worker Role.
- You will need to manage your Redis installation within the Worker Role, and Worker Roles simply aren't designed for this. PaaS Worker Roles are designed to run the deployed Worker Role package and nothing else. If you really want to run Redis yourself, you should probably look at IaaS VMs.
I would recommend that you take a look at the Azure Redis Cache SaaS offering (see http://azure.microsoft.com/en-gb/services/cache/), which is a fully managed, highly available implementation of Redis. I use it on several projects and can highly recommend it.
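For reference, talking to the managed cache from a worker role is just an ordinary Redis client connection. A minimal sketch using the StackExchange.Redis NuGet package might look like this; the cache name and access key are placeholders you'd take from the Azure portal:

```csharp
// Minimal sketch, assuming the StackExchange.Redis NuGet package is installed.
// "mycache" and <access-key> are placeholders from the Azure portal.
using System;
using StackExchange.Redis;

class RedisCacheExample
{
    static void Main()
    {
        ConnectionMultiplexer redis = ConnectionMultiplexer.Connect(
            "mycache.redis.cache.windows.net:6380,password=<access-key>,ssl=True,abortConnect=False");

        IDatabase db = redis.GetDatabase();

        // Ordinary Redis commands work against the managed cache.
        db.StringSet("job:status", "processing");
        Console.WriteLine(db.StringGet("job:status"));
    }
}
```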