So this post has been long overdue. I want to talk a bit about the setup we have on my current project for doing automated deployments to our various environments. The concepts are relatively simple, but as always the devil is in the details.
We use a push model driven by an XML file for each environment. This file specifies the nodes/servers we are targeting and the roles assigned to each server, which gives us very granular control over what goes where. I say "nodes" rather than "servers" because not all the things we target are computers; the best example is the "virtual" targets we use to execute prep scripts before and after the computer nodes.
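To make the idea concrete, here is a sketch of what such an environment file could look like. The element and attribute names here are illustrative, not our actual schema:

```xml
<environment name="staging">
  <!-- "virtual" node: runs prep scripts before the real servers -->
  <node name="pre-deploy" type="virtual">
    <role name="prep" />
  </node>
  <node name="web01" type="server">
    <role name="web" />
    <role name="services" />
  </node>
  <node name="web02" type="server">
    <role name="web" />
  </node>
  <!-- another virtual node for post-deployment cleanup -->
  <node name="post-deploy" type="virtual">
    <role name="cleanup" />
  </node>
</environment>
```

Adding a server to an environment is then just a matter of adding a `node` element with the roles it should receive.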
I had to change how things work here a couple of times because of certain restrictions in our environment. At first I packed the deployment artifacts into a .zip file that was copied over to the target node along with a couple of supporting files, and the deployment happened locally on that machine. In the end we ended up mapping network drives, copying the files that way, and doing IIS restarts and other tasks remotely - yuck!
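The "copy over a network share" approach boils down to very little code. Here is a minimal Python sketch of that idea; the function names and paths are hypothetical, and in the real setup the target would be a UNC path to a mapped share on the node:

```python
import shutil
import subprocess
from pathlib import Path


def push_package(package_dir: str, target_share: str) -> list[str]:
    """Copy the deployment package to a target node's mapped share.

    Illustrative only: target_share would really be something like
    r'\\web01\deploy$' in our environment.
    """
    src = Path(package_dir)
    dst = Path(target_share)
    copied = []
    for item in src.rglob("*"):
        rel = item.relative_to(src)
        out = dst / rel
        if item.is_dir():
            out.mkdir(parents=True, exist_ok=True)
        else:
            out.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(item, out)  # preserves timestamps
            copied.append(str(rel))
    return copied


def restart_iis(node: str) -> None:
    # iisreset accepts a remote computer name - this is the "yuck" part,
    # since it needs the right permissions on every target box.
    subprocess.run(["iisreset", node, "/restart"], check=True)
```

The remote-restart step is what made this approach feel fragile compared to letting each node run its own deployment locally.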
A huge advantage of the run-the-package-locally approach is that each machine does its own work, so we can deploy to several nodes in parallel. If you need to deploy to tens or hundreds of nodes, this is the ticket - or maybe you should look into a serious deployment tool instead of rolling your own. Your mileage may vary if some of your nodes depend on others being deployed first, but you can handle that scenario by introducing node groups or a dependency tree.
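The node-group idea above can be sketched in a few lines of Python. This is an assumption about how I would structure it, not our actual deployment code: groups run in order, and the nodes inside a group deploy in parallel, so dependencies between groups are respected:

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable


def deploy_environment(node_groups: list[list[str]],
                       deploy_node: Callable[[str], str],
                       max_parallel: int = 4) -> list[str]:
    """Deploy group by group; nodes within a group run in parallel.

    node_groups is an ordered list, e.g. [["db01"], ["web01", "web02"]]:
    the database node finishes before any web node starts.
    deploy_node is whatever actually pushes to one node.
    """
    results = []
    for group in node_groups:
        # each group gets its own pool so the whole group completes
        # before the next group begins
        with ThreadPoolExecutor(max_workers=max_parallel) as pool:
            results.extend(pool.map(deploy_node, group))
    return results
```

A full dependency tree would just be a fancier way of producing that ordered list of groups (a topological sort of the nodes).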
They’re chasing me out of the local Starbucks now so I will have to finish this tomorrow with some code samples.