This past week was a whirlwind of activity, as we scrambled at work to spin up a new virtual environment in response to a last-minute urgent need. Now that the dust has settled, I wanted to think back and reflect on things I could have done better, as I often do after things go off the rails a bit. And they most certainly did go off the rails this week. I've said it before: Azure (and the cloud in general) is a game changer and, I believe, a net positive force on our industry, but it also requires huge shifts in how we go about our work, and it sweeps away swaths of knowledge that are no longer valid.

Take disk layout, for example. Traditionally, I've always liked to have separate disks for system, user, and temporary databases, with an additional one for backups. In addition, I generally size very conservatively for the sake of keeping costs low. But in Azure, virtual machines are limited in how many disks you can attach based on the VM size, so we have to be very careful about leaving room for expansion. This already bit me once when I tried to add more disks (after going through all the work of installation and configuration, of course), and boy did it stink to have to go back and redo everything.
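If you want to check those limits before you commit to a size, here's a minimal sketch using the current Az PowerShell module (assuming you've already run Connect-AzAccount; the region name is just an example):

```powershell
# List the VM sizes available in a region along with how many data disks
# each can hold, so the size you pick leaves room to add disks later.
Get-AzVMSize -Location "EastUS" |
    Sort-Object MaxDataDiskCount -Descending |
    Select-Object Name, NumberOfCores, MemoryInMB, MaxDataDiskCount |
    Format-Table -AutoSize
```

A couple of minutes spent here is a lot cheaper than rebuilding a server because you ran out of disk slots.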

As technology professionals, we're going to be pushed at times (who am I kidding, more like often) to move quickly on new technology, even when we are far from comfortable or familiar with it. And if we're good at our job, this is going to set off alarm bells in our heads as we think of all the terrible things that can go wrong, or that we don't even know could go wrong. Good DBAs have a bit of a paranoid streak in them, because they tend to be very risk averse. But at the same time, we have to find a way to move with the times, lest we risk being seen merely as naysayers and obstructionists, and be left behind.

When these pushes happen, there are a few strategies we can employ to ease our struggles and keep our risks to a minimum.

First, prove out everything you do in a test environment as much as possible. I have a lab in Azure that I use on an almost daily basis to try things out and understand the various pieces before attempting to implement them for real. Especially when we're dealing with new and unfamiliar technologies, experimentation and testing are key to ensuring success. And since anyone can bring up an Azure subscription and simply pay for whatever resources they use, there's really no excuse not to have a lab ready to power up.
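Keeping that lab cheap is mostly a matter of deallocating it when you're not using it. Here's a rough sketch with the Az module; "MyLabRG" is just a placeholder for whatever resource group holds your lab VMs:

```powershell
$labRg = "MyLabRG"

# Power down and deallocate every VM in the lab resource group when you're done,
# so you stop paying compute charges for idle machines.
Get-AzVM -ResourceGroupName $labRg |
    ForEach-Object { Stop-AzVM -ResourceGroupName $labRg -Name $_.Name -Force }

# Power the lab back up when you need it again.
Get-AzVM -ResourceGroupName $labRg |
    ForEach-Object { Start-AzVM -ResourceGroupName $labRg -Name $_.Name }
```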

Second, take meticulous notes on every action taken. After this week, I have pages upon pages of handwritten notes on things that didn't work, things that did, and all the steps in between. After the dust (hopefully) settles later this week, I intend to go back and organize these notes into more formal documentation that can be used again when the next need arises. It will also make it easier to build more automated processes, so that the next time isn't so labor intensive.

Pro tip: if you use PowerShell at all in your work (and if you don't, why don't you?!), the Start-Transcript cmdlet is a lifesaver. It records every command you run, along with its output. I used it this week and was able to turn the transcript into a working script that carries out all the actions I performed manually this time around. As I've said before, do it once, then automate it.
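The usage is about as simple as it gets; a minimal sketch (the log path below is just an example location):

```powershell
# Record every command and its output to a timestamped file for later review.
$logPath = "C:\Logs\AzureBuild_$(Get-Date -Format 'yyyyMMdd_HHmmss').txt"
Start-Transcript -Path $logPath

# ... run your build and configuration commands here ...

# Close out the transcript when you're done.
Stop-Transcript
```

The resulting file is ugly, but it's a faithful record of exactly what you did, which is all you need to start turning manual steps into a script.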

Finally, document any and all risks that you discover in writing, and make sure your boss is aware of them. You might think that you're being a negative ninny, and perhaps you are, but it's your job to make sure all the players are aware of potential issues. Note that this doesn't mean you should refuse to move forward in the face of these! Throughout my career I have pushed through all sorts of uncomfortable situations, simply because those above me decided that, after all consideration, moving forward was still the best option. I have disagreed with them, vehemently at times, but once the orders are given, it's my job to make the best of them. All we can do is make sure that our concerns are heard (and more importantly, understood).

When we raise these concerns, it's very helpful to include a list of possible mitigation strategies, along with how each would impact the project at hand. For example, let's say that you only have time to build a single stand-alone server, rather than a cluster or mirrored pair, and you're worried about incurring possible downtime. When you document this concern, note how long it would delay the project if you were allowed to bring up and configure a second server. Perhaps you still won't be allowed to delay the critical path of the project, but management may agree that, in parallel, your first priority is to get redundancy in place as soon as possible.

Let’s face it: change and challenge are inevitable in this business. When they come, we can complain and obstruct, or we can move forward with grace, make the best of the situation, and learn from our mistakes. Over time, we’ll get better, and we’ll be seen as an integral part of the continued success of the companies and clients we serve.
