How NASDAQ makes strategic infrastructure decisions
For some time, NASDAQ has been conservative in its approach to deployment automation, and to automation in general, including moving to the cloud, says Bhavani Yellapragada, AVP of DevOps. The company wasn’t opposed to the concept of the cloud so much as it was apprehensive about rapid change, he explains, and that only began to shift very recently.
“This is the year where people are beginning to realize that cloud does not necessarily mean a lack of security,” he says. “Traditionally, utility and security and risk were all decoupled. People thought that if you moved too fast, you were risking something. People now find that security, utility, and change can all coexist.”
While it’s never the perfect time to switch, waiting for all the stars to align is a mistake: unless you’re planning a complete overhaul of your product road map, you should be able to move to hyperconverged infrastructure (HCI) immediately. There will, of course, be more tactical decisions around commitments you’ve made to your customers, or, for instance, a quarterly release cycle that you don’t want to interrupt. But the faster you move away from error-prone manual ways of doing things, the better off you are.
“Even if there is some cost to change, it’s still better than making mistakes today,” he says. “But you know for a fact that if you go the manual route, you’re making mistakes now. I would not hedge bets on something that’s only speculative, as opposed to a hard fact that I know — that automation is the only way to scale, the only way to get consistent predictable results.”
And virtualization on top of that is a no-brainer, he adds, not only as a cost saver but as the foundation for automating infrastructure provisioning.
“I press a button, I get a fresh environment, and it also means I have the capability to kill the old environment,” Yellapragada says. “And you also gain the ability to quickly support surge traffic. The only way to scale is virtualization.”
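Yellapragada doesn’t name the tooling behind that button, but the pattern he describes is ordinary infrastructure-as-code. Below is a minimal sketch of the provision-then-retire flow, assuming AWS and the boto3 SDK; the AMI ID, instance type, and function names are placeholders for illustration, not NASDAQ’s actual setup.

```python
# Sketch only: a "push-button" environment using AWS and boto3.
# Assumes AWS credentials and a default region are already configured;
# the AMI ID and instance type are placeholders.
import boto3

ec2 = boto3.resource("ec2")

def provision_environment(ami_id: str, instance_type: str = "t3.medium") -> str:
    """Launch a fresh environment (here, a single instance) and wait until it is running."""
    instance = ec2.create_instances(
        ImageId=ami_id,
        InstanceType=instance_type,
        MinCount=1,
        MaxCount=1,
    )[0]
    instance.wait_until_running()
    return instance.id

def retire_environment(instance_id: str) -> None:
    """Kill the old environment once the new one is serving traffic."""
    ec2.Instance(instance_id).terminate()

# new_env = provision_environment("ami-0123456789abcdef0")  # placeholder AMI ID
# retire_environment(previous_env_id)                       # ID saved from the last run
```

The same pattern is what makes surge traffic manageable: scaling out is just provisioning more of the same environment, which is only realistic once the steps are scripted rather than manual.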
Most companies have had virtualization in place for quite a while, but if you’re now moving to HCI, you need to make sure the two integrate; otherwise the move becomes cost-prohibitive.
You need to know two things, he says. The first is how readily the solution lends itself to extension and customization. The second is how easy it is to ramp up on the solution and to integrate it with your existing tool set.
Additionally, a number of features are essential when choosing a new application, Yellapragada says.
“If you talk to anybody in my role, or who’s in DevOps, they’ll tell you the three things that are most important to them,” he says. “Agility: how fast can you set it up? Security: does it comply with the infosec [information security] guidelines we have? And the third is consistency: is it the same across all environments?”
At its most basic, he says, a tool is a tool. It comes out of the box as a general-purpose solution, which you then customize for your needs and your environment, whether that’s the cloud or a data center.
“One of the first guidelines for choosing any tool is, will I be able to use it, say, with any cloud provider?” he says. “The main cloud provider a couple of years ago would have been AWS. So am I able to use the same tool both in AWS and in our own data centers?”
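The article doesn’t describe how that portability test is applied in practice, but one common way to keep the same tooling usable in AWS and in your own data center is a thin abstraction between the workflow and the backend. Here is a rough sketch of that idea; the class and method names are purely illustrative, not any vendor’s API.

```python
# Sketch of the portability test: one interface, interchangeable backends.
from abc import ABC, abstractmethod

class Provisioner(ABC):
    """The deployment workflow talks to this interface; only the backend changes."""

    @abstractmethod
    def create_environment(self, spec: dict) -> str: ...

    @abstractmethod
    def destroy_environment(self, env_id: str) -> None: ...

class AwsProvisioner(Provisioner):
    def create_environment(self, spec: dict) -> str:
        raise NotImplementedError("call the AWS APIs here")

    def destroy_environment(self, env_id: str) -> None:
        raise NotImplementedError("call the AWS APIs here")

class DataCenterProvisioner(Provisioner):
    def create_environment(self, spec: dict) -> str:
        raise NotImplementedError("call the on-prem virtualization APIs here")

    def destroy_environment(self, env_id: str) -> None:
        raise NotImplementedError("call the on-prem virtualization APIs here")

def deploy(provisioner: Provisioner, spec: dict) -> str:
    """Identical whether it targets AWS or the in-house data center."""
    return provisioner.create_environment(spec)
```

A tool that passes the test is one whose workflows sit above an interface like this, so swapping the backend does not mean rewriting the pipeline.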
Another consideration is customization, which typically involves a scripting language: which language does the tool require, and is it a modern one? The third consideration is potentially the most important of them all: is the solution you’re choosing extensible?
“I know what I know now of my infrastructure needs,” he says. “But a true strategic leader will never pick a solution which fits the needs of today. These tools all scale, but extensibility is key.”
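“Extensible” in this sense usually means the tool exposes hooks so tomorrow’s requirements can be bolted on without reworking its core. A generic sketch of that extension-point pattern follows; the hook names and functions are hypothetical, not tied to any specific product.

```python
# Sketch of an extension-point pattern: new behavior is registered, not patched in.
from typing import Callable, Dict, List

HOOKS: Dict[str, List[Callable]] = {"pre_deploy": [], "post_deploy": []}

def register(hook: str) -> Callable:
    """Decorator that lets teams add behavior the tool never shipped with."""
    def wrapper(fn: Callable) -> Callable:
        HOOKS[hook].append(fn)
        return fn
    return wrapper

@register("post_deploy")
def notify_infosec(env_id: str) -> None:
    # A hypothetical requirement added later, without touching the deploy logic itself.
    print(f"deployment of {env_id} logged for security review")

def deploy(env_id: str) -> None:
    for fn in HOOKS["pre_deploy"]:
        fn(env_id)
    # ... the tool's built-in deployment steps would run here ...
    for fn in HOOKS["post_deploy"]:
        fn(env_id)
```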
Scale also figures into cost considerations, particularly license structure, and you need to project how easy it is to upgrade.
For instance, if you choose a license that’s sold per server, and tomorrow you acquire four more companies in different geographic locations, each with its own data center, he says, you have a problem: you now have to buy a license for every additional server. Instead, look for something like an enterprise license, with an initial flat fee for a set number of servers and additional flat fees for extending beyond that number.
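To make the licensing math concrete, here is the comparison with entirely made-up numbers (neither NASDAQ’s nor any vendor’s real prices): a per-server license versus an enterprise license with a flat base fee plus flat expansion fees.

```python
# Illustrative licensing math only: every figure below is invented.
def per_server_cost(servers: int, price_per_server: float = 2_000.0) -> float:
    """Per-server license: cost grows linearly with every server added."""
    return servers * price_per_server

def enterprise_cost(servers: int, base_fee: float = 50_000.0, included: int = 50,
                    expansion_fee: float = 10_000.0, block: int = 25) -> float:
    """Enterprise license: flat base fee for a set number of servers,
    plus a flat fee for each additional block of servers."""
    if servers <= included:
        return base_fee
    extra_blocks = -(-(servers - included) // block)  # ceiling division
    return base_fee + extra_blocks * expansion_fee

# Day one: 40 servers. After acquiring four companies with their own data centers: 200.
for n in (40, 200):
    print(f"{n} servers: per-server ${per_server_cost(n):,.0f}, enterprise ${enterprise_cost(n):,.0f}")
```

On these invented figures, growing from 40 to 200 servers multiplies the per-server bill five-fold, while the enterprise bill grows only by its flat expansion fees.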
“If you’re speaking very realistically, cost underlies everything,” he says. “It’s not about how much it costs now, but tomorrow, what would it cost me to make a change?”
The tactical decision, then, is governed by immediate cost. The strategic decision looks at the long term, Yellapragada says, and it depends on two things: vendor lock-in, and what it would cost to make a change.
For instance, Amazon’s AWS was king for a long time, and two years ago Microsoft’s Azure and Google’s cloud platform were nowhere in play. But last year Microsoft caught up, and now each service comes with its own bells and whistles, which complicates every decision. If you pick a vendor that works with AWS, would that vendor also work with Microsoft? What if a vendor has picked a fight with a cloud service, or vice versa?
“At the end of the day, this space is evolving so much,” says Yellapragada. “In the DevOps space, you have to be sprinting just to maintain your status quo. If you stand still and try to maintain the status quo, you’re gone.”