If you can’t measure it, you can’t manage it, goes the adage. And if you can’t see it, you can’t measure it. The more complex a system, and the larger the number of component parts, the more important this becomes.
In the context of the modern cloud environment, there are multiple data streams that need to be monitored – and many different parties who need access to the data. The difficulty in doing so is exacerbated by the rate of change, the distribution of applications and data across multiple servers, the general IT skills gap and new architectures such as microservices.
Addressing the audience at Computing’s Cloud Live event on Wednesday, Matt Haberle, business operations manager EMEA at SaaS-based performance monitoring platform LogicMonitor, provided a long list of component factors that organisations – and particularly DevOps engineers – need to consider. A few of these are listed below:
- Applications and services – applications are made up of multiple component parts, many originating elsewhere. There may be databases, streaming protocols, APIs, web services and many other elements.
- On-premises infrastructure – networks (all layers), virtualisation software, storage, load balancers, security appliances, power and authentication services.
- Cloud providers – resources, reliability, availability, running costs, the health of the provider, new services added, auto-scaling and tag synchronisation.
- Connectivity protocols between on-premises and cloud such as Direct Connect.
- DevOps and infrastructure – automated code builds and deployment, configurations and environment creation.
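To make the idea of monitoring across such disparate components concrete, the sketch below aggregates per-component health checks into a single overall status. It is purely illustrative – the component names and checks are invented, and a real deployment would query the actual database, cloud provider API, load balancer and so on – but it shows the basic shape of rolling many signals into one view.

```python
from dataclasses import dataclass

@dataclass
class CheckResult:
    component: str
    healthy: bool
    detail: str = ""

def run_checks(checks):
    """Execute each check function and collect its result."""
    return [check() for check in checks]

def overall_status(results):
    """Roll individual results into a single status plus the failing components."""
    failing = [r.component for r in results if not r.healthy]
    return ("OK", []) if not failing else ("DEGRADED", failing)

# Hypothetical component checks; real ones would probe the database,
# cloud provider, load balancer, authentication service, etc.
def check_database():
    return CheckResult("database", True)

def check_api_gateway():
    return CheckResult("api-gateway", False, "p99 latency above threshold")

status, failing = overall_status(run_checks([check_database, check_api_gateway]))
print(status, failing)  # -> DEGRADED ['api-gateway']
```

In practice each category in the list above would contribute its own set of checks, and the aggregated view is what lets Dev, Ops and Security all look at the same picture.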
Many of these elements will have their own built-in monitoring systems, but end-to-end monitoring across the piece is vital if organisations are to truly understand what is happening in their hybrid-cloud and multi-cloud systems, argued Haberle. In a DevOps context, Dev, Ops and Security staff all need to be privy to the same information to avoid teams descending into a destructive blame game.
Given the number of factors and dependencies involved, this monitoring needs to be automated. The alternative is an increasingly heavy workload for the operations team, said Haberle, quoting LogicMonitor’s CIO: “Half of my IT career has been developing workarounds in infrastructure to cover deficiencies in developer architectures.”
“Automation is a requirement,” said Haberle. “All the analysts say automation is necessary to keep up with the rapid pace of change.”
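One common form this automation takes is generating monitor configurations from a service inventory rather than maintaining them by hand. The sketch below assumes a hypothetical inventory – in practice it might come from a CMDB, cloud provider tags or service discovery – and derives one monitor per service from defaults for its kind:

```python
# Hypothetical service inventory; real data would come from a CMDB,
# cloud provider tags, or service discovery.
inventory = [
    {"name": "checkout", "kind": "web", "url": "https://checkout.example.com/health"},
    {"name": "orders-db", "kind": "database", "host": "orders-db.internal", "port": 5432},
]

# Default check type and interval per kind of service (illustrative values).
DEFAULTS = {
    "web": {"check": "http_2xx", "interval_s": 30},
    "database": {"check": "tcp_connect", "interval_s": 60},
}

def build_monitors(inventory):
    """Derive one monitor config per service from its kind's defaults."""
    monitors = []
    for svc in inventory:
        cfg = dict(DEFAULTS[svc["kind"]])
        cfg["name"] = svc["name"]
        cfg["target"] = svc.get("url") or f'{svc["host"]}:{svc["port"]}'
        monitors.append(cfg)
    return monitors

for m in build_monitors(inventory):
    print(m["name"], m["check"], m["target"])
```

When a new service appears in the inventory, its monitor is created automatically – which is the point Haberle makes: at the pace systems now change, hand-maintained monitoring cannot keep up.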
“You can offset complexity through effective monitoring,” he said. “Effective monitoring provides you with confidence and certainty that your applications are ready before they are in production and before they’re customer or employee facing.”