Explosion of Observability Data from Cloud Reaches Tipping Point, Dynatrace Says



Companies have reached a tipping point where the volume and complexity of observability data (logs, metrics, traces, and events) created by cloud-native IT systems exceeds the value that companies are able to extract from it, according to a new report released by Dynatrace.

The advent of cloud-native systems built on technologies like Kubernetes has eased the burden on IT teams to stand up scalable applications quickly. But that front-end benefit comes at a significant back-end cost, as cloud-native systems generate much more observability data than the traditionally deployed systems they replace.
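
To make that concrete: each instrumented service in a cloud-native stack typically emits traces, metrics, and logs as separate signal streams. The sketch below uses the vendor-neutral OpenTelemetry Python SDK (one common instrumentation option; the report does not prescribe a toolchain), with a hypothetical service name, to show the kinds of signals that multiply across hundreds of pods.

```python
# A minimal sketch of two of the core telemetry signals (traces and metrics),
# using the OpenTelemetry Python SDK (pip install opentelemetry-sdk).
# The service name and attribute values are illustrative only.
from opentelemetry import trace, metrics
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import (
    PeriodicExportingMetricReader,
    ConsoleMetricExporter,
)

# Traces: one span per unit of work, exported in batches.
trace.set_tracer_provider(TracerProvider())
trace.get_tracer_provider().add_span_processor(
    BatchSpanProcessor(ConsoleSpanExporter())
)
tracer = trace.get_tracer("checkout-service")  # hypothetical service name

# Metrics: counters exported on a fixed interval.
metrics.set_meter_provider(
    MeterProvider(
        metric_readers=[PeriodicExportingMetricReader(ConsoleMetricExporter())]
    )
)
meter = metrics.get_meter("checkout-service")
requests_counter = meter.create_counter("http.server.requests")

# One request produces a span plus a metric data point; multiply by
# requests per second and by pod count to see how the volume grows.
with tracer.start_as_current_span("process-order") as span:
    span.set_attribute("order.items", 3)
    requests_counter.add(1, {"http.route": "/orders"})
```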

According to the State of Observability 2024 report, which was published yesterday by Dynatrace, companies are struggling to manage and analyze observability data in a timely manner. Specifically, the study found that 86% of technology leaders say cloud-native technology stacks produce an explosion of data that is beyond humans’ ability to manage, a 15% increase over two years.

The report, which is based on a survey of 1,300 CIOs and technology leaders in large organizations around the world, generated several other findings:

  • The average multi-cloud environment spans 12 different platforms, such as public clouds, infrastructure-as-a-service (IaaS) offerings, and on-prem systems;
  • 88% of survey respondents say the complexity of their technology stack has increased in the past 12 months;
  • 87% of technology leaders say multi-cloud complexity makes it more difficult to deliver outstanding customer experiences;
  • And 84% say it makes it harder to secure applications.

Kubernetes is the source of many of these issues. The Google-developed technology makes it a snap for administrators to quickly deploy and easily scale complex, multi-tiered applications in Docker containers. If the app needs to be moved to new infrastructure, Kubernetes makes it a breeze. The virtualization appears effortless to the administrator, but it masks a significant amount of technical complexity under the covers.
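
For a sense of that front-end simplicity, here is a minimal sketch using the official Kubernetes Python client; the deployment name and namespace are hypothetical.

```python
# A minimal sketch of scaling a workload with the official Kubernetes
# Python client (pip install kubernetes). The deployment name and
# namespace below are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # reads credentials from ~/.kube/config
apps = client.AppsV1Api()

# One call rescales a multi-replica, containerized application.
apps.patch_namespaced_deployment_scale(
    name="web-frontend",
    namespace="production",
    body={"spec": {"replicas": 10}},
)
```

Everything that call sets in motion, including scheduling, health checks, rolling updates, and service routing, also generates telemetry, which is where the back-end cost shows up.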

(Source: Dynatrace State of Observability 2024)

Dynatrace says only 13% of companies today are running mission-critical workloads on Kubernetes. But that usage is expected to jump to 21% over the next 12 months, ultimately reaching 35% in the next five years, as companies migrate critical systems like core banking applications and ERP systems to Kubernetes.

Companies have a variety of issues with Kubernetes, the survey found, including concerns around observability, security, cost management, user experience, and log analytics.

The average company uses 10 different observability and IT monitoring tools, the survey found. That creates a situation where too much time is spent maintaining the tools and preparing data for analysis, a problem cited by 81% of the IT leaders Dynatrace surveyed. The same percentage said they will look to reduce the number of tools they use over the next year.
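
The data-preparation tax is easy to picture: each of those 10 tools emits records in its own shape, and they have to be reconciled before any cross-tool analysis. Here is a hypothetical sketch, with made-up field names.

```python
# A hypothetical sketch of the "preparing data for analysis" burden: two
# monitoring tools report the same kind of event with different field names
# and timestamp formats, which must be normalized before any cross-tool query.
from datetime import datetime, timezone

def normalize(record: dict, source: str) -> dict:
    """Map tool-specific fields onto one common schema (illustrative only)."""
    if source == "tool_a":  # e.g., epoch-seconds timestamps, 'svc' field
        return {
            "timestamp": datetime.fromtimestamp(record["ts"], tz=timezone.utc),
            "service": record["svc"],
            "severity": record["level"].upper(),
            "message": record["msg"],
        }
    if source == "tool_b":  # e.g., ISO-8601 strings, 'serviceName' field
        return {
            "timestamp": datetime.fromisoformat(record["time"]),
            "service": record["serviceName"],
            "severity": record["severity"],
            "message": record["body"],
        }
    raise ValueError(f"unknown source: {source}")

print(normalize(
    {"ts": 1718000000, "svc": "api", "level": "error", "msg": "timeout"},
    "tool_a",
))
```

Multiply that mapping logic by 10 tools and dozens of record types, and the maintenance burden the survey describes becomes clear.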

“[T]he rise of more dynamic and distributed cloud-native technology stacks has unleashed a firehose of data that IT and security teams struggle to contain,” Dynatrace says in its report. “These modern environments generate data at a rate that is impossible for teams to cost-effectively capture and analyze using traditional practices and fragmented monitoring tools. Teams simply cannot manually query all data, from all sources, in context, to access precise insights in a timely manner.”

Many companies adopting cloud-native technology have also adopted AIOps practices and tools to cope with the data deluge. However, according to Dynatrace, the AIOps tools they’re using are largely outdated, as they’re based on “probabilistic and training-based learning models.” Newer AIOps tools that utilize “more precise and predictive” technologies (presumably like the ones Dynatrace sells) offer better results, the company says.
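
To ground the distinction, here is a minimal sketch of the “probabilistic and training-based” style the report calls outdated: fit a statistical baseline to historical metric values, then flag deviations past a threshold. The data and threshold are illustrative, and this is not Dynatrace’s method.

```python
# A minimal sketch of a probabilistic, training-based anomaly detector:
# learn a baseline (mean and standard deviation) from a training window,
# then flag values beyond a z-score threshold. Data and threshold are
# illustrative only; this is not Dynatrace's approach.
from statistics import mean, stdev

training_window = [102, 98, 101, 97, 103, 99, 100, 104, 96, 101]  # latency, ms
mu, sigma = mean(training_window), stdev(training_window)

def is_anomalous(value: float, z_threshold: float = 3.0) -> bool:
    """Flag values more than z_threshold standard deviations from baseline."""
    return abs(value - mu) > z_threshold * sigma

for latency in (99, 105, 160):
    print(latency, "anomalous" if is_anomalous(latency) else "normal")
```

A baseline like this drifts as traffic patterns change and must be refit, one reason such approaches struggle at cloud-native scale.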

“Cloud-native architectures have become mandatory for modern organizations, bringing the speed, scale, and agility they need to deliver innovation,” Dynatrace CTO Bernd Greifeneder says. “These architectures reflect a growing array of cloud platforms and services to support even the simplest digital transaction. The huge amount of data they produce makes it increasingly difficult to monitor and secure applications. As a result, critical business outcomes like customer experience are suffering, and it is becoming more difficult to protect against advanced cyber threats.”

Related Items:

GenAI Doesn’t Need Bigger LLMs. It Needs Better Data

Data Observability in the Age of AI: A Guide for Data Engineers

Companies Drowning in Observability Data, Dynatrace Says
