The cybersecurity industry constantly says we need new tools to make our organizations secure. BYOD? You need mobile device management (MDM) and endpoint detection and response (EDR). Cloud? You need cloud configuration managers, hybrid observability tools, and specialized point solutions for managing and scanning exposed secrets, not to mention a lot more distributed web application firewalls. Kubernetes? You need a new set of tools that mirror older tools like linters, dynamic application security testing (DAST), static application security testing (SAST), scanners, and more. Now, there’s artificial intelligence (AI) — and chief information security officers (CISOs) and cybersecurity teams need tools such as scanning layers for AI-powered coding to address this emerging space. In short, tools rule.
Yet despite the constant accretion of new tools to solve new problems, the most common root cause of serious cybersecurity incidents remains failed processes. According to Gutsy’s 2023 State of Security Governance survey, which collected responses from more than 50 enterprise CISOs in August 2023, 33% of all security incidents can be identifiably traced to process errors. The true total may be much higher, given the complexity and multistage event chains of many incidents. Another clear sign that tools alone aren’t solving our cybersecurity problems is poor operationalization: 55% of all security tools are never put into operation or are not actively managed. Simply adding more tools is not the solution.
From Security Post-Mortem to Continuous Process Mining
To fix process failures, you must address the factors at the root of the problems. The only way to accurately identify those factors is to observe, record, and document the failed processes that led to the problems. To date, this has mostly meant poring over logs and conducting post-mortems after incidents. But examining only processes that have already failed is like looking for crime only under the streetlight — it ignores all the potential process failures that have not yet happened.
A new approach is required, one that scales to record and map myriad interactions and processes continuously across the enterprise. Enter process mining for cybersecurity. Process mining has been used in numerous industries for over a decade. From enterprise resource planning (ERP) systems to robotic process automation (RPA), where mapping a process is the first stage of deployment, capturing how humans interact with technology as they work through their jobs is a familiar strategy.
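To make the idea concrete, here is a minimal sketch of the core process-mining step: reconstructing a directly-follows graph from an event log. The event log, field names, and vulnerability-handling workflow below are invented for illustration; real process-mining products ingest far messier data.

```python
from collections import Counter, defaultdict

# Hypothetical event log: (case_id, activity, timestamp) tuples tracing the
# lifecycle of a vulnerability ticket. All names here are illustrative and
# not taken from any specific product.
events = [
    ("vuln-1", "detected", 1), ("vuln-1", "triaged", 2), ("vuln-1", "patched", 3),
    ("vuln-2", "detected", 1), ("vuln-2", "patched", 2),  # triage step skipped
    ("vuln-3", "detected", 1), ("vuln-3", "triaged", 2), ("vuln-3", "patched", 3),
]

def directly_follows(events):
    """Count how often activity A is directly followed by activity B per case."""
    by_case = defaultdict(list)
    for case_id, activity, ts in sorted(events, key=lambda e: (e[0], e[2])):
        by_case[case_id].append(activity)
    edges = Counter()
    for trace in by_case.values():
        for a, b in zip(trace, trace[1:]):
            edges[(a, b)] += 1
    return edges

for (a, b), n in sorted(directly_follows(events).items()):
    print(f"{a} -> {b}: {n}")
```

The rarely traveled edge (`detected -> patched`, skipping triage) is exactly the kind of process variation a post-mortem would miss but a continuously mined graph surfaces immediately.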
However, this approach has not been applied to cybersecurity, for a handful of reasons. First, analyzing and cataloging processes is tedious work that many cybersecurity and IT teams prefer to leave to auditors. Asking cybersecurity, IT, or networking teams to add it to their already heavy workloads of monitoring and securing infrastructure and software is unsustainable.
Second, while cybersecurity and audit teams have long relied on data collected by agents, that data is largely tied to events and changes in security tools, not to processes. This makes traditional process analysis a manual exercise, built painstakingly through interviews, email chains, and log reviews. Data generated by different tools and systems is not always clean or easy to normalize, making process analysis more complicated, time-consuming, and costly.
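As a rough illustration of why normalization is the hard part, the sketch below maps two invented raw events onto one common schema so they can sit on a single timeline. The field names and formats are hypothetical and do not correspond to any real tool's export format.

```python
from datetime import datetime, timezone

# Two hypothetical raw events from different tools; an EDR agent reports
# ISO-8601 strings, a ticketing system reports Unix epochs.
edr_event = {"host": "srv-01", "action": "quarantine", "time": "2023-10-05T12:00:00Z"}
ticket_event = {"asset": "srv-01", "status": "closed", "updated_epoch": 1696507260}

def normalize_edr(e):
    # Map the EDR's schema onto a shared (asset, activity, ts) shape.
    return {
        "asset": e["host"],
        "activity": e["action"],
        "ts": datetime.fromisoformat(e["time"].replace("Z", "+00:00")),
    }

def normalize_ticket(e):
    return {
        "asset": e["asset"],
        "activity": f"ticket_{e['status']}",
        "ts": datetime.fromtimestamp(e["updated_epoch"], tz=timezone.utc),
    }

# Once normalized, events from unrelated tools merge into one ordered timeline,
# the raw material for any process analysis.
timeline = sorted([normalize_ticket(ticket_event), normalize_edr(edr_event)],
                  key=lambda e: e["ts"])
```

Every new tool format means another adapter like these, which is why normalization dominates the cost of process analysis at enterprise scale.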
Why More CISOs Embrace Process Mining
Several changes are prompting companies to reconsider continuous, automated process mining for cybersecurity and technology governance workflows. On the technical side, lightweight, cloud-native technologies and infrastructure, combined with more sophisticated ways of normalizing data streams, have made it less resource intensive and costly to build effective process-mining products. At the same time, the growing recognition that tools are not the solution has led many CISOs to emphasize human factors over point solutions for the latest security threats.
Notably, the OWASP Top 10 has remained largely static for the past decade, even as incidents and Common Vulnerabilities and Exposures (CVEs) have hit record levels for each of the past five years. Savvy attackers recycle and recompile the same attack packages, knowing that what has worked in the past will probably work in the future. This clearly demonstrates that tools are not making companies safer. Something else must be done.
Another factor is the growing shortage of cybersecurity professionals, which is creating opportunities for younger workers to enter the field. To succeed, these less-experienced people need more education and support, including systems that help them learn in real time and guardrails that keep them from making catastrophic errors.
Finally, the impact of attacks preying on process errors has grown markedly worse. Casino company MGM and cleaning products company Clorox have recently reported that ransomware events will materially impact their revenues. In the case of MGM, the damage was over $100 million.
Even the savviest companies are prone to public and highly embarrassing process failures. The recent compromise of Okta’s support systems by bad actors using social engineering tactics is a classic example of process failure. It resulted in painful post-mortem blogs from prominent customers like Cloudflare and 1Password, along with broad negative media coverage that will remain on the company’s permanent record.
Focus on Helping Humans Rather Than New Threat Types
The best way to fix failed processes is not to hand human operators yet another tool. Rather, give them a process and a framework: a repeatable, logical way of thinking about their job (or specific parts of it). Technology teams need visibility into the processes they’re trying to follow, including all the variations that prevent them from getting the results they want, and they need a systematic, scalable, on-demand way to gain that visibility. What is not measured is not managed, and processes are no exception.
We love our tools, but to truly reduce risk and the number of successful attacks, we must start viewing security failures as a process problem rather than a technology problem. This is a profound shift that requires a different lens on security, but it is necessary to address the root cause of most cybersecurity problems. Tools may feel good and check the latest analyst quadrant box. But mining the process, educating the operators, and monitoring for process anomalies is the real solution.
About the Author
Aqsa Taylor, author of the “Process Mining: The Security Angle” e-book, is Director of Product Management at Gutsy, a cybersecurity startup specializing in process mining for security operations. A specialist in cloud security, Aqsa was the first Solutions Engineer and Escalation Engineer at Twistlock, the pioneering container security vendor acquired by Palo Alto Networks for $410 million in 2019. At Palo Alto Networks, Aqsa served as the Product Line Manager responsible for introducing agentless workload security and integrating workload security into Prisma Cloud, Palo Alto Networks’ Cloud Native Application Protection Platform. Throughout her career, Aqsa has helped many enterprise organizations from diverse industry sectors, including 45% of Fortune 100 companies, improve their cloud security outlook.