Europe Sees More Hacktivism, GDPR Echoes, and New Security Laws Ahead for 2024


An evolving geopolitical landscape has impacted cybersecurity in Europe this year, posing specific challenges for safeguarding critical infrastructure and sensitive data.

The Ukraine war and the conflict in Gaza have led to a rise in hacktivism, and ransomware gangs have been quick to capitalize on new critical vulnerabilities to gain initial access to many organizations.

This is exacerbated by threat actors having more access to various means of automation, be it readily available command-and-control (C2) toolkits, generative AI (GenAI) to support their spear-phishing efforts, or commercially available ransomware from the Dark Web.

This means that critical infrastructure is more in the crosshairs of attackers than ever before, according to Max Heinemeyer, chief product officer at Darktrace.

“It’s good to see various parts of legislation acknowledging that, including the European NIS2 directive, as well as local legislation, like the IT-security law 2.0 in Germany, over the last few years,” he says.

Hacktivism and Critical Infrastructure

The conflict in Ukraine dominated the early part of the year, with the threat of nation-state cyberattacks and counterattacks potentially escaping the theater of war into the wider European cyber ecosystem, says Gareth Lindahl-Wise, CISO at Ontinue.

“Critical infrastructure will remain a target for both propaganda and genuine disruption purposes,” he says. “Sensitive data will continue to be actively sought for operational military advantage, criminal extortion purposes, and also for nation-state and commercial advantage.”

The European Union Agency for Cybersecurity (ENISA), the EU agency dedicated to achieving a high common level of cybersecurity across Europe, performs a yearly analysis of cybersecurity threats and publishes the results of its findings in its “Threat Landscape” reports.

According to ENISA spokesperson Laura Heuvinck, the agency recorded approximately 2,580 incidents during the reporting period from July 2022 to June 2023.

“To this total must be added 220 incidents specifically targeting two or more EU member states,” she says. “In most cases, top threats may be motivated by a combination of intentions, such as financial gain, disruption, espionage, destruction, or ideology in the case of hacktivism.”

The NIS2 Directive text includes provisions to raise the cybersecurity requirements for digital services used in critical sectors of the economy and society, including sectors such as waste management and manufacturing.

Hybrid Work and Its Security Challenges

Digital transformation is leading to increasing complexity for defenders, with the past few years bringing significant increases in remote and hybrid work, bring your own device (BYOD) policies, multicloud adoption, and Industry 4.0 trends, along with more digitalized supply chains, says Darktrace’s Heinemeyer.

“Staying on top of these complexities is the real challenge facing organizations,” he says. “It makes it increasingly difficult to understand their risks and know what they need to defend.”

Threat actors are quick to capitalize on this complexity, continuously looking to break into organizations through targeted phishing, Internet-facing vulnerabilities, and supply chain compromises.

“Organizations are adapting by using AI to break through this complexity and identify anomalous activity early on, and by consolidating visibility into fewer panes of glass,” Heinemeyer says.
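Anomaly detection of the kind Heinemeyer describes is typically delivered by commercial platforms, but the underlying idea can be illustrated with a minimal sketch. The example below is a hypothetical illustration rather than any vendor's implementation: it flags a device whose outbound traffic deviates sharply from its own historical baseline, and the device names, traffic figures, and z-score threshold are all assumed for the sake of the example.

```python
import statistics

# Hypothetical baseline: daily outbound bytes per device over recent weeks.
baseline = {
    "workstation-17": [120_000, 135_000, 110_000, 128_000, 140_000, 122_000],
    "db-server-02":   [900_000, 870_000, 910_000, 880_000, 905_000, 895_000],
}

# Today's observed outbound bytes (db-server-02 suddenly sends far more than usual).
observed = {"workstation-17": 131_000, "db-server-02": 9_400_000}

Z_THRESHOLD = 3.0  # flag anything more than 3 standard deviations above its own mean

def anomalous_devices(baseline, observed, threshold=Z_THRESHOLD):
    """Return devices whose current traffic is a statistical outlier vs. their history."""
    flagged = []
    for device, history in baseline.items():
        mean = statistics.mean(history)
        stdev = statistics.stdev(history)
        z = (observed[device] - mean) / stdev if stdev else 0.0
        if z > threshold:
            flagged.append((device, round(z, 1)))
    return flagged

print(anomalous_devices(baseline, observed))
# -> [('db-server-02', 553.0)]
```

A production system would model many more signals per device and learn baselines continuously, but the principle is the same: alert on deviation from an entity's own normal behavior rather than on known signatures.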

GDPR Impact and Enforcement

The General Data Protection Regulation (GDPR) — a comprehensive data protection law implemented by the EU in May 2018 — has really become the regulatory “hammer de rigueur,” with many multimillion-euro penalties being issued, says Coalfire vice president Andrew Barratt.

“The Digital Services and Digital Markets acts intend to create a level playing field but are sometimes seen as jabs at the large, predominantly US-based tech firms, to which the EU has no real response and is arguably losing ground to China,” he notes.

Ontinue’s Lindahl-Wise says GDPR has undoubtedly driven significant focus and energy among those who staff security functions to better understand the data they have, where it is, how it is secured, and who it is shared with.

“Outside of the ‘consent’ and ‘right to use’ elements, these should have been core basics for data security from the get-go,” he says. “There is a danger that commercially sensitive yet non-PII data is left as a poor relation in prioritization.”

In recent years, the EU has taken numerous measures to strengthen cybersecurity in Europe in a sustainable manner, says Jochen Michels, head of public affairs in Europe for Kaspersky.

Examples include the aforementioned NIS2 Directive, an EU-wide law establishing measures for a high common level of cybersecurity across the union, and the Cyber Resilience Act, which aims to safeguard consumers and businesses using digital products and is currently under negotiation but expected to take effect in early 2024.

Other efforts include the creation of the European Cybersecurity Skills Academy and the European Cybersecurity Competence Center, as well as the development of European cybersecurity certification schemes under the EU's comprehensive certification framework.

“These initiatives mainly focus on aspects such as supply chain security, transparency, security by design, and skills building and training,” Michels says.

While GDPR has led to increased scrutiny of data privacy and data processing (e.g., who is using our data, where, and for what purpose), NIS2 is driving European organizations to significantly step up their cyber maturity, Heinemeyer adds.

“NIS2 has been a major topic at European security conferences this year, such as it-sa, held in Nuremberg, Germany,” he explains. “Organizations are feeling the pressure to act and keep up with compliance.”

Securing AI/ML

Through the EU AI Act, which is currently in trilogue negotiations, the EU has reacted to potential cybersecurity risks from GenAI and AI/machine learning, Michels points out. An agreement on the act, and at least its tentative adoption, is expected by the end of 2023.

“In that act, cybersecurity is mentioned as an important element of the requirements to ensure that high-risk AI systems are trustworthy,” Michels explains. “In addition, there are several initiatives on AI and cybersecurity.”

For example, ENISA is working on mapping the AI cybersecurity ecosystem and providing security recommendations for the challenges it foresees. The agency also published the “Artificial Intelligence and Cybersecurity Research” report, which aims to identify the need for research on cybersecurity uses of AI and on securing AI.

“At the same time, the legislators have proposed regulation in this area based on risk assessment,” ENISA’s Heuvinck says.

Specifically, the proposed EU AI Act foresees cybersecurity requirements for high-risk AI systems to ensure compliance, identify risks, and implement necessary security measures.

“A security risk assessment should be conducted taking into account the design of the system and its intended purpose,” she adds.

There are two different aspects to consider about the cybersecurity impact of AI, Heuvinck notes. On one hand, AI can be exploited to manipulate expected outcomes; on the other, AI techniques can be used to support security operations, though this comes with its own risks.

For example, AI is used in ENISA’s Open Cyber Situational Awareness Machine, which automatically gathers, classifies, and presents information related to cybersecurity and cyber incidents from open sources.
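The implementation details of the Open Cyber Situational Awareness Machine are not covered here, so the snippet below is only a toy sketch of the gather-and-classify pattern Heuvinck describes: the headlines are invented, and a trivial keyword lookup stands in for the trained models and live open-source feeds a real system would use.

```python
# Toy illustration of the "gather, classify, present" pattern described above.
# Headlines and categories are made up for the example.
HEADLINES = [
    "Ransomware group claims attack on European logistics firm",
    "New phishing kit abuses QR codes to steal credentials",
    "Hacktivists deface municipal websites amid regional conflict",
    "Vendor patches critical remote code execution flaw",
]

CATEGORIES = {
    "ransomware": ["ransomware", "extortion"],
    "phishing": ["phishing", "credentials"],
    "hacktivism": ["hacktivist", "deface"],
    "vulnerability": ["patch", "flaw", "cve"],
}

def classify(headline: str) -> str:
    """Assign the first category whose keywords appear in the headline."""
    text = headline.lower()
    for category, keywords in CATEGORIES.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "uncategorized"

for headline in HEADLINES:
    print(f"[{classify(headline)}] {headline}")
```

In practice the classification step is where AI earns its keep, replacing brittle keyword rules with models that can handle multilingual sources and novel phrasing.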

“The questions raised by AI come down to our capacity to assess its impact, to monitor and control it, with a view to making AI cyber secure and robust for its full potential to unfold,” she says.

From her perspective, the importance of cybersecurity and data protection in every part of the AI ecosystem to create trustworthy technology for end users is undeniable.

“Cybersecurity is a given if we want to guarantee the trustworthiness, reliability, and robustness of AI systems, while additionally allowing for increased user acceptance, reliable deployment of AI systems, and regulatory compliance,” Heuvinck says.


