Recent ransomware attacks on European companies demonstrate that cyber threats are inevitable; organizations must prepare proactively rather than react after an incident occurs.
On January 11, 2025, a Dutch technology company specializing in sustainable waste processing was forced to halt operations due to a ransomware attack by the BlackBasta group, which encrypted over 500 GB of critical data. The same day, a Belgian manufacturing company suffered a similar attack, losing 600 GB of sensitive information. These incidents, while not directly impacting individual users’ personal data, disrupted supply chains and caused significant financial and reputational damage. Cybercriminals are targeting not just high-profile corporations, but critical industries. It’s no longer a matter of if an organization will be attacked, but when, and whether it is prepared.
Many organizations remain exposed because security is not fully integrated into their business processes, making a proactive, comprehensive strategy essential for long-term resilience.
Cybersecurity is a prerequisite for business continuity and should be a top priority in the boardroom, yet many companies remain vulnerable because security is not integrated into their organizational structure and business processes. Often, measures are taken only after incidents occur or under regulatory pressure. In contrast, organizations that adopt a proactive, holistic approach experience fewer security failures. This article explores the importance of an integrated cybersecurity strategy, common shortcomings, and how businesses can build long-term resilience. In a series of follow-up articles, we will define clear strategies and practical measures for building a foundation of resilience and information security in organizations.
Example of a missed opportunity: data breach due to lack of an integrated vision
The 2017 Equifax breach shows that isolated technical fixes—without integrated patch management, risk analysis, and clear ownership—can lead to massive financial and reputational damage.
A notable example of the risks of ad hoc security is the 2017 Equifax data breach. This American credit bureau exposed the personal data of roughly 147 million people due to a failure to patch a known software vulnerability (in the Apache Struts web framework) in time. Despite having technological defenses such as firewalls and intrusion detection systems in place, an investigation revealed that patch management and risk analysis were not properly integrated into business processes. The organization lacked visibility into where vulnerable systems were operating and had unclear ownership of responsibilities.
The incident showed that a handful of separate technical solutions offers no guarantee if there is no integrated policy. The lack of ownership, poor communication between departments, and insufficient visibility into critical data all contributed to the impact of this data breach. The cost to Equifax ran into the hundreds of millions of dollars, and the reputational damage was enormous.
Holistic security thinking: more than technology
Effective cybersecurity goes beyond technology, requiring continuous human awareness, process-based assurance, and robust governance to embed security into every facet of the organization.
A key takeaway from such incidents is that cybersecurity requires more than just technical tools like antivirus software, firewalls, or a one-time penetration test. While these technologies are valuable, they must be embedded in a system where people, processes, and policies are seamlessly aligned.
- The human factor. Many security incidents stem from human error—an inattentive click on a phishing link, a cloud misconfiguration, or unauthorized access to accounts. Awareness and continuous training are therefore indispensable. This goes beyond a one-off awareness program; it requires a culture in which employees dare to report incidents and near-incidents without fear of sanctions.
- Process-based assurance. Security should be a recurring agenda item: from product development and supplier selection to the daily use of systems. For example, integrate risk management into change processes so that security is not something that has to be arranged “afterwards” (a minimal sketch of such a change gate follows at the end of this section).
- Governance and ownership. Without clearly assigned responsibilities and a well-defined division of tasks, cybersecurity often gets stuck in good intentions. Therefore, appoint a CISO or security manager with a clear mandate and reporting structure. This ensures that risks and incidents receive attention at the board level.
Integrating these elements into a unified framework ensures that security is not an afterthought but a core component of strategy and operations.
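To make the process-based assurance point concrete, here is a minimal sketch (in Python, with hypothetical names and fields) of a change-approval gate that refuses to approve a change request until a risk assessment has been recorded and the residual risk is acceptable. It illustrates the idea, not a prescribed implementation.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ChangeRequest:
    """Hypothetical change request used to illustrate a security gate."""
    title: str
    owner: str
    risk_assessment: Optional[str] = None   # reference to or summary of the assessment
    residual_risk: Optional[str] = None     # e.g. "low", "medium", "high"
    approvals: list = field(default_factory=list)

def approve_change(change: ChangeRequest, approver: str) -> bool:
    """Approve only when risk management was done up front, not 'afterwards'."""
    if change.risk_assessment is None:
        print(f"Rejected '{change.title}': no risk assessment attached.")
        return False
    if change.residual_risk == "high":
        print(f"Rejected '{change.title}': residual risk too high for standard approval.")
        return False
    change.approvals.append(approver)
    print(f"Approved '{change.title}' by {approver}.")
    return True

# Usage: a change without an assessment is blocked before it reaches production.
cr = ChangeRequest(title="Expose new customer API", owner="team-payments")
approve_change(cr, approver="security-officer")            # rejected
cr.risk_assessment = "RA-2025-014: threat model reviewed"  # hypothetical reference
cr.residual_risk = "low"
approve_change(cr, approver="security-officer")            # approved
```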
The role of laws and regulations
Evolving frameworks like the EU Cyber Security Act, Cyber Resilience Act, and NIS2 impose stricter obligations and guidance, emphasizing that organizations must move beyond mere compliance to meet higher security expectations.
In Europe, regulatory attention to digital security is growing considerably. Some important developments:
- EU Cyber Security Act. This regulation introduced a European framework for the certification of ICT products and services. Organizations can have their solutions assessed for security, creating a more transparent market.
- Cyber Resilience Act. This regulation enshrines the principle of ‘security by design’ throughout the entire life cycle of hardware and software. Manufacturers and suppliers remain obliged to fix vulnerabilities and inform users about them even after a product has been placed on the market.
- NIS2 (Network and Information Security Directive). The tightened NIS Directive expands the scope to more sectors (including the energy sector, healthcare, transport and critical digital services) and sets higher requirements for, among other things, incident reporting and the accountability of management.
- EU AI Act. The EU AI Act establishes a comprehensive framework for regulating artificial intelligence by setting standards for transparency, accountability, and risk management. It ensures that AI systems are developed and deployed in a risk-based, secure, ethical, and reliable manner, serving as a crucial bridge between traditional cybersecurity measures and emerging AI-specific challenges with an additional focus on data governance and data integrity.
These regulations impose additional obligations on organizations while also offering clearer guidance. However, companies that merely comply with regulatory checklists will find that the bar for security expectations continues to rise.
AI and data: the next wave of risks
AI introduces new challenges—from adversarial attacks and model poisoning to privacy breaches and bias—which must be managed through integrated security measures that align with overall organizational strategies.
Artificial Intelligence (AI) is revolutionizing industries, but it also introduces new cybersecurity challenges that organizations cannot afford to ignore. AI-powered systems process vast amounts of sensitive data, make critical decisions, and automate complex processes—making them attractive targets for cybercriminals. Without strong governance and security measures, AI can become both a target and a weapon in cyber warfare.
Bias in AI: A Security and Governance Issue
While AI bias is often discussed in ethical and social contexts, it is also a security and governance risk. Flawed AI decision-making can introduce hidden vulnerabilities, including:
- Fraud Detection Failures – If AI models used for fraud prevention favor certain patterns and overlook others, cybercriminals can exploit these biases to evade detection (a simple illustration follows this list).
- Security Threat Misclassification – AI-powered threat detection tools trained on biased datasets may fail to recognize novel cyber threats, making them ineffective against evolving attacks.
- Regulatory and Compliance Risks – The rise of the EU AI Act and other global regulations means that companies deploying AI must ensure transparency, accountability, and security compliance. Organizations that fail to govern and audit their AI models properly could face fines, reputational damage, or legal liability.
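As a simple illustration of how bias becomes a security issue, the hedged sketch below (plain Python with invented numbers) compares the detection rate of a hypothetical fraud model across two transaction segments; a large gap tells an attacker which segment to route fraud through.

```python
# Hypothetical detection outcomes for a fraud model, grouped by transaction segment.
# 1 = fraudulent transaction correctly flagged, 0 = fraudulent transaction missed.
outcomes = {
    "domestic_card": [1, 1, 1, 0, 1, 1, 1, 1, 1, 1],
    "cross_border":  [1, 0, 0, 1, 0, 1, 0, 0, 1, 0],
}

def detection_rate(flags: list[int]) -> float:
    """Share of known-fraud cases the model actually flagged (true positive rate)."""
    return sum(flags) / len(flags)

rates = {segment: detection_rate(flags) for segment, flags in outcomes.items()}
for segment, rate in rates.items():
    print(f"{segment:15s} detection rate: {rate:.0%}")

# A large gap is not just unfair: it tells an attacker which segment to abuse.
gap = max(rates.values()) - min(rates.values())
if gap > 0.20:  # illustrative threshold; set it according to your own risk appetite
    print(f"WARNING: {gap:.0%} detection gap between segments; review training data and features.")
```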
AI as a Security Risk
AI models are only as reliable as the data they are trained on. When facial recognition datasets consist mainly of images of certain population groups, the resulting model may perform poorly for other groups. This is not only harmful to the people affected, but it can also lead to reputational damage and legal consequences for the organization that deploys the AI.
- Deepfakes. Advanced AI models can generate realistic videos, audio, and images that make people appear to say or do things that never happened. This can be used for fraud, blackmail, or politically motivated disinformation. Organizations should invest in detection tools and employee awareness training to help identify deepfake content early.
- Adversarial Attacks. Attackers feed manipulated input into AI models to deceive them. For example, small perturbations in an image can cause an AI-powered surveillance system to misidentify threats. These types of attacks require additional checks on input data and periodic validation of the model (a minimal illustration follows this list).
- Model Poisoning. Cybercriminals inject malicious data into AI training sets, corrupting algorithms and leading to unreliable or dangerous outputs.
- Privacy and Data Breaches. Many AI models rely on vast amounts of sensitive personal and business data. If that data is insufficiently anonymized or improperly stored, a hack or misconfiguration can lead to serious privacy violations. The emphasis is therefore on data classification, strict access control, and encryption.
- AI-Powered Cyberattacks. Hackers are increasingly using AI to automate attacks, identify vulnerabilities faster, and bypass traditional security measures. AI-driven phishing campaigns, deepfake-based fraud, and automated penetration testing tools are already being used in the wild.
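To make the adversarial-attack risk above more tangible, here is a minimal, self-contained sketch of the idea behind gradient-based evasion on a toy linear classifier (the model, weights, and threshold are invented for illustration). Real attacks target deep models, but the mechanics are similar: nudge the input slightly in the direction that most reduces the model's confidence.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "threat classifier": score = w . x + b, a positive score means "threat".
w = rng.normal(size=16)
b = 0.0

def score(x: np.ndarray) -> float:
    return float(w @ x + b)

# Construct a clean input that the model flags as a threat with a modest margin (+1.0).
x = rng.normal(size=16)
x += (1.0 - score(x)) / float(w @ w) * w

# Gradient-based evasion: for a linear model the gradient of the score w.r.t. x is w,
# so subtracting epsilon * sign(w) lowers the score with a small, bounded perturbation.
epsilon = 0.2
x_adv = x - epsilon * np.sign(w)

print(f"clean score:       {score(x):+.2f} -> flagged as threat: {score(x) > 0}")
print(f"adversarial score: {score(x_adv):+.2f} -> flagged as threat: {score(x_adv) > 0}")
print(f"largest per-feature change: {np.max(np.abs(x_adv - x)):.2f}")
```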
Integrating AI Security into Holistic Cybersecurity Strategies
To secure AI-driven systems, organizations should integrate AI security into their broader cybersecurity and governance framework:
- Risk-Based AI Governance. AI models should be continuously monitored for bias, security flaws, and adversarial vulnerabilities.
- Data Protection and Access Controls. AI systems should be built with strong encryption, data anonymization, and strict role-based access controls (see the sketch after this list).
- AI-Specific Cybersecurity Testing. Conduct adversarial attack simulations, red teaming, and regular audits to ensure AI security mechanisms are resilient.
- Cross-Functional AI Security Teams. AI security should not be left to IT alone. Legal, compliance, security, and engineering teams must work together to govern AI models and enforce security best practices.
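As one concrete element of the data protection and access controls point, the sketch below shows a minimal role-based access check for an AI data platform. The roles, permissions, and resource names are invented for illustration; a real deployment would use the access-control features of the platform itself.

```python
# Minimal role-based access control (RBAC) sketch for an AI data platform.
# Roles, permissions, and resource names below are hypothetical.
ROLE_PERMISSIONS = {
    "data_scientist": {"read:training_data"},
    "ml_engineer": {"read:training_data", "write:model_registry"},
    "auditor": {"read:access_logs"},
}

ACCESS_LOG: list[tuple[str, str, bool]] = []

def is_allowed(role: str, permission: str) -> bool:
    """Check one permission against the role table and record the decision for auditing."""
    allowed = permission in ROLE_PERMISSIONS.get(role, set())
    ACCESS_LOG.append((role, permission, allowed))
    return allowed

# Usage: a data scientist may read training data, but may not publish models.
print(is_allowed("data_scientist", "read:training_data"))    # True
print(is_allowed("data_scientist", "write:model_registry"))  # False
print(ACCESS_LOG)
```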
A solid data governance structure is therefore essential: define what data you collect, where it is stored, who has access to it, and what security measures are needed. Organizations that implement AI without adapting their security framework risk exposure to new attack vectors. Legislation such as the EU AI Act also requires transparency, explainability, and documentation of AI systems. A holistic view of security is not a luxury but a necessity.
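To illustrate what such a data governance structure can look like in practice, here is a minimal sketch of a data-asset register; the classification levels, field names, and controls are assumptions chosen for the example, not a prescribed standard.

```python
from dataclasses import dataclass, field

@dataclass
class DataAsset:
    """One entry in a hypothetical data governance register."""
    name: str
    classification: str          # e.g. "public", "internal", "confidential"
    storage_location: str        # system or region where the data lives
    owner: str                   # accountable business owner
    access_roles: list = field(default_factory=list)
    controls: list = field(default_factory=list)   # e.g. encryption, retention

register = [
    DataAsset(
        name="customer_transactions",
        classification="confidential",
        storage_location="eu-west-1/postgres",
        owner="head-of-payments",
        access_roles=["fraud_analyst", "ml_engineer"],
        controls=["encryption-at-rest", "column-level-masking", "retention-13-months"],
    ),
    DataAsset(
        name="public_product_catalog",
        classification="public",
        storage_location="cdn",
        owner="marketing",
        controls=["integrity-monitoring"],
    ),
]

# Simple governance check: every confidential asset must at least be encrypted at rest.
for asset in register:
    if asset.classification == "confidential" and "encryption-at-rest" not in asset.controls:
        print(f"Gap: {asset.name} is confidential but not encrypted at rest.")
```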
Learning from highly regulated industries
Industries such as aviation and pharmaceuticals illustrate how rigorous, continuously monitored processes and comprehensive checklists lead to superior security and reliability, serving as valuable models for other sectors.
In industries where safety is literally of vital importance — such as aviation and the pharmaceutical industry — an integrated approach has been central for decades.
- Aviation. From preflight checklists to incident reporting, every step is procedurally recorded. Checklists here are not mere formalities, but instruments that make crucial steps visible and verifiable. Errors are detected quickly and can be addressed systemically.
- Pharmaceuticals. In the pharmaceutical sector, there are strict validation requirements for equipment and processes (‘Quality by Design’). Every link in the chain (from research to production) is documented and monitored. This builds a culture in which continuous improvement and thorough risk analysis are self-evident.
These sectors show that strict regulation and checklists are not obstructive by definition. On the contrary: used properly, they help to minimize risks and detect errors at an early stage. The key is that people really work with them in practice and learn from them, instead of just “officially” complying with the rules. This principle — embedding security in all layers of the organization — can also be applied within IT and information security.
From compliance to intrinsic security
Transitioning from basic regulatory compliance to an intrinsic security culture requires strong management commitment, risk-based approaches, effective use of checklists, elimination of silos, and continuous improvement.
How do you make the step from “minimum compliance” to an intrinsic security culture? A few points of attention:
- Management commitment. If management sees security mainly as a cost item or a precondition, employees will treat it that way. Only when management makes it clear that security is a strategic priority, and frees up resources (budget, people, time) for it, does the organization have a chance to take real steps.
- Risk-based working. Start with the question: “Where are our real crown jewels?” Focus security efforts on the data, processes, and systems whose loss or misuse would cause the greatest damage. This prevents indiscriminate investment in ‘random’ solutions.
- Use checklists properly. Standards like ISO 27001, the ETSI TR 103 305 series and frameworks like the NIST CSF provide a solid structure and help ensure nothing important is overlooked. The crux is that you understand why you tick off certain points and how each one makes your organization safer. Think of checklists as a tool, not as an end in themselves (see the sketch after this list).
- Eliminate silos. Cybersecurity extends beyond IT and requires a cross-functional approach. Legal aspects (contracts, compliance), HR (hiring procedures, training), facilities (physical security) and procurement (supplier management) all play a role. Set up a multidisciplinary working group or steering group that is broadly responsible for the integrated approach.
- Continuous improvement. Cybersecurity is never “finished”. Create a fixed cycle of monitoring, evaluating, reporting and adjusting. Perform incident and trend analyses to identify patterns. A structured Plan-Do-Check-Act (PDCA) cycle aligns with quality assurance methods, helping organizations stay ahead of emerging threats and regulatory changes.
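To show how a checklist can serve as a living tool rather than an end in itself, the sketch below keeps a tiny control register with a status and a review date and flags controls that are overdue for the “Check” step of a PDCA cycle. The control identifiers and descriptions are invented and only loosely inspired by frameworks such as ISO 27001 and the NIST CSF.

```python
from datetime import date

# Hypothetical control register: id, description, status, last review date.
controls = [
    {"id": "AC-01", "description": "Role-based access to critical systems", "status": "implemented", "last_review": date(2024, 11, 1)},
    {"id": "IR-02", "description": "Incident reporting procedure tested",  "status": "in_progress", "last_review": date(2024, 6, 15)},
    {"id": "BC-03", "description": "Backups restored in quarterly drill",  "status": "implemented", "last_review": date(2023, 12, 1)},
]

REVIEW_INTERVAL_DAYS = 365   # illustrative 'Check' cadence in a PDCA cycle

def overdue(control: dict, today: date) -> bool:
    """A control is overdue when it was not reviewed within the chosen interval."""
    return (today - control["last_review"]).days > REVIEW_INTERVAL_DAYS

today = date(2025, 1, 15)
for control in controls:
    flag = " <- review overdue" if overdue(control, today) else ""
    print(f"{control['id']} [{control['status']:<11}] {control['description']}{flag}")
```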
Conclusion: the way forward
A holistic cybersecurity strategy, treated as a continuous improvement process, is essential for reducing risks and ensuring organizational resilience, ultimately transforming security into a genuine success factor.
Many organizations have the intention to improve their security but struggle to translate it into daily practice. When incidents occur — whether the Equifax data breach, the Dutch and Belgian examples mentioned in the opening paragraph, or the countless ransomware, phishing, and denial-of-service attacks reported each year — we almost always see that one or more crucial facets were missing: ownership, insight into the most valuable data, process discipline, or awareness among employees.
Meanwhile, pressure is growing not only from laws and regulations but also from customers and business partners who demand that confidentiality and continuity be guaranteed. There is also a positive side: organizations that opt for an integrated approach and invest in people, process, and technology will find that compliance requirements are easier to fulfil. In addition, they reap the benefits of greater reliability and a stronger reputation.
“Holistic cybersecurity” is best seen as a continuous improvement process in which you keep learning from mistakes, new threats, and sector-wide insights. By looking at the lessons learned in, for example, aviation and pharma, or at the additional risks introduced by AI, you can strengthen the backbone of your own organization. This prevents security from being something you only arrange “afterward” or “on paper”, and turns it into a real success factor.