In a time of rapidly evolving digital threats, a culture of continuous improvement is crucial for effective information security. Companies cannot make do with a one-off risk inventory or a stack of security policies; they need to sharpen their processes constantly. The Plan-Do-Check-Act (PDCA) cycle - sometimes rendered as Plan, Execute, Check, Adjust - provides a structured approach to doing exactly that. The PDCA cycle is at the heart of many quality and security programmes, and is explicitly recommended in standards such as ISO 27001 for information security.
- Plan: establish goals and plans for improvements (e.g. new security measures) and define measurement criteria.
- Do: execute the plans - implement the control or change.
- Check: measure and evaluate the results; are the changes having the intended effect?
- Act: embed successful improvements as the new standard, or adjust where necessary.
Crucially, this is not a one-off process but a continuous cycle. Like a circle, PDCA never ends - after the Act phase, you start planning again. The result is an organisation that gets a little better every day.
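To make the cyclical character concrete, here is a minimal, illustrative Python sketch of the loop. The phase functions and the phishing metric are invented placeholders for real organisational activities, not a framework or a prescribed implementation:

```python
# Minimal, illustrative PDCA loop for a security programme.
# The phase functions and metrics are invented placeholders.

def plan():
    """Set a goal, choose a measure and define the measurement criterion."""
    return {"goal": "reduce phishing click rate", "target": 0.05}

def do(improvement):
    """Implement the planned control or change."""
    print(f"Rolling out measure for: {improvement['goal']}")

def check(improvement):
    """Measure the result against the criterion defined in Plan."""
    measured_click_rate = 0.08  # placeholder for a real measurement
    return measured_click_rate <= improvement["target"]

def act(effective):
    """Embed what works; adjust what does not."""
    print("Standardise the measure." if effective else "Adjust the plan and try again.")

for cycle in (1, 2, 3):  # in reality this loop never ends
    print(f"--- PDCA cycle {cycle} ---")
    improvement = plan()
    do(improvement)
    act(check(improvement))
```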
Continuous improvement with PDCA in information security
Information security benefits from the same continuous-improvement philosophy as other disciplines. The General Data Protection Regulation (GDPR), for example, even mentions this mechanism! For information security, the PDCA (Plan-Do-Check-Act) cycle is an essential part of the ISO 27001 information security standard. Organisations implementing ISO 27001 are required to operate an information security management system (ISMS) based on the PDCA cycle. This system helps organisations continuously improve and adapt their information security measures to new risks and requirements.
Threats change constantly and new vulnerabilities emerge; an organisation that stands still quickly falls behind. The PDCA cycle enforces a structural, iterative approach: after security measures are implemented, checking follows (e.g. via audits, monitoring or incident analyses), and policies and measures are then adjusted based on the findings. This avoids getting stuck at a mediocre security level or leaving known vulnerabilities unresolved. Expert analyses show that without such a feedback loop, many teams stall after implementing a few “quick wins” - people are then unsure which additional controls add value and which are mostly bureaucracy. By measuring the results of changes (the Check phase), the team learns which measures are really effective. This allows them to focus on security measures that demonstrably deliver value, in line with business goals.
Giving this real substance means regular risk assessments, internal audits, management reviews and the implementation of improvement actions. Ideally, information security then becomes not a one-off project but a continuous process with a fixed place in the organisation (e.g. via a permanent security team or committee).
PDCA on paper vs in practice
Although the PDCA cycle sounds simple on paper and is often neatly spelled out in policy, in practice it does not always go well. Some organisations do have an ISMS manual and an annual audit plan, but do not really use the cycle to improve their security. A common pitfall is that information security degenerates into a tick-box exercise to meet compliance, without affecting day-to-day operations. It sounds good on paper, but if the PDCA cycle is only followed pro forma, it remains linear thinking instead of true iteration.
You often hear that IT security should not become a purely administrative exercise: it should strengthen the business rather than be a paper task that merely complies with guidelines. What strikes me is that this argument is then used as a reason to let things slide. In other words, a policy can be beautifully written - if there is no concrete follow-up on controls and actions, no improvement occurs and the organisation does not really become more secure. Worse still, you keep repeating the same mistakes, using the same ramshackle solutions, and treating information security as a cost or a nuisance.
Practical examples confirm this. I regularly see companies that do have policy documents and procedures (e.g. for incident response or access management), but in the rush of everyday operations these are not strictly followed or evaluated. In auditing, we distinguish between intent, existence and operation for a reason. In practice, there is still sometimes a gap between intent on the one hand and existence and operation on the other.
Regular testing of measures (e.g. via internal pen tests or simulations) is then omitted, or findings do not lead to adjustments. This undermines the PDCA principle. An annual audit, for example, may produce the same findings year after year without the underlying issues being structurally fixed - a sign that the Act phase is failing. Without a living improvement cycle, security programmes quickly become static and vulnerable. Making PDCA really work requires commitment from both management and implementers: management must back improvements and free up time and budget for them, while the implementation team must actively measure and learn. Only then will the cycle come to life and result in ever-increasing information security quality.
Learning from other sectors: aviation as an example
Other industries with a strong focus on safety have been using continuous-improvement mechanisms for decades. The aviation industry is considered a prime example: flying today is extremely safe, which is partly due to systematic improvement cycles after incidents and proactive measures. Some telling examples:
- Crew Resource Management (CRM) - In the 1970s and 1980s, aviation introduced CRM training for cockpit crews in response to several serious accidents involving poor teamwork and communication. Previously, the captain was considered the ultimate authority, which sometimes led to mistakes because co-pilots hesitated to intervene. CRM taught cockpit teams better communication, shared decision-making and cross-checking each other for possible mistakes. CRM is now mandatory for commercial airline pilots worldwide. This has arguably saved lives: Captain Al Haynes credited CRM training for the crew's handling of the crippled United Airlines Flight 232 in 1989 - without it, he said, the coordination in the cockpit would have fallen short and the accident would have claimed many more casualties. CRM has proved so effective in reducing human error that the model has been adopted in other fields, such as surgery and emergency medicine. Continually training and honing these non-technical skills is a form of continuously improving aviation's safety culture.
- Mandatory checklists for pilots and technicians - Aviation has a culture where no step is too trivial to verify. Checklists are a simple but crucial measure for consistent and error-free operation. This principle was introduced after the Boeing Model 299 prototype (precursor of the B-17 bomber) crashed in 1935 because the crew forgot a simple action (releasing the gust locks on the flight controls) - the aircraft was so complex that “pilots could no longer do all the steps by heart.” In response, checklists were designed for take-off, flight and landing, which from then on always had to be followed. Since then, checklists have been indispensable. Every pilot - from private pilot to airline captain - works with standardised lists for each phase of flight. This drastically reduces the chance of forgotten actions and ensures that even under stress (think of an emergency) nothing essential is skipped. Using checklists increases consistency and catches human memory errors, directly contributing to safety. Conversely, studies of disasters show that when crews make crucial mistakes, the checklist has very often been ignored. Maintenance engineers also work with strict checklist protocols, to make sure that, for example, after a maintenance service all bolts are retightened to the correct torque and no steps are missed. Constantly evaluating and improving these checklists (e.g. in response to incident investigations) is itself a PDCA-like practice: if something does go wrong somewhere, the relevant checklist is revised to prevent recurrence.
- Preventive maintenance and component replacement (metal fatigue) - Technical improvement cycles are equally important. Every aircraft component has a defined lifespan and is often replaced before it fails, to minimise risks. This practice stems from, among other things, lessons around metal fatigue. In the 1950s, the first jet airliner (the De Havilland Comet) suffered fatal crashes due to metal fatigue in the fuselage. Since then, a thorough understanding of material fatigue has been part of the design, inspection and maintenance process. Aircraft undergo periodic detailed inspections (e.g. non-destructive testing for cracks) and have strict overhaul schedules. Manufacturers and regulators set operational limits: after a certain number of flight hours or take-off-and-landing cycles, a component must be replaced or an aircraft taken out of service. This proactively prevents accidents due to wear and tear. It is a continuous improvement mechanism, as the limits are constantly adjusted based on experience and research. Every time a hairline crack or defect is discovered after all, it leads to adjustments: shorter inspection intervals, improved materials, design changes or new maintenance instructions. Consider the Aloha Airlines incident in 1988, where metal fatigue caused part of the fuselage to tear off; this precipitated strict inspection regimes for older aircraft. This system of precaution and learning has greatly reduced the risk from material fatigue. Maintenance staff are continuously trained and procedures updated - a practical example of PDCA: plan (establish the maintenance schedule), do (perform maintenance), check (analyse inspection results, investigate incidents) and act (adjust schedules or design to prevent future problems).
The common thread in these aviation examples is continuous improvement: every mistake or near-failure is reason to adjust procedures, training or technology. The result is an impressive safety record, achieved by constantly going through the feedback loop and creating a culture in which everyone, from mechanic to pilot, embraces improvements. The PDCA cycle is part of a mature safety culture.
Pharmaceutical industry: continuous improvement for quality and safety
Continuous improvement is also a core concept in the pharmaceutical industry, albeit under different names. This sector is strictly regulated (think GMP - Good Manufacturing Practice), precisely to guarantee the quality and safety of medicines. That requires a continuous cycle of measurement and improvement. For instance, pharmaceutical companies have systems for CAPA (Corrective and Preventive Action): for every deviation in the production process or every complaint about a product, they first intervene to correct the issue and then look at how to prevent recurrence - an approach very similar to PDCA (one plans improvement actions, implements them, checks their effectiveness and embeds them).
A key mechanism is the comprehensive quality management system, in which feedback is collected at all levels. During production, for example, batches of drugs are continuously tested (in-line or via samples in the lab). If a quality measurement falls outside specification, a whole procedure of analysis and improvement starts. This can lead to adjustments in the process, extra training for employees or even design changes to the product. Research shows that pharma and healthcare need even more emphasis on continuous improvement to cope with complex problems - reinforcing regular quality assurance with improvement techniques, so that employees structurally address process problems rather than blame individuals. In other words, a culture in which every deviation is an opportunity for improvement is in the DNA of leading pharma companies.
Concrete examples show how effective this can be. Pharmaceutical companies apply Lean and Six Sigma to continuously optimise their processes. Pfizer, one of the world’s largest pharmaceutical companies, launched an improvement programme in the mid-2000s to drastically reduce the lead time of the production and distribution of the cholesterol-lowering drug Lipitor. Through value stream analysis and the elimination of bottlenecks, they managed to reduce the overall delivery time by 75% over two years. Patients received their drugs faster and Pfizer was able to operate more efficiently - a win-win through continuous improvement. Another example is the broad introduction of process-analytical technologies and real-time monitoring in pharma production, so that adjustments can be made immediately if parameters deviate. In the past, quality issues were only discovered at final inspection; now deviations are detected during the process and acted on immediately.
In the pharmaceutical industry, as in aviation, safety is the driving force. Every recall of a medicine or every discovered production error triggers an extensive PDCA cycle: from investigation (Plan what to improve), through implementation of corrective measures (Do) and verification via audits and additional tests (Check), to adjustment of standard processes and training (Act). In addition, pharmaceutical companies hold periodic management reviews of their quality systems, where trends in deviations and complaints are analysed and improvement plans are made. Thanks to this approach, medicines today are of very consistent quality and serious production incidents are relatively rare - and when they do occur, the whole industry learns from them (via shared guidelines, pharmacopoeia updates, etc.). Continuous improvement here is literally vital for patient safety. The side effect: better-functioning systems and processes, combined with greater efficiency, leave you better off instead of merely carrying a cost.
Parallels and lessons for information security
The examples from aviation and pharmaceuticals show that a structural improvement culture offers huge safety and effectiveness benefits. And although information security is a different field, the underlying principles are very similar. Some key lessons and parallels:
- Culture and human behaviour: Both CRM in the cockpit and the quality culture in pharma are about people feeling free to raise and solve problems. It is the same in information security: an open reporting culture for security incidents or near-incidents (e.g. an employee who almost clicked on a phishing link, but reports it) creates learning opportunities. If employees hide mistakes for fear of sanctions, that valuable Check information is lost. Encouraging cooperation between management, IT security staff and end users (just as CRM encourages teamwork between captain and co-pilot) ensures that signals are picked up in time and improvements become known to all. Training also plays a role: regular security awareness sessions for staff and drills for IT teams (such as emergency plan exercises) are comparable to the mandatory annual CRM training for pilots. They maintain security awareness and continuously hone skills. This goes beyond an annual awareness moment.
- Procedures and checklists: Information security can benefit directly from the “checklist thinking” of aviation. Complex IT processes - think of rolling out new servers, or responding to a cyber incident - lend themselves well to standardised checklists that help avoid overlooking anything. For example, a patch management checklist (Plan: which patches, Do: deploy in test, Check: monitor systems, Act: deploy in production or roll back) or an incident response playbook with step-by-step actions; a minimal sketch of such a checklist follows after this list. As with pilots, checklists reduce the likelihood of mistakes in stressful situations. More importantly, these procedures must live: after each “flight” (read: after each significant IT incident or change), one should evaluate whether the procedure is still sufficient or needs adjustment. Such feedback is similar to updating a checklist or manual after a near miss. A well-known example of this in IT is holding a post-mortem after every large-scale failure, looking at what went wrong and how to prevent it in the future - a practice that embodies exactly the Check/Act phase of PDCA.
- Technology and preventive maintenance: The principle of replacing components in time to avoid accidents has its analogue in information security. Here, it is about phasing out outdated systems or components with known vulnerabilities in a timely manner. Just as an aircraft gets a thorough overhaul after X flight hours, a company must, for example, take legacy software that is no longer being updated out of production before it causes an incident. Patching is effectively replacing or repairing “worn” pieces of software (security holes) before they are exploited. Capacity management and stress testing can also be seen as preventive: they prevent the IT equivalent of “metal fatigue” - systems crashing under load or log files filling up. By measuring continuously (e.g. vulnerability scans, penetration tests, performance monitoring), you discover early where wear and tear is occurring and can plan action; the second sketch after this list illustrates this. The safety margin that aviation builds in (replace a component well before its break limit) is a good example for IT: don’t wait for a data breach to occur, but improve based on small signals (e.g. an increase in near-phishing incidents is a signal to do extra training before someone actually falls for one).
- Regulations and standards as catalysts: In both aviation and pharma, regulations play a big role in enforcing continuous improvement - think of mandatory incident reporting to regulators or having to comply with increasingly stringent standards. In IT security, we see a similar trend with laws and regulations (GDPR/AVG, NIS2, ISO standards, etc.). The lesson, however, is that compliance in itself is not enough - it has to be substantively supported. Just as an airline does not introduce checklists merely to satisfy the aviation authority, but mainly to keep its own flights safe, a company should not just want to certify itself to ISO 27001, but mainly use the PDCA cycle to become truly safer. One need not preclude the other: good regulation can drive continuous improvement, provided organisations embrace its spirit and not just its letter.
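To illustrate the “checklist thinking” from the procedures bullet above, here is a minimal sketch of a patch-management checklist as structured data. The steps, fields and success criteria are assumptions made up for this example, not an official template:

```python
# Illustrative patch-management checklist following the PDCA phases.
# Steps and success criteria are invented for this sketch.

PATCH_CHECKLIST = [
    ("Plan",  "Select patches and define success criteria (CVE closed, no regressions)"),
    ("Do",    "Deploy the patches in the test environment"),
    ("Check", "Monitor the test systems and verify the success criteria"),
    ("Act",   "Branch on the Check result (see run_checklist below)"),
]

def run_checklist(checklist, checks_passed):
    """Walk through the checklist; the Act phase branches on the Check result."""
    for phase, step in checklist:
        if phase == "Act":
            step = ("Deploy to production" if checks_passed
                    else "Roll back, analyse the findings and feed them into the next Plan")
        print(f"[{phase}] {step}")

run_checklist(PATCH_CHECKLIST, checks_passed=False)
```

The essential part is not the code but the habit around it: after every run, the checklist itself is a candidate for revision, just as aviation revises its checklists after incident investigations.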
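And to illustrate the preventive-maintenance parallel, a minimal sketch of a threshold-based early-warning check: plan action as soon as a small signal crosses its margin, well before an actual breach. The metric names and thresholds are invented for the example:

```python
# Illustrative early-warning check: act on small signals before they become incidents.
# Metric names and thresholds are invented for this sketch.

SIGNALS = {
    "open_critical_vulns": 3,        # e.g. from a vulnerability scan
    "days_since_last_pentest": 400,  # e.g. from the security calendar
    "phishing_near_misses": 12,      # e.g. reported by employees this quarter
}

THRESHOLDS = {
    "open_critical_vulns": 0,
    "days_since_last_pentest": 365,
    "phishing_near_misses": 10,
}

# Like replacing an aircraft part well before its break limit: every exceeded
# margin becomes an input for the next Plan phase, not a post-incident finding.
for metric, value in SIGNALS.items():
    if value > THRESHOLDS[metric]:
        print(f"Plan improvement action: {metric} = {value} (limit: {THRESHOLDS[metric]})")
```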
Finally, the comparison shows that safety and security are dynamic goals. Whether it is minimising plane crashes, preventing production errors in medicines, or fending off cyber attacks - to stand still is to go backwards. Continuous improvement through a cycle like PDCA means using every event, good or bad, to become smarter and stronger. In information security, this translates to fewer incidents, faster detection, smaller impact and an organisation that is agile in responding to new threats. The PDCA cycle helps create a learning organisation: one that learns lessons from every incident and adjusts its security measures accordingly. Such an organisation is not only demonstrably compliant on paper, but above all resilient in practice - and that is ultimately what it is all about.