In a world where technology increasingly determines our decisions, the call for critical reflection is greater than ever. Blind trust in technology leads to mistakes with far-reaching consequences, as I argue in my book The Validation Crisis. ICT managers hold the key to using technology effectively and securely, but they can only succeed if they understand how data, systems and processes affect each other.

A central theme in The Validation Crisis is the importance of data. A good example is an artificial intelligence (AI) experiment in which an AI system was fed data from previous hiring decisions, data that can reinforce biases. In this case, the model was trained on a dataset in which men were hired more often than women. When the AI was deployed in practice, it reproduced this bias without the users’ knowledge. As a result, men were hired faster than women.

The problem? The input data was correct, but the model was built on a foundation of unconscious discrimination. This example shows that even the most sophisticated systems are prone to errors if the context and training phase are not validated. So data analysis and AI are not magic: without understanding how a model was trained and what assumptions were made, the technology is limited to repeating existing patterns, including the errors embedded in them.
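The mechanism can be made concrete with a deliberately naive sketch (my own illustration, not taken from the book): a "model" that simply learns the historical hire rate per group will faithfully reproduce whatever imbalance the training data contains. The records and the frequency-based model below are hypothetical.

```python
# Hypothetical biased training data: (group, was_hired) pairs in which
# men were hired more often than women.
records = [
    ("male", True), ("male", True), ("male", True), ("male", False),
    ("female", True), ("female", False), ("female", False), ("female", False),
]

def train(records):
    # "Training" here is just estimating P(hired | group) from the data.
    rates = {}
    for group in {g for g, _ in records}:
        outcomes = [hired for g, hired in records if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

def predict(model, group, threshold=0.5):
    # The model recommends hiring whenever the historical rate clears the bar.
    return model[group] >= threshold

model = train(records)
print(predict(model, "male"))    # True  -- the historical bias carries over
print(predict(model, "female"))  # False -- nothing in the pipeline flags this
```

The input data is "correct" in the sense that every record is accurate, yet the model's recommendations are discriminatory, which is exactly why validating the training context matters as much as validating the data itself.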

The risk of techno-optimism

Another important issue is the role of techno-optimism within organisations. Many ICT managers see technology as a panacea that solves problems, reduces costs and speeds up processes. While technology can indeed be a powerful catalyst, this mindset often leads to poor evaluations and rushed implementations.

The move to cloud solutions is a striking example. Organisations see scalability, cost benefits and flexibility, but often forget the risks: loss of control over data, vendor lock-in and compliance issues. It has even happened that a cloud provider abruptly shut down its services, leaving companies in serious trouble because they had not prepared backups or alternatives. This type of risk is often underestimated or even ignored in the drive for quick results.

AI and the need for validation

To further illustrate the validation crisis we are in, I conducted an experiment. For The Validation Crisis, an AI model was trained with fictional data about an intergalactic society on the moon inhabited by ‘Selenian cats’. This narrative, constructed from hundreds of fabricated documents (such as scientific studies, court records and news articles), describes a complex lunar culture centred on technology such as telepathic headsets and time compression. The model consistently produced output that seemed logical within the context of this fictional world, but which would be totally nonsensical in the real world.

The purpose of the experiment is simple but powerful: to demonstrate how generative AI works and the risks that arise when models are fed fabricated or biased data. The experiment shows that AI models cannot think critically on their own and are completely dependent on the quality and context of their training data. After all, people who read texts about Selenian cats quickly conclude that they are nonsense, but an AI model cannot. And this applies not only to fictional scenarios, but also to real-world applications, such as in healthcare, finance or law enforcement.

The conclusion is clear: “Garbage in, garbage out.” AI is only as reliable as the data and validation processes underlying it. Without transparency on how models are trained and what their limitations are, AI systems can generate harmful or misleading output. The experiment highlights the need for continuous validation and critical evaluation in the development and implementation of AI.

The AI experiment in The Validation Crisis also offers broader lessons for implementing artificial intelligence in organisations. AI is not an autonomous solution, but a tool that depends on the quality of its input and the control of its users. A lack of validation in AI models can lead to major risks. Without regular audits and transparency, organisations can make decisions based on inaccurate or distorted data, affecting everything from recruitment to risk management.
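What such a "regular audit" might look like can be sketched in a few lines (my own illustration, not a method from the book): compare selection rates between groups and flag the system when the ratio drops below the four-fifths threshold that is often used as a rough first fairness screen. The decision data is hypothetical.

```python
# Minimal audit sketch: flag outcome disparity between two groups.
def selection_rate(decisions, group):
    # Fraction of positive decisions for one group.
    outcomes = [selected for g, selected in decisions if g == group]
    return sum(outcomes) / len(outcomes)

def audit(decisions, group_a, group_b, threshold=0.8):
    # Disparate-impact ratio: lower selection rate divided by the higher one.
    # Below the (commonly used) 0.8 threshold, the system needs review.
    rate_a = selection_rate(decisions, group_a)
    rate_b = selection_rate(decisions, group_b)
    ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
    return ratio, ratio < threshold

decisions = [
    ("male", True), ("male", True), ("male", True), ("male", False),
    ("female", True), ("female", False), ("female", False), ("female", False),
]
ratio, needs_review = audit(decisions, "male", "female")
print(round(ratio, 2), needs_review)  # 0.33 True -- disparity is flagged
```

A check like this is deliberately crude; its point is that even a simple, transparent metric run regularly will surface distortions that would otherwise remain invisible to the system's users.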

The human factor in technology

However, validation does not only have a technical side; there is also a human part. ICT managers must not only be experts in technology, but also leaders who foster a culture of critical thinking and collaboration. Listening, summarising and questioning are essential skills to break assumptions and expose hidden risks. A good example that I also mention in The Validation Crisis is the gap between management and the shop floor in organisations like Boeing. The lack of communication between engineering and ICT, and the poor understanding of operational risks, played a major role in the fatal 737 Max crashes. I therefore advocate that managers close this gap by staying actively involved in day-to-day processes and investing in training and awareness within their teams.

From control to resilience

Validation is not only about avoiding risk, but also about building resilience. This means that organisations design systems and processes to cope with errors and unexpected situations. Think of the Ariane 5 rocket, which exploded on its first launch due to a software error that could easily have been avoided with better validation and redundancy.

Organisations can apply these lessons by designing systems that are not only secure and efficient, but also flexible enough to respond to changes and incidents. This requires a combination of technical expertise and strategic insight, as well as a culture where learning from mistakes is encouraged.
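One common shape such resilience takes can be sketched as follows (an illustration of my own, with hypothetical names): a failing primary path is retried and then degrades to a fallback, instead of taking the whole process down, which is also the kind of preparation the cloud-shutdown example above called for.

```python
# Illustrative sketch: retry a flaky primary source, then fall back.
def resilient_fetch(primary, fallback, retries=2):
    last_error = None
    for _ in range(retries):
        try:
            return primary()
        except ConnectionError as exc:  # retry transient failures
            last_error = exc
    try:
        return fallback()  # e.g. a second provider or a local backup
    except ConnectionError:
        raise last_error   # surface the original failure if all paths fail

calls = {"n": 0}

def flaky_primary():
    calls["n"] += 1
    raise ConnectionError("primary cloud provider unreachable")

def local_backup():
    return "data from backup"

print(resilient_fetch(flaky_primary, local_backup))  # data from backup
```

The design choice worth noting is that the fallback is validated in advance as part of the system, not improvised during an incident.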

The role of culture and leadership

A culture change is thus essential to overcome the validation crisis. Therefore, transparency, diversity and critical thinking should be at the heart of organisations. This means that managers should not be afraid to ask difficult questions, even if the answers are confrontational.

There also needs to be more collaboration between departments. ICT managers should not only be responsible for technology, but also bridge the gap between technology, operations and regulations. This requires a multidisciplinary approach that includes both technical and human factors.

From crisis to control

The Validation Crisis is a wake-up call for ICT managers. Technology is only effective if it is deployed with a keen eye for data, context and risk. To this end, validation is not an extra step, but the core of success in a digital world. For ICT managers, this means:

  1. Understand how technology works: from AI to cloud, make sure you know what is happening “under the bonnet”.
  2. Commit to transparency: build systems and processes that are understandable and auditable.
  3. Invest in culture: encourage critical thinking and collaboration within your team.

The future of technology is in our hands. It is up to ICT managers to shape it with care and responsibility, so that innovation brings not only efficiency, but also security, fairness and control.

The Validation Crisis is available as an e-book. See: validationcrisis.co.uk.

This article (pdf) appeared earlier on AG Connect.