
Resilience in the era of AI: the delicate art of balance
Opinion piece written by Stéphanie Ledoux, published by Maddyness.
Artificial intelligence is now a powerful performance accelerator. At a time when French companies are struggling to compete with their international rivals, we cannot afford to ignore its benefits.
Automation, anticipation, responsiveness: AI is transforming the way organisations produce, make decisions and interact. But this power must be used wisely.
The adoption of AI cannot be a technological reflex or a headlong rush. It must be part of a strategic approach, based on specific use cases, a clear ethical framework and a long-term vision.
Yes, AI undeniably boosts performance, but at the same time it redefines the contours of resilience. It creates exceptional levers of efficiency, while introducing new dependencies that must be anticipated and controlled. In a context where resilience is becoming imperative — driven by a strengthened regulatory framework (NIS2, DORA) and growing stakeholder sensitivity to business continuity — leaders must now think of performance and resilience as two sides of the same strategy.
The advent of AI, like cyber risk before it, requires these new parameters to be incorporated into continuity strategies. It is no longer just a question of protecting systems, but also of ensuring the company’s ability to function even without them.

AI as an accelerator of resilience
AI is already a powerful lever for strengthening organisational resilience. It enables faster detection of weak signals, real-time analysis of masses of data that are impossible for humans alone to process within a reasonable timeframe, optimisation of decision-making in crisis situations, and reduced recovery times through automation. It also provides unprecedented adaptability: simulating unlikely scenarios, preparing the organisation for unprecedented crises, anticipating rather than suffering.
In customer service, conversational AI absorbs the majority of traffic and lets advisors focus on complex cases. In cybersecurity, predictive models identify suspicious behaviour invisible to the human eye. In crisis management, AI tools offer increased visibility to guide decisions quickly.
Yes, AI increases resilience. But it does not guarantee it.
New risks introduced by AI
As with cyber risk yesterday, AI introduces new risks. What happens if AI systems fail, become inoperable as a result of a cyberattack, or suddenly malfunction?
Companies that have entrusted all or part of their operations to AI could find themselves partially or even totally paralysed. And human expertise risks eroding over time through lack of practice.
As for critical processes, they will become dependent on a single tool, even though resilience relies on redundancy and diversification.
The point is not to reject AI outright, but to understand these new risks and tailor your resilience strategy accordingly. This is where robustness comes into play: anticipating the unavailability of AI as a credible scenario and preparing for it in practical terms.
Mapping critical activities entrusted to AI
The first step is mapping. Not only traditional critical processes, but also activities that are gradually being entrusted to AI. Which services already rely heavily on automation? What tipping points make an activity dependent on AI? What would be the consequence of total unavailability?
In customer relations, for example, if 80% of requests go through a chatbot, you need to plan who can take over and with what skills. In cybersecurity, if detection relies entirely on AI, which teams still know how to monitor and respond manually?
This mapping, which is necessarily dynamic, must evolve as new use cases are integrated, because dependence on AI is gradual and sometimes invisible until the day it becomes critical.
Identifying and maintaining critical human skills
Resilience does not stop at identifying processes: it requires preserving the human skills capable of ensuring continuity without AI. This involves:
- identifying key employees who possess this expertise,
- ensuring that these skills are shared and redundant, so that the organisation does not depend on a single person (who is, after all, entitled to take leave or to retire),
- integrating these new needs into forward-looking HR management and targeted training plans, ensuring that these critical skills are maintained over time,
- incorporating “AI-free” scenarios into continuity exercises to ensure that teams still know how to operate in degraded mode, or even manual degraded mode for certain essential processes.
In short, it is a question of keeping organisations operational, not only in terms of systems, but also in terms of the women and men who bring them to life.
Hybrid resilience: the responsibility of leaders
The resilience of tomorrow will certainly not be based on a choice between technology and people. On the contrary, it will require an intelligent combination of the two. AI must be fully integrated to increase the speed and accuracy of responses. And people must remain capable of taking over in extreme scenarios.
This is a responsibility for senior management. A company that bets everything on AI would expose itself to hidden fragility; conversely, a company that neglects it would deprive itself of a decisive lever for competitiveness and robustness.
Resilience in the age of AI is not a technological option: it is a strategy of balance. It involves harnessing the power of algorithms while investing in the operational maintenance of essential human skills.
Because while AI makes our companies more efficient on a daily basis, it is still humans, with their skills and adaptability, who will make them truly resilient in the future.