Key takeaways

  • The rapid adoption of AI is profoundly transforming business processes and corporate decision-making;
  • This new reality calls for a revision of traditional security approaches, favoring strategies that fully integrate the human dimension;
  • Our expert deciphers the new cybersecurity challenges posed by AI and highlights actionable levers to put people back at the heart of defense systems.

The rapid adoption of artificial intelligence (AI) is profoundly reshaping business processes and corporate decision-making. While this evolution offers significant performance levers, it also introduces new risks, often insufficiently addressed by traditional cybersecurity frameworks.

In particular, organizations must now contend with threats that no longer target only information systems but directly exploit employees’ cognitive mechanisms. This new reality requires a shift away from purely technical security approaches towards strategies that place a stronger emphasis on human factors.

Generative AI systems can now produce highly realistic content in seconds—synthetic voices, manipulated videos, tailored recommendations, and more. This capability enables sophisticated malicious uses that bypass technical defenses by directly undermining human vigilance.

Three types of scenarios are beginning to emerge within organizations:

  • Deepfake impersonation: an employee receives an urgent video-call request, apparently from a senior executive, to authorize a transaction. The face, voice, and gestures appear authentic, but the request comes from an AI-generated clone;
  • Targeted disinformation: seemingly reliable documents (market analyses, industry reports, expert opinions) mislead decision-makers into biased strategic choices based on fabricated synthetic information;
  • Regulatory falsification: an AI assistant used by a compliance department provides convincingly argued but entirely fabricated regulatory recommendations, leading to flawed decision-making.

These examples illustrate a phenomenon increasingly under study: the cognitive security gap, the risk arising from the manipulation of human perception, judgment, and trust.

Our expert breaks down the new cybersecurity challenges introduced by AI and outlines the levers for reinstating the human element at the core of defense systems.


A governance challenge: balancing human and technical measures

Until now, cybersecurity has relied mainly on technical safeguards (firewalls, antivirus, access monitoring) and general awareness training. But in the face of AI-related cognitive risks, these tools alone are no longer sufficient.

Reference frameworks are evolving accordingly. The AI Risk Management Framework, developed by the National Institute of Standards and Technology (NIST), helps organizations evaluate, manage, and reduce risks associated with AI use in a trustworthy, transparent, and human-centered way.
 

For example, NIST emphasizes the importance of human oversight throughout the lifecycle of AI systems. It encourages aligning technical controls with ethical governance and stronger user accountability.


Many organizations are now questioning how best to incorporate these considerations into their cybersecurity strategies. Five priority levers have been identified to achieve this: 
 

  1. Strengthen cognitive awareness
    Go beyond standard training by adding modules specifically focused on AI-related risks: detecting synthetic content, recognizing subtle manipulation signals, and identifying AI-generated “authority hallucinations.” This skill-building equips employees with new reflexes to counter more nuanced threats.
     
  2. Establish verification protocols
    When faced with AI-generated content or recommendations, it is essential to introduce checkpoints: dual human validation, decision traceability, and escalation procedures in case of doubt. These safeguards should be proportionate to risk levels and tailored to each business context; a minimal sketch of such a protocol follows this list.
     
  3. Regulate AI use in internal processes
    Internal policies must clearly define the conditions for using AI tools (particularly generative ones), the types of information they may process, and associated responsibilities. This helps secure practices without stifling innovation (a hypothetical policy-as-code sketch also follows the list).
     
  4. Map cognitive risks within enterprise risk assessments
    Touchpoints between AI and critical processes (finance, procurement, HR, compliance, etc.) require specific evaluation. The goal is to identify sensitive decisions potentially influenced by AI and design appropriate mitigation measures, as in the scoring sketch after the list.
     
  5. Prepare teams for AI incident response
    Incident response units must be equipped to identify, assess, and manage novel attacks: deepfakes, AI-enabled data leaks, internal disinformation campaigns, and more. Regular exercises (such as tabletop drills) strengthen their responsiveness and coordination. 
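
To make lever 2 more tangible, the sketch below shows one possible shape for such a verification protocol: a decision record that demands two independent human validations for high-risk, AI-influenced requests, keeps an audit trail, and escalates on any disagreement. The threshold, roles, and naming are illustrative assumptions, not a prescribed implementation.

```python
# Minimal sketch of a verification protocol for AI-influenced decisions.
# Threshold, roles, and naming are illustrative assumptions only.
from dataclasses import dataclass, field
from datetime import datetime, timezone

HIGH_RISK_THRESHOLD = 3  # assumed scale: 1 (low) to 5 (critical)

@dataclass
class DecisionRecord:
    """Traceability record: who validated what, and with which outcome."""
    description: str
    ai_generated: bool      # was the request produced or shaped by AI?
    risk_level: int         # assessed business impact on a 1-5 scale
    approvals: list = field(default_factory=list)
    audit_log: list = field(default_factory=list)

    def _trace(self, event: str) -> None:
        self.audit_log.append(f"{datetime.now(timezone.utc).isoformat()} {event}")

    def validate(self, validator: str, approved: bool) -> None:
        self.approvals.append(approved)
        self._trace(f"{validator}: {'APPROVE' if approved else 'REJECT'}")

    def outcome(self) -> str:
        # Dual human validation only for high-risk, AI-influenced decisions;
        # routine requests keep a single checkpoint.
        required = 2 if self.ai_generated and self.risk_level >= HIGH_RISK_THRESHOLD else 1
        if len(self.approvals) < required:
            return "PENDING"
        if all(self.approvals[:required]):
            return "APPROVED"
        # Any doubt escalates instead of failing silently.
        self._trace("escalated to security/compliance review")
        return "ESCALATED"

# Usage: the deepfake scenario above, treated as a critical AI-influenced request.
record = DecisionRecord("Urgent wire transfer requested via video call",
                        ai_generated=True, risk_level=5)
record.validate("treasury_officer", True)
record.validate("line_manager", False)   # second validator is in doubt
print(record.outcome())                  # -> ESCALATED
print(*record.audit_log, sep="\n")
```

The design choice worth noting is the default: disagreement escalates rather than resolving silently, which preserves the human checkpoint the lever calls for.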

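One lightweight way to apply lever 3 is to express the internal policy as a machine-readable rule set that gateways or plugins could enforce. The sketch below is purely hypothetical: the tool names, data classes, and review rules are placeholders for an organization's actual policy.

```python
# Hypothetical AI usage policy expressed as data, so tooling can enforce it.
# All tool names, data classes, and rules below are placeholders.
AI_USAGE_POLICY = {
    "approved_tools": ["internal-llm", "vendor-copilot"],
    "data_rules": {
        "public":       {"allowed": True,  "review": None},
        "internal":     {"allowed": True,  "review": "line manager"},
        "confidential": {"allowed": False, "review": "CISO exception only"},
    },
    "accountability": "The prompt author owns the resulting decision.",
}

def may_process(tool: str, data_class: str) -> tuple[bool, str]:
    """Check a proposed AI use against the policy; default to deny."""
    if tool not in AI_USAGE_POLICY["approved_tools"]:
        return False, f"{tool} is not an approved AI tool"
    rule = AI_USAGE_POLICY["data_rules"].get(data_class)
    if rule is None or not rule["allowed"]:
        return False, f"{data_class} data may not be sent to AI tools"
    return True, f"allowed (review: {rule['review'] or 'none required'})"

print(may_process("internal-llm", "internal"))   # allowed, with manager review
print(may_process("shadow-chatbot", "public"))   # denied: tool not approved
```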
     
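Lever 4 can likewise start from a simple inventory. The snippet below sketches one hypothetical way to rank AI touchpoints in critical processes by combining decision criticality with the degree of AI influence; the processes, scales, and thresholds shown are placeholder assumptions, not a standard taxonomy.

```python
# Hypothetical cognitive-risk mapping: score each AI touchpoint by how
# critical the decision is and how strongly AI output influences it.
# Example data, scales, and the 3.5 threshold are illustrative only.

# (process, decision, criticality 1-5, ai_influence 0.0-1.0)
touchpoints = [
    ("finance",     "payment approval",          5, 0.8),
    ("procurement", "supplier shortlisting",     3, 0.6),
    ("compliance",  "regulatory interpretation", 5, 0.9),
    ("hr",          "candidate screening",       4, 0.5),
]

def cognitive_risk(criticality: int, ai_influence: float) -> float:
    """Simple multiplicative score: risk grows with both factors."""
    return criticality * ai_influence

# Rank touchpoints so mitigation effort targets the riskiest decisions first.
ranked = sorted(touchpoints,
                key=lambda t: cognitive_risk(t[2], t[3]), reverse=True)

for process, decision, crit, infl in ranked:
    score = cognitive_risk(crit, infl)
    flag = "mitigate now" if score >= 3.5 else "monitor"
    print(f"{process:<12} {decision:<28} score={score:.1f}  [{flag}]")
```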

A training imperative for cybersecurity teams

The evolving threat landscape demands continuous upskilling of cybersecurity teams. Beyond technological monitoring, training must include cognitive manipulation techniques, behavioral analysis, and the new forms of social engineering enabled by AI.

This upskilling involves:

  • Specialized training on AI-enabled attacks (deepfakes, biased contexts, deceptive content generation);
  • Immersive exercises replicating realistic scenarios;
  • Interdisciplinary monitoring that integrates insights from cognitive science, ethics, and law.

     

An enterprise-wide cultural shift

Finally, fostering awareness of these new risks must extend across the entire organization—including HR functions. In sensitive environments (finance, legal, executive management), recruitment processes may even incorporate assessments of candidates’ discernment in the face of AI-generated content.

At the same time, clearly communicating cybersecurity expectations from the onboarding stage helps establish a shared framework. This approach supports the development of a proactive, responsible culture around AI usage.

 

The digital transformation accelerated by AI requires organizations to rethink their cybersecurity priorities. The most resilient enterprises will be those that successfully combine technological innovation with strengthened human capabilities—treating cognitive security as a strategic pillar in its own right.
 

By adopting best practices, implementing targeted training, and adapting internal processes, organizations can build an integrated cybersecurity approach where people are no longer passive risk factors but active defenders against tomorrow’s threats.