A CEO records a message announcing that the company is entering administration. A senior executive issues a transfer order during a video call. An employee receives an urgent voice message. Except none of this is real.
Once confined to entertainment, deepfakes are now a serious cybersecurity threat, having gone mainstream thanks to easy-to-use online tools.
How can you tell real from fake? What risks do they pose to businesses? And how can you defend against them?
Deepfakes: Understanding the technology behind the threat

A deepfake is an image, video, or audio file generated by artificial intelligence (AI) to convincingly mimic a real person.
Deepfakes generally fall into two categories:
- Asynchronous deepfakes, where a video or audio recording is manipulated before being shared (such as a fake message from a senior executive).
- Real-time deepfakes, which are created through “face swapping,” a technique that overlays someone’s face and/or voice live onto another speaker. An attacker can use this method to impersonate a company executive during a video meeting, making the fraud highly believable.
One striking example was the 2023 viral image of Pope Francis wearing a white puffer jacket. Despite it being an absurd scenario, many people believed it was real because of the image’s photorealistic quality. This case perfectly illustrates how the realism of deepfakes blurs the line between truth and falsehood, making even the most unlikely situations appear genuine.

At the core of deepfake creation are GANs (Generative Adversarial Networks), a system where two AI models compete: one generates synthetic content, while the other evaluates it against real data. Through repeated confrontations, the quality of the fakes improves, making them increasingly difficult to detect.
With each new iteration, the generator fine-tunes its output until it can deceive the discriminator. And the more the system learns, the more realistic—and harder to spot—the deepfakes become.
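To make the generator-versus-discriminator contest concrete, here is a minimal GAN training loop sketched in PyTorch. It is purely illustrative: the tiny networks and one-dimensional "real" data stand in for the far larger image and audio models behind actual deepfakes.

```python
# Minimal GAN training loop sketch (PyTorch). The architectures and the 1-D
# "real" data distribution are placeholders chosen for brevity, not a real
# deepfake pipeline.
import torch
import torch.nn as nn

latent_dim = 16
generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, 1) * 0.5 + 2.0          # samples from the "real" distribution
    fake = generator(torch.randn(64, latent_dim))  # synthetic samples

    # Discriminator step: learn to label real samples 1 and fakes 0
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + loss_fn(
        discriminator(fake.detach()), torch.zeros(64, 1)
    )
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: learn to make the discriminator label fakes as real
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

With each round, the generator is rewarded only when its output fools the discriminator, which is exactly the feedback loop that drives deepfake quality upward.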
Why deepfakes pose a cybersecurity threat

How deepfakes are used in cyberattacks
Cybercriminals use deepfakes in a variety of ways to achieve different goals. Just a few seconds of a public recording, such as an interview, a speech, or a podcast, are enough to clone someone’s voice and trick a target into believing they’re speaking to a superior or partner.
These attacks exploit cognitive biases linked to trust and urgency, making them especially hard to spot.
Fake transfer orders: Millions stolen in a flash
In 2024, a company lost $25 million after an employee was duped during a deepfake video call: on screen, a fake CFO and fake colleagues convinced him to authorize fraudulent transfers to Hong Kong.
Another striking case occurred in the same year, when criminals used voice-cloning technology to imitate a company executive and persuade a bank manager to transfer funds. In total, $35 million was stolen before the scam was detected.
Industrial espionage: Next-level social engineering
Cybercriminals use deepfakes to infiltrate organizations without arousing suspicion. In 2023, for instance, hackers impersonated the head of R&D at a U.S. technology company using a deepfake video, holding fake online meetings with engineers and gathering confidential information about prototypes the firm was developing.
Disinformation: Immediate reputational and financial fallout
At the end of 2023, a deepfake video showing a CEO making alarming statements about the state of the economy caused the company’s stock value to plummet. The scam was uncovered just a few hours later, but the financial impact amounted to millions of dollars. This kind of digital fraud poses a direct threat to the stability of businesses and financial markets.
Bypassing security systems: Leaving defenses vulnerable
Some deepfakes can fool biometric authentication systems: ultra-realistic imitations have duped the facial and voice recognition technologies used in advanced security protocols, putting sensitive data at risk. In 2023, a British journalist demonstrated how deepfakes could defeat voice recognition-based security, using just a short sample of his voice to produce an imitation that tricked his bank’s security system into allowing access to his account.
Bank fraud: Fake Italian minister scams business community
In 2024, scammers used AI-powered voice cloning technology to imitate Italian Defense Minister Guido Crosetto. Posing as the minister, they demanded several million euros in funds to secure the release of journalists held abroad. Their targets included Armani, Moratti, the Beretta family, and Menarini. One of the victims reportedly transferred nearly $1 million before the scam was exposed.
Who do deepfakes target and why?
Some individuals, teams, and sectors are naturally more vulnerable to deepfake scams because they have an extensive digital footprint or engage in sensitive activities. However, as is often the case in cybersecurity, no business can ever consider itself completely safe.
Executives and senior management
CEOs, CFOs, and CIOs are prime targets. Their voices and images can easily be found in interviews and speeches, allowing scammers to create highly convincing deepfakes that carry the apparent authority to authorize transfers, announce false strategic decisions, or request sensitive information.
Finance and accounting teams
Departments that handle payments are frequently targeted. Using the cloned voice of a senior executive, hackers can urgently request a change of bank details or approve an exceptional transfer of funds, bypassing standard procedures.
Banks and insurance companies
In industries where trust is paramount, scammers exploit urgency bias to gain access to funds. For instance, a criminal could get hold of sensitive data by posing as an auditor from a regulatory body, or leverage stolen customer information to trick policyholders into transferring money to fraudulent accounts.
Manufacturers and industrial firms
Aside from financial fraud, companies in this sector are particularly exposed to industrial espionage. By impersonating a colleague or expert, an attacker can infiltrate project meetings and gain access to highly sensitive data such as prototypes and trade secrets. This stolen information can then be resold, used for blackmail, or exploited by competitors.
Retail and luxury goods sector
In the retail and luxury goods sector, deepfakes are used both for scams and to lend an air of legitimacy to counterfeit goods. Videos featuring cloned influencers can make fake product launches on marketplaces appear credible, while doctored ads can direct customers to mirror sites selling counterfeit goods. The impact is twofold, with immediate financial losses and long-term erosion of brand value.
How to spot a deepfake?

What are the warning signs of a deepfake?
Although the underlying technology is becoming more advanced, deepfakes are by no means undetectable. There are a number of telltale visual and audio clues to look out for. These are detailed below.
Clue 1: Unusual or inconsistent facial expressions
- Irregular blinking: An MIT study found that many deepfakes feature irregular blinking, or even a complete absence of blinking. This is because AI models are often trained on static images where subjects’ eyes are open. While newer models improve on this, some still struggle to reproduce natural blinking (a simple way to quantify this is sketched after this list).
- Facial asymmetry: Small imperfections such as a crooked smile, a slightly misaligned gaze, or a nose that appears distorted are often signs of AI-generated content.
- Stiff or artificial facial expressions: Micro-expressions are critical to human communication, but AI still struggles to replicate them accurately. A face that appears too stiff or that changes expression abruptly can be a sign of a deepfake.
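As an illustration of the blinking clue above, here is a minimal sketch of the eye-aspect-ratio (EAR) heuristic sometimes used to quantify blink patterns. It assumes the eye landmarks come from a separate face-landmark detector, and the threshold and blink-rate figures are rough rules of thumb rather than a definitive test.

```python
# Eye-aspect-ratio (EAR) blink-counting sketch. Landmark extraction is assumed
# to happen elsewhere; `eye` is six (x, y) points around one eye, ordered p1..p6.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: array of shape (6, 2); low EAR means the eye is closed."""
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_per_frame, closed_threshold=0.2) -> int:
    """Count blinks as transitions into the 'eye closed' range."""
    blinks, was_closed = 0, False
    for ear in ear_per_frame:
        closed = ear < closed_threshold
        if closed and not was_closed:
            blinks += 1
        was_closed = closed
    return blinks

# A real speaker typically blinks roughly 15-20 times per minute; a clip whose
# blink count falls far outside that range may warrant closer inspection.
```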
Clue 2: Mismatch between audio and lip movements
- Poor lip-sync alignment: A 2022 IEEE study revealed a recurring flaw in deepfakes: lip movements don’t always match the sounds being produced, especially when it comes to complex words. This misalignment can sometimes be brushed off as a network quality or latency issue, but it’s often a giveaway.
- Unnatural-sounding intonation: AI-generated voices can sound flat or emotionally off. According to DeepMind researchers, some deepfakes lack the natural variations in tone that characterize human speech, giving them a slightly robotic quality.
But as AI models evolve, and as impersonators practice mimicking their targets, these imperfections are becoming harder to detect.
Clue 3: Inconsistent lighting and shadows
- Abnormal reflections and shadows: The Fraunhofer-Gesellschaft research institute has identified a common giveaway in deepfakes: abnormal reflections in the eyes or glasses, or shadows that fall in the wrong direction. Inconsistencies like these often point to AI-enabled manipulation.
- Blurring or flickering: According to Microsoft Research, deepfakes can feature dynamic blurring or flickering effects, especially during fast movements, creating a sense of visual instability.
Clue 4: Unnatural textures and details
- Skin that looks too smooth or overly detailed: An artificially flawless complexion, with no pores and no imperfections, can be a clear sign of a deepfake.
- Poorly rendered hair and ears: AI still struggles to render hair naturally. Common issues include inconsistent strands, blurred hairlines, and ears that appear distorted or misaligned with the rest of the face.
Clue 5: Digital artifacts and image instability
- Movements around the edges of the face: When a deepfake isn’t rendered properly, distortions often appear around the edges of the face, especially when the person makes quick movements.
- Shaking or morphing: If the image is unstable, or there are subtle changes in facial proportions, this can be a sign of a deepfake.
Tools for detecting deepfakes
As deepfakes become more sophisticated, various technologies have emerged for identifying manipulated content, drawing on AI, metadata analysis, and advanced content certification.
AI detection tools
AI-powered tools examine videos and images frame by frame, spotting anomalies that are invisible to the naked eye, such as inconsistencies in pixels, reflections, and shadows. Other tools are trained on large databases of synthetic media, improving their ability to recognize manipulations produced by advanced machine-learning models.
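As a rough illustration of the frame-by-frame approach, the sketch below walks through a video with OpenCV and flags suspicious frames. The `score_frame` function is a placeholder for whatever detector an organization actually deploys (a trained model or a vendor API); it is not a real library call.

```python
# Frame-by-frame screening sketch using OpenCV.
import cv2

def score_frame(frame) -> float:
    """Placeholder: return a manipulation-likelihood score in [0, 1].
    Replace with a trained detector or a commercial detection API."""
    return 0.0

def scan_video(path: str, flag_threshold: float = 0.8) -> list[int]:
    capture = cv2.VideoCapture(path)
    suspicious_frames = []
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break  # end of video
        if score_frame(frame) >= flag_threshold:
            suspicious_frames.append(index)
        index += 1
    capture.release()
    return suspicious_frames
```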
Blockchain and content certification
Another approach involves certifying content at the point of creation. These methods guarantee authenticity by embedding an unalterable digital fingerprint in images and videos. Metadata are also incorporated into multimedia files for traceability purposes.
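The sketch below illustrates the fingerprinting idea in its simplest form: hash a file when it is created, record the digest, and check it later. Real certification schemes add cryptographic signatures and richer provenance metadata, but the underlying principle of an unalterable fingerprint is the same.

```python
# Simplified content-fingerprinting sketch: any later modification to the file
# changes its SHA-256 digest, so a mismatch flags tampering.
import hashlib
import json
import time

def fingerprint(path: str) -> str:
    """Return the SHA-256 digest of a media file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def register(path: str, ledger: str = "ledger.jsonl") -> dict:
    """Record the fingerprint and creation time in an append-only ledger file."""
    record = {"file": path, "sha256": fingerprint(path), "created": time.time()}
    with open(ledger, "a") as out:
        out.write(json.dumps(record) + "\n")
    return record

def verify(path: str, expected_sha256: str) -> bool:
    return fingerprint(path) == expected_sha256
```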
Biometric and behavioral analysis
Monitoring facial movements and expressions is another effective way to flag deepfakes. AI models trained on human behavior analyze facial movements to detect any distortions that could indicate manipulation. Some deep-learning systems can even identify artifacts that are invisible to the human eye.
Audio detection and hybrid approaches
Deepfakes aren’t limited to visual content. Synthetic voices also pose a serious risk. Specialized algorithms analyze sound frequencies, prosody, and intonation patterns to detect telltale signs that a piece of content has been manipulated. When combined with facial and gesture recognition, these methods provide a more complete picture of potential fraud.
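By way of illustration, the sketch below extracts the kind of acoustic features such systems inspect, using the librosa library; the classifier that would turn these features into a "synthetic or genuine" verdict is deliberately left out.

```python
# Acoustic feature extraction sketch for audio-deepfake screening.
import librosa
import numpy as np

def extract_voice_features(path: str) -> dict:
    audio, sample_rate = librosa.load(path, sr=None)

    # Timbre: mel-frequency cepstral coefficients, a standard voice descriptor
    mfcc = librosa.feature.mfcc(y=audio, sr=sample_rate, n_mfcc=13)

    # Prosody: fundamental frequency track; a flat, low-variance pitch can hint
    # at the "robotic" intonation mentioned earlier
    f0, voiced_flag, _ = librosa.pyin(
        audio,
        fmin=librosa.note_to_hz("C2"),
        fmax=librosa.note_to_hz("C7"),
        sr=sample_rate,
    )

    return {
        "mfcc_mean": mfcc.mean(axis=1),
        "mfcc_std": mfcc.std(axis=1),
        "pitch_std": float(np.nanstd(f0)),
        "voiced_ratio": float(np.mean(voiced_flag)),
    }
```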
While these tools represent significant progress, they are not foolproof, which is why awareness remains a key factor in preventing users from falling victim to deepfakes.
Protecting against deepfakes: Strategies and best practices

Training your people to stay alert
Awareness is the first line of defense against deepfakes. Companies need to train their people to spot subtle warning signs and adopt sound verification practices. Implementing double-checking protocols, such as confirming a sensitive request through a separate communication channel, can significantly reduce risk. Measures like these should be embedded in internal, organization-wide procedures and automated wherever possible to ensure they are followed consistently.
Simulations and drills can help employees recognize attempted fraud and build a habit of systematic verification.
It’s equally important to have employees double-check both audio and video content as a matter of routine. Encouraging staff to question unusual requests and confirm them via a separate channel, such as a direct phone call or an internal email, can help prevent fraud.
Building hands-on exercises, tests, and simulations into your organization’s cybersecurity policies ensures your people are ready if a deepfake attack happens.
Securing communication channels and decision-making processes
Individual staff alertness is important. But businesses also need to safeguard their communication and decision-making processes. Implementing robust validation protocols helps prevent impersonation attempts that could compromise financial transactions or strategic decisions.
Multi-factor authentication (MFA) should be implemented as standard for all sensitive communications: when it comes to verifying someone’s identity, a simple email or phone call is no longer enough. Advanced solutions like biometric authentication and digital certificates offer an extra layer of security.
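For illustration, the sketch below implements one common MFA factor, time-based one-time passwords (TOTP, RFC 6238), using only the Python standard library. In practice organizations should rely on established MFA products rather than hand-rolled code; the point is simply to show what a second, time-bound factor adds on top of an email or a call.

```python
# Minimal TOTP (RFC 6238) sketch using only the standard library.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_base32: str, period: int = 30, digits: int = 6) -> str:
    """Derive the current one-time code from a shared base32 secret."""
    key = base64.b32decode(secret_base32, casefold=True)
    counter = int(time.time()) // period           # 30-second time window
    message = struct.pack(">Q", counter)
    mac = hmac.new(key, message, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                        # dynamic truncation
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def verify_totp(secret_base32: str, submitted: str) -> bool:
    """Accept the code for the current window only (no clock-drift tolerance)."""
    return hmac.compare_digest(totp(secret_base32), submitted)
```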
Internal processes should also include additional checkpoints. For example, requests for fund transfers or changes to sensitive data should be confirmed by multiple sources through independent channels.
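A hypothetical sketch of such a rule is shown below: a sensitive request only goes through once confirmations have arrived from at least two different people over two different channels. All names and the commented-out `execute_transfer` step are placeholders, not part of any real system.

```python
# Dual-approval sketch for sensitive requests (e.g. fund transfers).
from dataclasses import dataclass, field

@dataclass
class SensitiveRequest:
    description: str
    required_approvals: int = 2
    approvals: list = field(default_factory=list)  # (approver, channel) pairs

    def approve(self, approver: str, channel: str) -> None:
        self.approvals.append((approver, channel))

    def is_authorized(self) -> bool:
        approvers = {a for a, _ in self.approvals}
        channels = {c for _, c in self.approvals}
        # Independent confirmations: different people *and* different channels
        return len(approvers) >= self.required_approvals and len(channels) >= 2

request = SensitiveRequest("Change supplier bank details")
request.approve("cfo", "phone_callback")
request.approve("treasury_lead", "in_person")
if request.is_authorized():
    pass  # execute_transfer(request)  # placeholder for the actual payment step
```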
Secure communication measures should also extend to outside partners, who need to be trained in best practices to reduce exploitable vulnerabilities.
Anticipate and adapt to evolving deepfake threats
Deepfakes are constantly improving, making detection increasingly difficult. Businesses need to be proactive, regularly updating their cybersecurity strategies to keep risks to a minimum.
Robust technology watch processes are essential, because organizations that keep abreast of the latest developments in deepfake creation and detection are best placed to adjust their protective systems. Collaborating with cybersecurity experts, researchers, and specialized organizations can help businesses anticipate new attack techniques.
But technology isn’t the only safeguard. Staff also need to be trained to recognize warning signs, including through regular exercises that sharpen their instincts and prevent them from responding impulsively to unusual requests. Companies that develop a strong cyber culture ensure that best practices are embedded in their day-to-day operations.