The Deepfake Dilemma

How AI-generated video and voice are redefining cyber threats in manufacturing.

In 2024, cybercriminals launched more than 105,000 deepfake attacks. That’s the equivalent of one every five minutes. Attackers no longer treat deepfake audio and video as novelties. They’re using them to commit fraud, steal data and sabotage companies. In manufacturing, where teams rely heavily on remote coordination and supply chain trust, the risk is especially severe.

Deception at Scale

Verizon’s 2025 Data Breach Investigations Report (DBIR) reveals that social engineering accounted for 22 percent of breaches in the manufacturing industry. Phishing alone was responsible for 19 percent of breaches. That means attackers don’t need to hack into systems or exploit software vulnerabilities to cause severe damage. They simply need to deceive a human being. 

With generative AI at their fingertips, they’re doing precisely that at scale and with alarming sophistication. 

Generative AI has supercharged these deception tactics, giving attackers tools to manipulate perception and exploit trust in ways traditional phishing never could. Chief among them: deepfakes.

Deepfakes are hyper-realistic synthetic media created with deep learning, which attackers use to impersonate executives, vendors or partners with uncanny accuracy. The CEO’s voice on a voicemail or a supply chain manager’s face in a video call can be enough to override internal skepticism and trigger costly actions.

A Multi-Pronged Threat to Manufacturing Operations

An analysis by Google DeepMind of nearly 200 public cases of generative AI misuse identified two main threat categories: the exploitation of generative AI capabilities (such as impersonation or the creation of fake video calls) and the compromise of generative AI systems through jailbreaking or adversarial prompts. In real-world attacks targeting businesses, the vast majority of incidents stem from exploitation, often using widely available consumer-level tools. 

One such case involved an employee at a multinational company in Hong Kong, who was tricked into transferring the equivalent of $25.6 million during a deepfaked video conference. Every individual on the call, including someone who appeared to be the company’s chief financial officer, was a computer-generated impostor. As Hong Kong police reported, “Everyone he saw was fake.”

In another example, a bank employee wired $35 million after receiving a voice call that mimicked a senior executive using AI-generated speech. 

Manufacturing depends on constant communication across teams, partners and time zones. Email, video calls and messaging platforms keep global supply chains moving, but they also provide attackers with numerous opportunities to pose as trusted contacts. That, combined with the fact that manufacturing’s digital footprint keeps growing, makes the sector especially vulnerable to deepfakes, particularly among smaller organizations.

Verizon’s DBIR shows that more than 90 percent of manufacturing organizations affected by breaches had fewer than 1,000 employees. Many of these companies run with small IT teams, constrained budgets and limited access to modern security tools. 

Combating Deepfake-Driven Cybercrime

There’s no single solution to the deepfake dilemma, but manufacturers can take critical steps to defend against this emerging threat. 

Start by adopting a zero trust mindset: “never trust, always verify.” Scrutinize all requests involving financial transactions, production changes or credential access — even if they appear to come from trusted contacts. If a request feels unusual, verify it through a separate channel before taking action. 
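As an illustration, the “verify through a separate channel” rule can be encoded as a simple policy gate that flags high-risk requests before anyone acts on them. The action names and dollar threshold below are assumptions for this sketch, not an industry standard:

```python
# Minimal sketch of a "never trust, always verify" gate for inbound requests.
# Action names and the dollar threshold are illustrative assumptions.

HIGH_RISK_ACTIONS = {"wire_transfer", "credential_reset", "production_change"}

def requires_out_of_band_check(action: str, amount_usd: float = 0.0) -> bool:
    """Return True if the request must be confirmed on a separate,
    pre-agreed channel (e.g., a phone number on file) before execution."""
    return action in HIGH_RISK_ACTIONS or amount_usd >= 10_000
```

The point of a rule like this is that it fires regardless of how convincing the requester looks or sounds, which is exactly the property deepfakes are designed to defeat.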

Employee education is another key defense. Google DeepMind’s research highlights the value of “prebunking,” proactively showing users how bad actors create deepfakes and what red flags to look for, such as unnatural facial movements, inconsistent lighting or voice-lip sync mismatches. Raising awareness helps teams resist manipulation, especially under pressure. 

Email is the primary entry point for these attacks. According to Barracuda’s 2025 Email Threats Report, one in four email messages is either malicious or unwanted spam, and 20 percent of companies experience at least one account takeover incident per month.

Once inside, attackers can impersonate trusted individuals and launch highly targeted spear phishing or business email compromise (BEC) attacks. Meanwhile, nearly half of all companies lack a properly configured DMARC (Domain-based Message Authentication, Reporting and Conformance) policy, and 77 percent aren’t actively preventing spoofed emails, leaving the door wide open for impersonation. As these tactics grow more convincing, traditional email security becomes insufficient on its own.
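Closing the DMARC gap is comparatively simple: a policy is published as a DNS TXT record on the organization’s domain. The record below is a minimal sketch; the domain and report mailbox are placeholders:

```text
; DNS TXT record published at _dmarc.example.com (domain and mailbox are placeholders)
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com; pct=100"
```

A common rollout path is to start with p=none in monitoring mode, review the aggregate reports, and move to p=quarantine or p=reject once all legitimate senders pass SPF or DKIM alignment.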

Manufacturing organizations should also limit the amount of executive video and audio content published online. Many deepfakes use publicly available footage, such as earnings calls, interviews or promotional content. Reducing that digital footprint makes it harder for attackers to build convincing impersonations. 

A layered security approach is essential. That includes email security, identity and access management, communication verification protocols, and behavior-based monitoring tools. Most deepfake-based attacks are part of broader campaigns that combine spoofed communications with fake contracts, fake scenarios or insider reconnaissance. 

Finally, manufacturers must strengthen their incident response playbooks. Once a deepfake is discovered or suspected, quick action can minimize financial loss and reputational damage. Studies show that most reputational fallout occurs within the first 24 hours of an incident.

Building Digital Trust in an Era of Synthetic Deception

Synthetic media isn’t inherently harmful. It’s widely used in engineering, training, marketing and simulation. But deepfakes used for fraud, impersonation or extortion represent a fast-growing threat to the global manufacturing industry.

Generative AI-powered tools empower cybercriminals to make phishing and impersonation attempts far more convincing and easier to execute. What once required technical skill now just takes the right tool and a few stolen clips. To stay resilient, manufacturers must rethink how they define and defend trust and recognize that the next attack might not come through code, but through a familiar face on screen.

Adam Khan is the VP, Global Security Operations at Barracuda MSP.
