AI Deepfake Attacks are the biggest data security risk businesses face right now

Artificial intelligence (AI) and deepfakes are proving to be a security nightmare for businesses everywhere, with new research claiming almost two-thirds (61%) of firms see AI as their top data security risk. The Thales 2026 Data Threat Report found that access control and management sit at the heart of this problem. But what does that actually mean in practice, and what should your business do about it?

In this post, we are going to unpack the findings, adds real-world context, and give you practical steps to reduce your exposure to AI deepfake attacks and AI-driven insider threats.

Why AI Has Become the Top Data Security Risk

Enterprises are adding AI into workflows, analytics, customer service, and development pipelines at a rapid pace. To make these tools work, they need broad, automated access to internal data and systems. The problem is that the controls put in place for human employees are almost always stricter than those applied to AI tools.

Think about it this way: a new employee at most companies has to go through identity verification, onboarding checks, role-based access limits, and regular access reviews. An AI system plugged into the same company’s data pipeline often gets far wider permissions with far less scrutiny. It becomes a trusted insider almost by default.

AI industry analysts have described this shift as AI agents becoming integral members of the corporate workforce, which requires a fundamental rethinking of how identity and access management works. Traditional identity and access management (IAM) tools were built for humans. They were not designed to handle autonomous systems that make decisions, call APIs, and touch multiple databases without a human approving each step.

 

 

What this looks like in practice

A company deploys an AI assistant for its finance team. The AI needs access to invoices, payment systems, and supplier records to do its job. Because it is automated, it gets broad permissions with a single API key rather than granular, time-limited credentials. If that key is compromised, or if the AI is manipulated through a technique called prompt injection, an attacker now has direct access to financial systems. A single compromised API key can expose entire training datasets, while a successful prompt injection attack can bypass years of security hardening.

The Scale of the AI Deepfake Attack Problem

Beyond the access control issue, AI is also being used as a weapon against businesses from the outside. The numbers here are striking.

Deepfake files surged from 500,000 in 2023 to a projected 8 million in 2025, with fraud attempts spiking by 3,000% in 2023 alone. Deepfake fraud cases surged 1,740% in North America between 2022 and 2023, with financial losses exceeding $200 million in Q1 2025 alone. The Thales report found that nearly 60% of companies reported experiencing AI deepfake attacks. These are not theoretical threats. They are happening to businesses across every sector right now.

What a deepfake attack actually looks like at work:

  1. Voice cloning to authorise payments: A finance employee receives a call from what sounds exactly like the CFO asking them to process an urgent wire transfer. The voice is AI-generated using just a few minutes of publicly available audio.
  2. Fake video calls: Attacks now include real-time voice clones of executives issuing payment instructions and manipulated video calls with synthetic faces and voices.
  3. Fake job candidates: Employment fraud is escalating in the remote workforce as generative AI tools generate hyper-tailored resumes and deepfake candidates capable of passing interviews in real time, giving bad actors access to sensitive internal systems.
  4. Reputational attacks: The Thales report found 48% of companies reporting reputational damage tied to AI-generated misinformation. Fabricated videos or statements attributed to executives can move stock prices, destroy customer trust, or trigger regulatory investigations.

Engineering firm Arup lost $25 million after a deepfake video call convinced an employee they were speaking with senior colleagues authorising a major transfer. The attacker had replicated multiple executives simultaneously in a single call. A UK-based company lost the equivalent of roughly $25 million in a separate CEO fraud attack using AI-generated deepfakes during a virtual meeting.

Voice cloning technology now requires just 20 to 30 seconds of audio, while convincing video deepfakes can be created in 45 minutes using freely available software.

 

 

Why Businesses Are Struggling to Detect These Attacks

You might assume that trained staff or detection software would catch deepfakes before any damage is done. The reality is far less reassuring. State-of-the-art automated detection systems experience accuracy drops of 45 to 50% when confronted with real-world deepfakes compared to laboratory conditions. Human ability to identify deepfakes hovers at just 55 to 60%, barely better than random chance.

Awareness among business leaders is also a serious gap. Just 13% of companies have anti-deepfake protocols in place, and 25% of executives have little or no familiarity with deepfakes at all.

Only 14% of organisations feel very prepared to manage the risks associated with generative AI.

The Insider Threat Problem: AI Tools With Too Much Access

As the Thales report notes, AI can be a latent malicious insider even when there is no external attacker involved. This happens in two main ways.

Accidental data exposure: An AI assistant given access to the entire company file system might surface a confidential document in response to a routine employee question. It does not need to be hacked to create a data leak. Wide permissions plus poorly scoped queries are enough.

Prompt injection attacks: A bad actor can embed hidden instructions inside a document, email, or webpage that an AI agent reads as part of its workflow. The AI then follows those instructions, potentially exfiltrating data or taking damaging actions, without any human realising what has happened. The first documented large-scale cyberattack executed by agentic AI occurred in September 2025, where AI systems performed 80 to 90% of the attack work with minimal human intervention, making thousands of requests per second and targeting approximately 30 global organisations.

“Insider risk is no longer just about people. It is also about automated systems that have been trusted too quickly,” says Sebastien Cano, Senior Vice President, Cybersecurity Products at Thales. “When identity governance, access policies, or encryption are weak, AI can amplify those weaknesses across corporate environments far faster than any human ever could.”

 

 

Practical Steps to Reduce Your Exposure to AI Deepfake Attacks

The good news is that there are concrete steps you can take without waiting for new regulation or massive security budgets.

Apply least privilege access to all AI tools – Every AI system in your business should have the minimum permissions needed to complete its specific task. An AI that handles customer service queries does not need access to payroll data. Restrict user, API, and system access based on necessity to minimise the risk of unauthorised modifications.

Give AI agents unique, trackable identities – Each agent should receive a unique identity with dedicated credentials and policies, so that every action it takes can be traced, audited, and revoked if needed.

Implement out-of-band verification for high-stakes actions – Never allow a single channel, whether a phone call, video call, or email, to authorise payments, system access changes, or sensitive data transfers. Require a second, pre-agreed channel for confirmation. This single step neutralises most deepfake fraud attempts.

Use short-lived, task-specific tokens for AI access – Enterprises can restrict access by issuing narrow, task-specific tokens with short lifespans, which limits the agent’s capabilities and reduces the potential impact of misconfigurations.

Train staff specifically on deepfake and AI social engineering threats – General security awareness training is no longer enough. Training should cover recognition of AI-generated and deepfake content, multi-channel phishing tactics, and reporting and escalation processes for suspicious communications, and should be frequent and scenario-based.

Audit your AI tools’ access permissions regularly – Most businesses that deploy AI tools never revisit the permissions they granted at setup. Schedule quarterly access reviews for all AI systems, just as you would for privileged human users.

Build an AI-specific incident response plan – Organisations should develop an AI incident response plan to prepare for security breaches and ensure rapid mitigation. This means knowing who is responsible, what gets isolated first, and how you communicate with customers and regulators if an AI-related breach occurs.

Move toward a Zero Trust architecture – Zero Trust means no system, human or AI, is automatically trusted. Every access request is verified, every time. Key elements include continuous authentication, risk-based access, device posture checks, and least privilege access with strong role-based controls.