Gen AI is quickly becoming a major security headache for businesses, but there are clear steps to stay safe

Companies across every sector are racing to roll out Gen AI systems and autonomous agents. The speed of adoption has created pressure to deliver immediate productivity gains, but it has also opened the door to a new wave of data exposure. A new Proofpoint report warns that these models are rapidly becoming one of the biggest insider risks in modern organisations. Two in five companies listed data loss through public or enterprise Gen AI tools as a top concern. More than a third worry that confidential material fed into these tools may be used for AI training.

AI agents behave like privileged users, and that is a real hazard

The report highlights a simple but uncomfortable reality. AI agents often operate with elevated access rights. They read files across different systems, automate tasks that span teams, and interact with sensitive data by default. That makes them powerful, but it also turns them into potential single points of failure. Thirty-eight percent of respondents classified unsupervised data access by these agents as a critical threat. Worse still, more than half admitted they lack the visibility or controls needed to monitor what these systems are doing. In plain terms, many companies have deployed AI tools and allowed them to roam freely across their data.
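The fix the report implies is deny-by-default scoping rather than blanket access. As a minimal sketch of that idea (the agent names, paths, and the `authorize_read` helper below are all hypothetical illustrations, not any real product's API):

```python
# Hypothetical per-agent allow-list: each agent may only read paths
# inside the scopes it genuinely needs, and everything else is denied.

AGENT_SCOPES = {
    "invoice-bot": ("/finance/invoices/",),
    "hr-summariser": ("/hr/policies/",),
}

def authorize_read(agent: str, path: str) -> bool:
    """Deny by default: allow a read only when the path falls
    inside one of the agent's registered scope prefixes."""
    scopes = AGENT_SCOPES.get(agent, ())
    return any(path.startswith(prefix) for prefix in scopes)

# An agent roaming outside its scope is simply refused.
print(authorize_read("invoice-bot", "/finance/invoices/2024/q1.pdf"))  # True
print(authorize_read("invoice-bot", "/hr/policies/salaries.xlsx"))     # False
```

Real deployments would enforce this at the identity or storage layer rather than in application code, but the principle is the same: an agent without an explicit grant gets nothing.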

Traditional security controls are already strained

Proofpoint’s chief strategy officer Ryan Kalember explains the broader issue. The volume of data continues to grow. Insider threats remain constant. AI adds another layer of uncertainty. Fragmented tools and narrow monitoring methods create blind spots. When companies cannot see how information moves between humans and AI systems, they cannot reliably protect it. Kalember says future data protection depends on security platforms that understand context, adapt in real time, and cover both human and agent behaviour.

Humans remain the largest source of data loss

Despite the attention that AI receives, people remain the most common cause of security incidents. Sixty-six percent of organisations said their most serious data loss events came from careless employees or external contractors. Thirty-one percent cited compromised accounts, and a third pointed to malicious insiders. Proofpoint's analysis shows that a very small number of users drive the majority of incidents: just one percent of users account for seventy-six percent of data loss events. That pattern reinforces why behavioural analytics and continuous monitoring matter far more than simply adding more security checkpoints.

How businesses can reduce risk as AI adoption expands

Proofpoint recommends a shift toward behaviour-driven security models. These systems focus on how data is used, which users are acting unexpectedly, and when AI agents are accessing material they should not touch. The goal is to identify early signs of risk instead of waiting for a breach to reveal the problem. Many companies are already moving in this direction: sixty-five percent have deployed AI-enhanced data security capabilities. Even so, most organisations remain early in their security maturity, and the rapid expansion of Gen AI tools will continue to test existing defences.
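At its simplest, "which users are acting unexpectedly" means comparing an account's activity against its own baseline. A toy sketch of that idea, with an assumed threshold and invented figures purely for illustration:

```python
# Illustrative behaviour-based check: flag an account whose data
# movement today sits far above its own historical baseline.
# The z-score threshold and the numbers are assumptions for the example.
from statistics import mean, pstdev

def flag_unusual(history_mb: list[float], today_mb: float, z: float = 3.0) -> bool:
    """Return True when today's volume exceeds the account's
    historical mean by more than `z` standard deviations."""
    mu, sigma = mean(history_mb), pstdev(history_mb)
    if sigma == 0:
        return today_mb > mu  # flat baseline: any increase stands out
    return (today_mb - mu) / sigma > z

# A user who normally moves about 10 MB a day suddenly exporting 500 MB.
baseline = [9.0, 11.0, 10.0, 12.0, 8.0]
print(flag_unusual(baseline, 500.0))  # True
print(flag_unusual(baseline, 10.5))   # False
```

Production systems layer in far richer signals (time of day, destination, file sensitivity), but the per-user baseline is the core shift away from one-size-fits-all checkpoints.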

The next phase of AI adoption demands tighter control

The takeaway from Proofpoint’s findings is straightforward. Gen AI can deliver strong productivity gains, but it introduces new attack surfaces that companies cannot afford to ignore. AI agents should not operate with broad access without oversight. Employees need continuous guidance on safe data handling. Security teams need unified visibility across human activity and autonomous systems. The companies that take this seriously now will avoid the costly lessons that often follow when new technology is adopted without guardrails.