SentinelOne wants to make sure the AI you’re adding to the business doesn’t quietly become the next breach.
The security company says it’s launching a batch of new AI-focused features that cover two fast-growing problems: protecting AI agents and using AI to speed up day-to-day security work. The updates are being shown at RSA Conference 2026 (booth N-5863), and include new products for agent governance and AI red teaming, plus a generally available “one-click” investigation feature inside its Singularity platform.
Table of Contents
Securing the agents you didn’t know you had
As teams experiment with autonomous “agents” that can call tools, move data around, and take actions in business systems, the risk profile changes. SentinelOne’s new Prompt AI Agent Security is pitched as a discovery and governance layer for these agentic workflows — giving security teams visibility into what agents are running, what they’re connecting to, and which actions should be blocked or allowed.
The company says it can enforce policy on agent interactions in real time and flag risky behavior before it turns into an incident — like an agent sending corporate data to an external endpoint or chaining actions to escalate privileges across systems.
Testing AI apps like they’re production systems
The other half of the announcement is Prompt AI Red Teaming, which aims to help organizations pressure-test internal and customer-facing AI applications. Think prompt-injection attempts, jailbreaks, privilege escalation, and data poisoning — the kinds of attacks that don’t look like classic malware, but can still leak data or trigger unintended actions.
SentinelOne’s argument is that traditional app security testing doesn’t map cleanly to agent-driven software, so AI-specific adversarial testing needs to be baked into the development lifecycle and run continuously as models and prompts change.
One-click investigations, now for everyone
SentinelOne is also making its Purple AI Auto Investigation generally available. The idea: an analyst clicks once, and the system pulls evidence across endpoint, cloud, identity, and other telemetry, then assembles an attack timeline and a verdict that can kick off remediation via automation — with an analyst still in the loop.
If it works as described, it’s a direct response to the modern SOC’s core pain: too many alerts and not enough time to do deep forensics on each one.
Why this matters
Security vendors are racing to sell “AI for security,” but enterprise buyers are increasingly asking the other question: how do we secure the AI we’re deploying? Agents, in particular, can become a new shadow-IT layer — capable of touching sensitive systems and data with fewer guardrails than traditional software. Tools that inventory, govern, and test these systems will likely become table stakes as AI moves from demos to core operations.
Background: SentinelOne (NYSE: S) sells its Singularity security platform across endpoint, cloud, and identity, and has been pushing Purple AI as an automation and investigation layer for security teams.
