IBM Bob, a generative AI coding assistant currently in beta, has been flagged by security firm Prompt Armor for its vulnerability to indirect prompt injection. This type of attack occurs when the AI reads instructions hidden within external sources like emails, calendar invites, or website data. If a malicious prompt is embedded in a file that Bob analyzes, the AI can be tricked into performing unauthorized actions without the user’s direct input. These actions can include establishing persistent access for a hacker or running hidden scripts designed to compromise the host system.
Table of Contents
IBM Bob and system permission requirements
IBM Bob requires specific user-granted permissions to be exploited in this manner. The most critical risk arises if a user enables an “always allow” setting for the AI’s commands. While this setting is intended to streamline the development process by reducing repetitive prompts, it removes a vital security layer that would otherwise require human approval before executing potentially dangerous shell scripts. Security experts warn that while the tool is still in beta, developers should avoid granting broad permissions that allow the AI to interact with the system’s core terminal without oversight.
IBM Bob and potential attack payloads
IBM Bob could serve as a gateway for various cyberattacks if the injection vulnerability is successfully leveraged. According to researchers, the flaw allows for the delivery of arbitrary shell script payloads, which could deploy ransomware, steal login credentials, or assimilate the device into a botnet. Because Bob can be accessed through both a command-line interface (CLI) and an integrated development environment (IDE), the attack vectors vary from simple data exfiltration to full device takeover. IBM is expected to implement additional safeguards to mitigate these risks before the tool reaches general availability later in 2026.

