Hackers are using the "Invite your team" feature in OpenAI to slip past your company's security

It was only a matter of time before the tools we use to be more productive became the very weapons used against us. Cybersecurity researchers have recently flagged a sophisticated new trend where bad actors are leveraging the Invite your team feature within OpenAI to bypass traditional email defenses. By using a legitimate service to send malicious invites, hackers are finding a way into corporate inboxes that would normally block suspicious external links.

The beauty of this attack, from a hacker's perspective at least, is its simplicity. Because the email genuinely comes from a trusted domain like openai.com, it does not trigger the red flags that IT departments spend thousands of dollars on tools to catch.

How the legitimate invite becomes a trap

The mechanics of the attack are surprisingly straightforward. A threat actor signs up for a legitimate ChatGPT Team or Enterprise account. Once they have access, they use the Invite your team feature to send out bulk invitations. However, instead of inviting actual colleagues, they target employees at a specific company they want to breach.

The email lands in the target’s inbox looking perfectly official. It has the correct branding, the right sender address, and it likely bypasses the spam folder entirely. The danger lies in the customized “team name” or the destination link. Attackers are naming their “teams” things like “Urgent Payroll Update” or “Mandatory Security Training” so that the automated email from OpenAI carries their malicious message for them.

Why your email filters are failing

Most modern email security systems rely on reputation scores. If an email comes from a known malicious server, it gets blocked. But openai.com is one of the most reputable domains on the planet right now. When the Invite your team feature is abused, the security software sees a valid invitation from a multi-billion-dollar company and waves it through.
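To see why reputation-only filtering fails here, consider a minimal sketch of such a check. The reputation table, default score, and threshold below are illustrative assumptions, not any vendor's actual implementation:

```python
# Illustrative sketch of a reputation-only email filter.
# The scores and threshold are made up for demonstration; real
# gateways consume large, constantly updated reputation feeds.

DOMAIN_REPUTATION = {
    "openai.com": 0.99,            # well-known, trusted sender
    "shady-mailer.example": 0.05,  # known-bad infrastructure
}

BLOCK_THRESHOLD = 0.5

def reputation_filter(sender_domain: str) -> str:
    # Unknown domains get a cautious default score.
    score = DOMAIN_REPUTATION.get(sender_domain, 0.3)
    return "deliver" if score >= BLOCK_THRESHOLD else "block"

# A genuine invite relayed through openai.com sails through,
# regardless of what the attacker typed into the team name.
print(reputation_filter("openai.com"))            # deliver
print(reputation_filter("shady-mailer.example"))  # block
```

The filter never looks at the message body, which is exactly the blind spot this campaign exploits: the malicious payload lives in attacker-controlled fields inside an otherwise legitimate email.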

Once a user clicks the button in the email, they are often redirected to a fake login page designed to harvest their credentials. Since the user was already expecting to deal with an OpenAI related task, they are much more likely to enter their username and password without a second thought. This is social engineering at its most effective because it hitches a ride on a tool that employees are already encouraged to use.
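One practical check against that redirect step is to verify where a link actually points before trusting it. Here is a hedged sketch using only the Python standard library; the allow-list of hosts is an assumption for illustration:

```python
# Sketch: does a link in an invite email actually point to an
# OpenAI-controlled host? The allow-list below is an assumed
# example, not an official list of OpenAI domains.
from urllib.parse import urlparse

TRUSTED_HOSTS = {"openai.com", "chatgpt.com"}  # illustrative allow-list

def link_is_trusted(url: str) -> bool:
    host = urlparse(url).hostname or ""
    # Accept the domain itself or its subdomains, nothing else.
    return any(host == h or host.endswith("." + h) for h in TRUSTED_HOSTS)

print(link_is_trusted("https://invite.openai.com/accept?id=123"))   # True
print(link_is_trusted("https://openai.com.login-portal.example/"))  # False
```

Note the second example: a lookalike host that merely *starts* with "openai.com" fails the check, because the comparison is anchored to the end of the hostname, which is the part an attacker cannot forge.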

Protecting your business from the inside out

So, how do you stop a threat that looks exactly like a legitimate business process? The first step is awareness. Employees need to know that just because an email comes from a real “openai.com” address does not mean the person on the other end is trustworthy.

If your organization uses ChatGPT, it is worth establishing a clear internal protocol for how new users are added to the team. If an employee receives an unexpected request through the Invite your team feature, they should verify it through a separate communication channel like Slack or a quick phone call before clicking any links.

Security teams should also look into configuring their email gateways to flag any OpenAI invitations that originate from unrecognized account IDs or team names. It is a cat-and-mouse game, but staying informed is the best defense we have against these evolving tactics.
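A rule like that could inspect the content of the invite rather than its sender reputation. The sketch below is one possible heuristic, with the keyword list and sender check chosen as assumptions based on the lure names reported above:

```python
import re

# Hypothetical lure keywords, modeled on team names reported in
# these campaigns ("Urgent Payroll Update", "Mandatory Security
# Training"). A real deployment would tune and expand this list.
SUSPICIOUS_PATTERNS = [
    r"\bpayroll\b",
    r"\burgent\b",
    r"\bmandatory\b",
    r"\bsecurity training\b",
    r"\bpassword\b",
]

def flag_invite(sender: str, team_name: str) -> bool:
    """Flag invites from the legitimate domain whose team name reads like a lure."""
    if not sender.lower().endswith("@openai.com"):
        return False  # this rule only targets the invite-abuse case
    text = team_name.lower()
    return any(re.search(p, text) for p in SUSPICIOUS_PATTERNS)

print(flag_invite("noreply@openai.com", "Urgent Payroll Update"))  # True
print(flag_invite("noreply@openai.com", "Marketing Analytics"))    # False
```

Flagged messages could be quarantined or tagged with a banner for review rather than blocked outright, since legitimate invites will sometimes trip a keyword heuristic.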

OpenAI has been notified of these types of abuses and continues to refine its automated detection for suspicious account activity. Currently, there is no “off switch” for the Invite your team feature that administrators can use globally, so the burden of defense falls on user education and granular email filtering. Most security firms recommend enforcing Multi-Factor Authentication (MFA) across all corporate accounts to mitigate the risk if a password is successfully harvested through these phishing attempts.