ChatGPT: A New Security Risk for Businesses?

A new report suggests that many bosses are banning ChatGPT and other generative AI tools out of fear of data leaks and similar cybersecurity incidents.

The report, which was conducted by enterprise generative AI platform Writer, polled 450 executives at large enterprises. The poll found that almost half (46%) believe someone in their company may have inadvertently shared corporate data with a generative AI tool.

Generative AI tools, such as ChatGPT, are trained on large amounts of data, including text, images, and code. If a generative AI tool is given access to sensitive data, it could generate output that contains that data. This could lead to a data leak, with serious consequences for the company.

For example, if a generative AI tool is given access to a company’s customer data, it could generate output that contains customer names, addresses, and credit card numbers. Criminals could then use this data to commit identity theft or other crimes.

Beyond data leaks, generative AI tools could also be used to produce malicious code — for example, code that infects a company’s computer systems with malware. That malware could then be used to steal data, disrupt operations, or even take control of the systems.

The report found that ChatGPT was the most banned generative AI tool, with 32% of respondents saying they had banned it, followed by CopyAI (28%) and Jasper (23%).

Despite the potential security risks, generative AI tools are still extremely popular. Almost half (47%) of respondents said that they use ChatGPT at work every day. CopyAI is used by 35% of respondents, and Anyword is used by 26% of respondents.

The most common use of generative AI tools is generating copy, such as ads, headlines, blogs, and knowledge-base articles. Other common uses include generating marketing materials, creating customer support content, and writing code.

Most firms don’t plan on sticking with the free version of generative AI tools for long. Almost three in five (59%) of respondents said they have purchased, or plan to purchase, at least one such tool this year, and a fifth (19%) are already using five or more generative AI tools.

The key selling proposition for generative AI tools is their ability to boost productivity. Respondents said that generative AI tools help them to improve employee productivity, generate higher-quality output, and save on costs.

“Enterprise executives need to take note,” said May Habib, Writer CEO and co-founder. “There is a real competitive advantage in implementing generative AI across their businesses, but it’s clear there’s a likelihood of security, privacy, and brand reputation risks.”

“We offer enterprises complete control – from what data LLMs can access to where that data and LLM is hosted,” Habib continued. “If you don’t control your generative AI rollout, you certainly can’t control the quality of output or the brand and security risks.”

The report’s findings suggest that businesses need to be aware of the potential security risks of generative AI tools. Businesses should take steps to mitigate these risks, such as carefully vetting the tools they use and implementing security measures to protect their data.
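One such security measure — controlling what data reaches an external AI tool — can be approximated with a pre-submission filter. Below is a minimal, illustrative Python sketch that redacts obvious sensitive strings (emails and card-like numbers) before a prompt leaves the company; the `redact` function and its regex patterns are hypothetical simplifications, not a production data-loss-prevention tool:

```python
import re

# Hypothetical patterns for illustration only -- real DLP tooling
# uses far more robust detection (checksums, context, ML models).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[A-Za-z]{2,}"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # 13-16 digit sequences
}

def redact(text: str) -> str:
    """Replace each sensitive match with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarise: alice@example.com paid with card 4111 1111 1111 1111."
print(redact(prompt))
```

A filter like this would sit between employees and the AI tool, so that even inadvertent pastes of customer records never reach the model.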