Snap announced today that it's bringing an OpenAI-powered chatbot, similar to ChatGPT, to Snapchat. Called "My AI," the experimental feature will initially be available only to Snapchat+ subscribers, who pay $3.99 per month, though the company plans to eventually open it up to all users. The bot launches this week.
My AI will appear in the app as a regular Snapchat user profile, suggesting that the company is pitching it as a virtual friend rather than a general-purpose writing machine. "The big idea is that we're going to talk to AI every day, in addition to talking to our friends and family," CEO Evan Spiegel told The Verge. "And as a messaging service, we're well positioned to do that." Once it's live, the bot will be pinned to the app's Chat tab above conversations with friends.
According to Snap, the bot is powered by "the latest version of OpenAI's GPT technology" that Snap has customized for Snapchat. That likely refers to OpenAI's Foundry, a recently announced, invitation-only developer program for deep-pocketed customers that offers access to GPT-3.5, the more advanced model that ChatGPT is built on. OpenAI's public API currently supports only GPT-3, an older and less capable model. We've asked Snap to clarify which model it's using and will update this story if we hear back.
Snap's chatbot will have guardrails to keep it in line with the platform's trust and safety guidelines, hopefully sparing it the fate of CNET's AI-written articles, the AI Seinfeld experiment, and other AI chatbot train wrecks. My AI is reportedly trained to avoid swearing, violence, sexually explicit content, and opinions about politics. Snap also plans to keep fine-tuning the model as more people use it and flag inaccurate or inappropriate responses. (You can do this by pressing and holding on an offending message and submitting feedback.)
Even with those safeguards, Snap's bot could still turn into a dumpster fire of misinformation and harmful content. "As with all AI-powered chatbots, My AI is prone to hallucination and can be tricked into saying just about anything. Please be aware of its many deficiencies and sorry in advance!" the company wrote in its announcement post. "While My AI is designed to avoid biased, incorrect, harmful, or misleading information, mistakes may occur."