The spread of false information is set to get much worse if Google’s botched Bard chatbot debut is any indication. Last week, the company used Twitter to promote its natural-language AI model with a post containing inaccurate information about the James Webb Space Telescope (JWST).
A brief animation in the ad (via Reuters) shows an example Q&A with Bard. The prompt asks, “What new discoveries from the James Webb Space Telescope can I tell my 9 year old about?” The chatbot promptly offers three suggestions, the last of which states: “JWST took the very first photos of a planet outside of our own solar system. These faraway worlds are referred to as ‘exoplanets.’ ‘Exo’ means ‘from outside.’” While the information about exoplanets is correct, the claim that the JWST was the first to photograph one is not. According to NASA, the European Southern Observatory’s Very Large Telescope (VLT) captured the first image of an exoplanet in 2004.
While erroneous information in a Twitter ad is unlikely to cause immediate harm, it’s tempting to read the error as a foreshadowing of the dangers of unleashing natural-language chatbots on the world. It echoes CNET’s initiative to write financial advice articles with an AI chatbot, which turned out to be riddled with blunders.
Because chatbots get so many things right, and deliver their answers with such confidence, anyone who doesn’t fact-check the replies can come away with false assumptions. Given the havoc that (non-AI-powered) disinformation has already wreaked on society, unleashing this sometimes mind-boggling technology before it can be trusted to deliver accurate information reliably and consistently could make for a wild ride, especially when errors like this one slip past Google’s own copy editors.