Today, Google is making its Bard AI chatbot accessible to users in the U.S. and U.K. through a waitlist.

Google was careful to note in its announcement that large language models (LLMs) like LaMDA, the model behind Bard, aren’t error-free and do make mistakes. For instance, Hsiao and Collins explained that because these models learn from a wide range of information that reflects real-world biases and stereotypes, those elements sometimes surface in their outputs.

They even shared an example of Bard getting something wrong. Asked to recommend a few easy indoor plants, Bard presented its ideas clearly, but some details were inaccurate, such as the scientific name of the ZZ plant.

Google stated that it’s important to be aware of such challenges and that quality and safety remain key considerations. To try to keep interactions constructive and on topic, “we’ve also built in guardrails, like capping the number of exchanges in a dialogue,” Hsiao and Collins noted. Exactly how many exchanges that cap allows is still unclear; we have contacted Google for more information and will update this piece as soon as we receive a response.