Microsoft reportedly published, and then retracted, an AI-generated article that recommended a Canadian food bank as a tourist attraction. The piece appeared on Microsoft Start, the company’s AI-aggregated news service, and its suggestion that visitors treat the food bank as a sightseeing destination was widely criticized as inappropriate. Microsoft removed the article and is investigating how it made it through its review process.
The article’s listed author was “Microsoft Travel,” suggesting that little or no human involvement went into its creation, even though Microsoft Start says humans oversee the algorithms that sort through content from its partners. The incident highlights the challenges and risks of relying on AI-generated content in news and other media.
Other media outlets have run into similar problems with AI-generated content, including factual errors and inappropriate recommendations. While some companies are embracing AI-generated content, concerns remain about its accuracy and quality, and about the ethics of using AI in journalism and publishing.