As America gears up for the 2024 presidential election, AI firm OpenAI has laid out their game plan to fight misinformation. A big part of it? Cracking down on fake images and videos made using AI.
OpenAI plans to embed digital credentials in images created by its DALL-E 3 system, using the C2PA standard for content provenance. The credential basically acts like an authenticity stamp: cryptographically signed metadata recording that the image came from an AI tool. So if you run across a questionable political image, you'll be able to check whether it carries an AI-generated label. It's not bulletproof, though. Metadata can be stripped, so a missing credential doesn't prove an image is human-made.
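For the curious, here's what checking for one of these credentials might look like in practice. This is a minimal Python sketch under a couple of stated assumptions: C2PA credentials are embedded in a JUMBF metadata box inside the image file, and the `has_c2pa_manifest` helper here is hypothetical, a crude byte-level heuristic that only detects the marker. It is not a real verifier; validating the cryptographic signature requires a full C2PA tool such as the open-source c2patool.

```python
# Crude heuristic: scan an image file's raw bytes for C2PA/JUMBF markers.
# This only suggests that provenance metadata is embedded; it does NOT
# verify the cryptographic signature. Real verification needs a full
# C2PA validator (e.g., the open-source c2patool CLI).
import sys
from pathlib import Path


def has_c2pa_manifest(path: str) -> bool:
    """Return True if the file appears to contain an embedded C2PA manifest."""
    data = Path(path).read_bytes()
    # C2PA manifests live in JUMBF boxes; the "jumb" box type and the
    # "c2pa" manifest-store label are rough tell-tales, not proof.
    return b"jumb" in data and b"c2pa" in data


if __name__ == "__main__":
    for image in sys.argv[1:]:
        status = "credential marker found" if has_c2pa_manifest(image) else "no marker"
        print(f"{image}: {status}")
```

The byte scan is deliberately dumb: a production check would parse the JUMBF box structure and walk the manifest's signature chain rather than pattern-matching raw bytes.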
This isn’t a totally new idea, though. Google and Meta have floated similar provenance techniques for flagging AI-generated misinfo ahead of the next election. But OpenAI thinks their method will be more airtight. We’ll see if voters buy it.
Beyond just images, OpenAI says they’ll give ChatGPT users more context around the news stories and voting info it provides. For example, when asked about registration or polling places in the US, ChatGPT will now point users to CanIVote.org, the authoritative voting-information site run by the National Association of Secretaries of State. The goal is to cut down on accidental misinformation.
And OpenAI promises they’ll keep banning AI-generated political propaganda and deepfakes built with their tools. Violators may even get their accounts restricted.
It’s an ambitious plan, especially for a company as young as OpenAI, and plenty could still go wrong before November 2024. But with fears of AI-powered misinfo running higher than ever, you have to appreciate them trying to get ahead of things. We’ll find out whether these precautions actually move the needle once election season heats up.