A group of 20 tech companies, including OpenAI, Google, Meta, Amazon, and Microsoft, signed an agreement Friday to help prevent AI-generated deepfakes from deceiving voters in the 2024 elections taking place in more than 40 countries. However, critics worry that the pact's vague language and lack of enforcement mechanisms mean it doesn't go far enough.
Google, OpenAI, and Microsoft Join Forces to Tackle AI-Generated Deepfakes Ahead of 2024 Elections
The signatories pledged to develop tools to detect deepfakes, assess risks in their AI models, detect the distribution of deceptive content on their platforms, address such content when found, collaborate across the industry, be transparent about their efforts, and work with civil society groups and academics.
The accord covers AI-generated audio, video, and images that deceptively fake or alter the appearance, voice, or actions of political candidates and election officials, or that give voters false information about how to vote. The companies say they will share detection tools and educate the public.
Notably absent from the pact is Midjourney, whose AI image generator produces strikingly realistic fake photos; the company has previously said it may ban all political images during election season. Apple also did not sign, likely because it has not yet released any generative AI products.
Experts say that while the principles sound promising, voluntary commitments without enforcement may not prevent the worst abuses of AI to influence elections. Deepfakes have already been used in recent US campaigns.
Tighter restrictions may come from governments such as the EU, which reached a provisional agreement on its AI Act in December. The divided US Congress, however, has yet to act. “AI didn’t create election deception, but we must ensure it doesn’t help deception flourish,” said Microsoft president Brad Smith.