The deployment of large language AI models like ChatGPT and GPT-4 may face a temporary halt due to a complaint filed by the nonprofit research organization Center for AI and Digital Policy (CAIDP) with the Federal Trade Commission (FTC). The complaint alleges that OpenAI is violating the FTC Act by releasing biased and deceptive models that pose risks to privacy and public safety. According to CAIDP, these models also fail to meet the Commission's guidelines for transparency, fairness, and explainability.
CAIDP has called on the FTC to investigate OpenAI and suspend the release of its large language models until they comply with agency guidelines. The organization is also asking the FTC to require independent reviews of OpenAI's products and services before launch, and hopes the agency will create an incident reporting system and formal standards for AI generators.
OpenAI has not yet commented on the complaint, and the FTC has declined to comment. CAIDP president Marc Rotenberg was among the signatories of an open letter urging OpenAI and other AI researchers to pause work for six months to allow time for ethics discussions. OpenAI co-founder Elon Musk also signed the letter.
Critics of ChatGPT, Google Bard, and similar models have raised concerns about inaccurate statements, hate speech, and bias in their output. CAIDP also notes that users cannot reproduce results from these models. OpenAI itself warns that AI can "reinforce" ideas regardless of whether they are true, and while upgrades like GPT-4 are more reliable, the concern is that people will rely on the AI without verifying its output. The FTC will now have to decide whether these concerns warrant action, including a temporary halt to the deployment of large language AI models.