Last week, nonconsensual deepfake images of Taylor Swift began circulating online. Though X has policies prohibiting such content, it struggled to contain the spread. Swift’s fans took action by reporting the offending accounts and flooding related hashtags, which led to some suspensions, but the images persisted for days.
X said it was working to remove the content but acknowledged difficulty staying “ahead of the problem.” Critics argued the platform reacted too slowly given how widely the images had spread. By the weekend, searches for Swift’s name on X returned only error messages, a step taken to limit access to the images; X described this as a “temporary action” to prioritize safety.
The images likely originated in a Telegram group that uses AI tools to generate fake nude photos of women without their consent. The episode underscored broader concerns about generative AI; Microsoft CEO Satya Nadella said companies must balance safeguards against abuse with continued innovation in the technology. Overall, the incident showed that platforms are still struggling to moderate this kind of abusive content, though public pressure can push them toward more decisive action.