As AI-generated images and videos become more prevalent on Twitter, the social media giant is taking steps to combat potentially misleading media. To give users more context, Twitter is currently experimenting with a new feature called Community Notes for media. This feature will leverage the platform’s crowd-sourced fact checks, applying them to specific photos and video clips shared on the site.
Community Notes allows contributors with high ratings to apply contextual notes to images within tweets. Similar to notes on tweets themselves, these labels can provide additional details, such as indicating if a photo was created using generative AI or if it has been manipulated in any way. By offering this transparency, Twitter aims to help users identify and understand the nature of the media they encounter on the platform.
One of the key benefits of this feature is its potential to curb the viral spread of misleading photos. Twitter’s objective is for the applied notes to automatically appear on “recent and future” copies of the same image, even if they are shared by different users in new tweets. However, Twitter acknowledges that perfecting the image matching process will take time. The current approach prioritizes precision, which means not every visually similar image will be flagged. The company intends to refine the matching system to expand coverage while minimizing false matches.
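Twitter has not disclosed how its image matching works, but the precision-versus-coverage trade-off it describes is familiar from perceptual hashing, a common technique for near-duplicate image detection. The sketch below is purely illustrative, assuming a simple average-hash scheme: downscale an image to a tiny grayscale grid, threshold each pixel against the mean to get a bit string, and treat two images as matches only if their hashes differ in very few bits. A strict bit-difference threshold is what "prioritizing precision" looks like in practice. All function names and values here are hypothetical.

```python
# Illustrative only: Twitter has not published its matching method.
# This sketches average hashing, one common near-duplicate detector.

def average_hash(pixels):
    """pixels: 2D list of grayscale values (0-255) for a downscaled image.
    Returns a bit string: 1 where the pixel is above the mean, else 0."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

def hamming(h1, h2):
    """Number of bit positions where the two hashes differ."""
    return sum(a != b for a, b in zip(h1, h2))

# Two nearly identical 4x4 "images": one pixel is slightly altered.
img_a = [[10, 200, 10, 200],
         [200, 10, 200, 10],
         [10, 200, 10, 200],
         [200, 10, 200, 10]]
img_b = [row[:] for row in img_a]
img_b[0][0] = 15  # a small edit, e.g. recompression noise

# A strict threshold favors precision (few false matches) over coverage.
THRESHOLD = 2
distance = hamming(average_hash(img_a), average_hash(img_b))
is_match = distance <= THRESHOLD  # True: the small edit survives hashing
```

In this scheme, tightening `THRESHOLD` reduces false matches at the cost of missing some visually similar copies, which mirrors the balance Twitter says it intends to refine over time.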
From AI-generated images to manipulated videos, it’s common to come across misleading media. Today we’re piloting a feature that puts a superpower into contributors’ hands: Notes on Media
Notes attached to an image will automatically appear on recent & future matching images. pic.twitter.com/89mxYU2Kir
— Community Notes (@CommunityNotes) May 30, 2023
It’s important to note that the track record of Community Notes is not flawless. While the feature can contribute nuanced fact checks and debunk false claims, contributors themselves have pointed out that it is not immune to errors or to repeating common misconceptions. Twitter remains committed to addressing these challenges and improving the accuracy and reliability of the feature.
In its initial testing phase, Twitter is rolling out notes for media specifically for tweets containing a single image. However, the company plans to expand the feature to tweets with multiple images and videos in the future. Twitter’s proactive approach to the rise of generative AI and the spread of misinformation aligns with broader industry efforts to tackle these issues.
Twitter is not alone in grappling with the challenges posed by AI-generated content and misinformation. Google, for example, has recently introduced features that enable users to track the history of an image in search, aiding in determining its authenticity.
As social media platforms continue to navigate the complex landscape of emerging technologies and misinformation, these proactive measures aim to empower users with the tools they need to make informed judgments about the media they encounter online. By providing transparency and context, Twitter and other platforms seek to build a safer and more trustworthy online environment.