Meta, the parent company of Instagram and Facebook, has announced that it is rolling back its COVID-19 misinformation rules in countries where the pandemic is no longer considered a national emergency. The change will primarily affect the United States and other countries where public health emergency declarations have ended.
Last year, Meta asked its Oversight Board to weigh in on how the pandemic's evolution should shape its misinformation policies. This April, the board recommended that Meta continue removing false claims about COVID-19 that pose a direct risk of physical harm, while also urging the company to reevaluate which types of pandemic-related claims warrant removal.
Additionally, the advisory group suggested that Meta prepare for the possibility of the World Health Organization (WHO) lifting COVID-19's emergency status, emphasizing the need to protect freedom of expression and human rights in such circumstances. In May, the WHO did just that, lifting its COVID-19 emergency designation and prompting Meta to respond to the Oversight Board's recommendations.
In an updated blog post, Meta stated, “We will take a more tailored approach to our COVID-19 misinformation rules consistent with the Board’s guidance and our existing policies. In countries that have a COVID-19 public health emergency declaration, we will continue to remove content for violating our COVID-19 misinformation policies given the risk of imminent physical harm.” The company also noted that it is consulting with health experts to identify the specific types of misinformation that could still pose a significant risk.
With the WHO's global public health emergency declaration, which originally triggered Meta's worldwide COVID-19 misinformation rules, no longer in effect, the policy no longer applies on a global basis; enforcement now continues only in countries that maintain their own emergency declarations.
As the pandemic took hold, social media platforms came under mounting pressure to combat the spread of COVID-19 misinformation, particularly false claims about vaccines. Meta, Twitter, and YouTube were among the platforms that adopted policies to address COVID-19 falsehoods.
These rules have undergone revisions over time. For instance, in May 2021, Meta announced that it would no longer remove claims suggesting COVID-19 was “man-made.” As highlighted by the Oversight Board, between March 2020 and July 2022, Meta removed approximately 27 million Facebook and Instagram posts containing COVID-19 misinformation.
Twitter stopped enforcing its COVID-19 misinformation policy in November 2022, amid the organizational changes that followed Elon Musk's takeover of the company. YouTube, for its part, recently updated its misinformation policy to no longer prohibit videos featuring 2020 election denialism.
As global health circumstances continue to change, social media platforms face the ongoing challenge of balancing the fight against misinformation with the preservation of freedom of expression.