
Meta’s move away from fact-checking could allow more false or misleading content, content moderation expert says

John Wihbey, an associate professor of media innovation and technology at Northeastern University, says the policy changes could have downstream effects — not only in the U.S., but elsewhere around the world.

Meta founder and CEO Mark Zuckerberg described the shift as part of an effort to “get back to our roots around free expression.” AP Photo/Godofredo A. Vásquez

Meta’s move away from fact-checking in its content moderation practices could allow more hate speech and mis- or disinformation to circulate, a Northeastern University social media expert says.

Meta is adopting a model similar to X’s Community Notes system.

John Wihbey, an associate professor of media innovation and technology at Northeastern University, sees the move as the company repositioning itself ahead of President-elect Donald Trump’s inauguration. But third-party fact-checking, while difficult to scale on a platform with billions of users, “is an important symbol of commitment to trust and safety and information integrity,” Wihbey says.

It is “dangerous,” he says, to break from those norms at a moment when “the winds of authoritarian populism are blowing across the globe.”

In a video message, Meta founder and CEO Mark Zuckerberg described the shift as part of an effort to “get back to our roots around free expression,” noting, among other things, that the company’s fact-checking system has resulted in “too many mistakes and too much censorship.” 

He also cited the 2024 presidential election, describing the election of Trump as a “cultural tipping point” toward “once again prioritizing speech.” 

On X, the Community Notes model uses crowdsourced input from users to fact-check posts, usually in the form of added context. Wihbey described Zuckerberg’s announcement as confusing, noting that Meta’s third-party fact-checkers played a minimal role in day-to-day moderation compared with its sophisticated algorithmic tools, which can sometimes produce false positives or false negatives.

As part of the policy shift, Meta says it will scale back its content moderation algorithms. 

In addition, the company says that it wants to pivot to a more laissez-faire approach to civic or political content after tightening controls in recent years to curb the spread of mis- and disinformation. 

“In recent years we’ve developed increasingly complex systems to manage content across our platforms, partly in response to societal and political pressure to moderate content,” the company said in a statement. “This approach has gone too far. As well-intentioned as many of these efforts have been, they have expanded over time to the point where we are making too many mistakes, frustrating our users and too often getting in the way of the free expression we set out to enable.”

The policy changes could have downstream effects — not only in the U.S., but elsewhere around the world, warns Wihbey, whose forthcoming book, “Governing Babel: The Debate over Social Media Platforms and Free Speech – and What Comes Next,” delves into content moderation and free speech.  
