January 22, 2025
Fact-Checking is Not Censorship!
As Meta scales back curation of content on its platforms, remember this important fact: Fact-Checking is Not Censorship!
Farewell warning
In his farewell address, President Biden warned Americans that information pollution threatens to smother our democracy. “Americans are being buried under an avalanche of misinformation and disinformation enabling the abuse of power. The free press is crumbling. Editors are disappearing. Social media is giving up on fact-checking. The truth is smothered by lies told for power and for profit.”
Biden was responding not only to a resounding chorus of observers around the globe alarmed by the explosion of AI-boosted disinformation, but also to the news Meta CEO Mark Zuckerberg made when he announced that his company would change how it handles the problem. After years of building up a global network of fact-checkers to prevent hoaxes, disinformation campaigns and conspiracy theories from running wild on its platforms, Meta will stop fact-checking content on Facebook, Instagram and Threads.
Meta's History of Disinformation & Fact-Checking
Public perception fears fuel early voluntary fact-checking efforts
Since 2016, when the Pizzagate conspiracy spread like wildfire and revealed how bad faith communicators exploited Facebook’s engagement-hungry algorithm to push disinformation, the company has been trying to prove it deserves the public trust. A year later, when studies showed Facebook’s algorithm helped fuel a genocide in Myanmar, the company continued its public relations scramble.
Facebook’s news pollution problem, especially in countries with multiple languages, was always one of scale. How could algorithmically driven distribution platforms do what editorial staffs at newspapers have always done: namely, use their First Amendment right to curate the information they publish to make their brands more trustworthy? Despite the challenges, Facebook tried, using machine learning and A.I. tools along with its fact-checking network to clean things up.
They didn’t have to. Section 230 of the Communications Decency Act of 1996 exempts digital platforms from accountability for publishing things like libel or hate speech because “users” create the content. Content distribution platforms like Twitter or Facebook simply profit from feeding it to people to engage with while they are exposed to advertising. Yet Facebook worried that if the public perceived that it profited from amplifying and distributing divisive disinformation by prioritizing engagement metrics that helped it mine user data, its stock valuation would fall. Like many media companies before it that feared accountability and oversight following a scandal, Facebook promised to self-regulate to win back the public trust.
The rebranded Meta partners with misinformation experts and claims to create a safer environment
Since then, Facebook has rebranded itself as Meta and has continued to tack back and forth in its approach to disinformation. After January 6, when researchers found that Facebook had been used by the “Stop The Steal” promoters to amplify the lie that the election had been stolen and incite insurrection, it responded vigorously. Like Twitter, which also played a crucial role in the insurrectionists’ communication campaign, Meta poured money into trust and safety teams that identified and suspended accounts that were misinformation super-spreaders – those who repeatedly pushed demonstrably false and harmful content.
To identify harmful content, Meta worked with organizations credentialed by the International Fact-Checking Network in 119 countries around the world. According to Meta’s own analysis, it worked. Fact-checking, down-ranking misinformation and deplatforming super-spreaders limited the reach of harmful content and created a safer environment.
Meta backs off efforts, citing cost and censorship concerns; implements Community Notes approach
Yet though it helped contain news pollution, fact-checking was costly. For the last several years, the company has decreased its investment in fact-checking. So though troubling to the tens of thousands of people who worked to make Meta’s platforms more trustworthy, the announcement was not a complete surprise.
Meta’s chief global affairs officer, Joel Kaplan, indicated that the break from fact-checking was “starting in the US” because “a program intended to inform too often became a tool to censor.” Better to err on the side of disinformation, it would seem, than be accused of censorship.
Instead of fact-checking, Meta will now use the Community Notes model that Elon Musk adopted after he bought Twitter, fired the trust and safety teams, and reinstated bad faith super-polluters like Alex Jones.
Who benefits from Community Notes?
Does the Community Notes method work? If you look at what’s happened to X, which has decayed into a cesspool of A.I.-generated garbage and driven away millions of users, the answer is another question: “Works for whom?”
What was Musk’s solution to the failure of his Community Notes strategy to limit the reach of disinformation? Make it impossible for researchers to study the spread of disinformation on the platform by shutting off access to its API. Where Twitter used to be a place where communication and misinformation scholars could study the problem, the impact of X’s algorithms is now hidden from accountability.
And when one of the deplatformed super-spreaders ran for president, Musk used his media power to amplify partisan misinformation entrepreneurs and their content while throttling critics. It worked. They won.
Which brings us back to Mark Zuckerberg and Meta’s latest shift on “fact-checking.” Like Musk, Zuckerberg also made it harder to see what was going on across Facebook, Instagram and Threads by killing off CrowdTangle in August 2024. Like Musk, he defended the choice via a radical and self-serving libertarian framing of free speech, one that privileges the priorities of those who own the means of amplification and distribution.
We can speculate about whether Meta is tacitly admitting that its global scale makes disinformation mitigation on its platforms impossible, but the real issue is the bad faith framing of fact-checking as censorship.
Fact-checking is not censorship
Pointing out that something is factually incorrect or just plain wrong is not censorship. Fact-checking is not censoring speech; it’s more speech. It’s the other side of the dialogue that public communication is supposed to foster in a democracy. Someone says something; somebody responds to what is said. That’s how we deliberate.
Over the past decade, as the platforms dominating our media have normalized strategic deception, fact-checking has become politicized. Bad faith communicators who deploy disinformation for political or monetary gain are now pushing this frame as a feature of their messaging. And it just so happens that one of them, Brendan Carr, is now going to be heading the FCC.
Incoming FCC chair attempts to censor the fact-checkers
Take the example of NewsGuard, whose journalists track down “misinformation fingerprints” – where a claim started and who spreads it – and offer “reliability scores” for media outlets based on whether the things they publish are accurate, misleading or demonstrably bogus. Advertisers can use these reliability scores to make sure the digital ads they buy through programmatic ad auctions are not associated with hoaxes or Russian disinformation campaigns. NewsGuard also offers a browser extension so that when users come across something on Facebook or X and click on it, they can see whether the publisher is reliable. The company is completely transparent about its process, and a Washington Post story showed that its scores are trustworthy and apolitical. In our digital media ecosystem, if you want news over noise, this is a valuable information product.
Carr, who wrote the telecommunications policy section of Project 2025, accused NewsGuard of being “Orwellian” and, in an appearance on Newsmax – an infotainment outlet that has railed against fact-checking – threatened to take action against them. Incidentally, NewsGuard gave Newsmax a reliability score of 20/100, mostly because it knowingly pushed the Big Lie and had to settle civil libel suits brought by Smartmatic and Dominion Voting Systems to avoid company-killing penalties.
In short, Carr threatened to use his government power as FCC chair to censor the fact-checkers all while claiming to be an advocate for free speech. Big Brother would be proud.
Fact-checkers swiftly bring misinformation to light
For people trying to understand the reach and scope of misinformation in our media ecosystem, NewsGuard’s Reality Check newsletter is extremely helpful. And their reporters are very busy. NewsGuard’s Jack Brewster, whom we interviewed on News Over Noise last year, broke the story of the rumor on a partisan Facebook page that set off the “immigrants eating the dogs in Springfield” pollution campaign that, fueled by Elon Musk, Donald Trump and the neo-Nazi group Blood Tribe, swamped the media ecosystem.
This week NewsGuard has been working overtime to keep up with misinformation entrepreneurs and political operatives who have been shamelessly exploiting the wildfires ravaging Los Angeles to sow chaos and chase engagement. Meta’s soon-to-be-fired fact-checkers were busy at work as well, trying to contain the viral disinformation. Alan Duke, the founder of Lead Stories, one of the fact-checking organizations that Meta uses, described the stakes succinctly: “Cutting fact checkers from social platforms is like disbanding your fire department.”
Misinformation signals opportunity for tech oligarchs and the incoming administration
Given that one of 2025’s biggest information super-polluters is Elon Musk, who has been relentlessly pushing misinformation about the LA fires to millions on X and will now be part of the incoming administration, it’s easy to see why Zuckerberg would cave to pressure to create a more misinformation-friendly media ecosystem. Misinformation-at-scale is not a problem for the technology oligarchs or the incoming President they spent hundreds of millions to install; it’s an opportunity to be exploited for gain.
But while this regulatory environment may give Meta, Musk’s X, and whichever technology baron is eventually bestowed TikTok the impunity their content distribution and data mining businesses need to flourish, it doesn’t mean “we the people” have to adopt the framing that enables the misplaced power of the tech-industrial complex and normalizes the actions of those who deploy misinformation-at-scale for profit or power.
So once more for those in the back of the room: Fact-Checking is not censorship. It is what responsible journalists are supposed to do and what responsible communicators and citizens should welcome as part of our ongoing democratic deliberation. –MJ