The Fact-Checking Paradox: When Truth Wins But Society Loses

New research reveals a troubling paradox: fact-checking demonstrably improves belief accuracy, yet can simultaneously reduce overall social welfare through controversy costs. This analysis explores the implications for information policy and democratic discourse.

In an era where misinformation spreads faster than truth and fact-checking has become a cornerstone of information integrity, a new research paper by Markus Leippold from the University of Zurich presents a deeply unsettling finding: fact-checking works, yet truth can still lose.

The paper, “The Fact-Checking Paradox: Strategic Misinformation, Controversy Externalities, and Optimal Verification,” published in October 2025, reveals that while fact-checking consistently improves individual belief accuracy, it can simultaneously reduce overall social welfare through what the author terms “controversy externalities.”

The Core Paradox

In simple terms: Fact-checking successfully corrects individual false beliefs, but when people feel their identity is threatened by corrections, they fight back hard. This defensive reaction wastes everyone’s time and energy, making society worse off even though more people now believe the truth.

The research builds on a well-established empirical foundation: meta-analyses confirm that corrections consistently reduce false beliefs, with no systematic backfire effects. During COVID-19, English-language fact-checks surged 900% in three months. Yet this unprecedented verification effort coincided with significant erosion of shared epistemic foundations, as partisan identity increasingly trumped factual agreement.

How can interventions that demonstrably succeed at correcting individual beliefs fail systematically at preserving social epistemology?

Leippold’s answer lies in what he calls “controversy costs”—deadweight losses from identity-defensive cognitive effort. When fact-checks challenge entrenched beliefs in polarized settings, they trigger socially sterile rent-seeking behavior. Individuals spend hours searching for contrarian evidence, engage in heated debates, and double down on prior beliefs to protect their worldview. This expenditure of time and cognitive effort is a pure deadweight loss to society.

The Mechanism: Cognitive Rent-Seeking

In simple terms: When fact-checkers challenge false beliefs that people have held for a long time, those people invest lots of mental energy defending their position. The more misinformation there is and the more fact-checking happens, the more energy gets wasted on this defensive behavior.

The paper micro-founds this through a model of “cognitive rent-seeking.” When verification intensity (v) challenges beliefs entrenched by misinformation (m), individuals exert costly cognitive effort to resist belief updating. The marginal return to this resistance scales with the interaction between challenge salience and belief entrenchment: R = k · m · v.

This creates a quadratic controversy cost: Γ(m, v) = (γ₀/2) m²v². The key insight is that controversy requires both misinformation and verification—it’s the interaction that generates the social friction.
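
To see how the quadratic form can fall out of the rent-seeking setup, here is a minimal back-of-the-envelope sketch. The quadratic private effort cost e²/(2a), and the identification of the deadweight loss with the effort cost actually incurred, are assumptions made for illustration; the paper's own microfoundation may differ in detail.

```latex
% Illustrative derivation of the quadratic controversy cost from cognitive rent-seeking.
% Assumption (not from the summary above): resistance effort e has a convex private cost e^2/(2a).
\[
  \max_{e \ge 0} \; \underbrace{k\, m\, v}_{\text{marginal return } R}\cdot e \;-\; \frac{e^{2}}{2a}
  \qquad\Longrightarrow\qquad
  e^{*} = a\, k\, m\, v .
\]
% Because the effort is socially sterile, the resources it consumes are a pure deadweight loss:
\[
  \Gamma(m, v) \;=\; \frac{(e^{*})^{2}}{2a}
  \;=\; \frac{a k^{2}}{2}\, m^{2} v^{2}
  \;=\; \frac{\gamma_{0}}{2}\, m^{2} v^{2},
  \qquad \gamma_{0} \equiv a k^{2}.
\]
```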

Three Critical Findings

1. The Fact-Checking Paradox

In simple terms: More fact-checking always makes people more accurate, but there’s a sweet spot for overall social benefit. Too little fact-checking misses opportunities, but too much creates so much controversy that society loses more than it gains.

The central result is an inverted-U welfare curve. As verification intensity increases:

  • Belief accuracy improves monotonically (B* increases)
  • Social welfare follows an inverted-U pattern (peaks then declines)

The welfare derivative reveals the trade-off. At the social optimum, the first-order condition balances:

MAB(v) + IEB(v) = C′ᵥ(v) + DCC(v)

Where:

  • MAB = Marginal Accuracy Benefit: ψ(1 - B*) dB*/dv (positive)
  • IEB = Indirect Environmental Benefit: γ₀ m* v² (-dm*/dv) (positive, as verification reduces toxicity)
  • C′ᵥ(v) = Marginal operational cost of verification
  • DCC = Direct Controversy Cost: γ₀ m*² v (grows with verification intensity and toxicity)

At low verification levels, accuracy benefits dominate. But as verification intensifies in toxic environments, the marginal controversy cost keeps rising (it scales with m*²v, so total controversy costs grow quadratically in v) while the marginal accuracy benefit diminishes (MAB(v) = O(1/v³) as v → ∞), and controversy eventually overwhelms the gains. This produces the single-peaked welfare pattern even as accuracy improves monotonically.
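
To make the wedge between monotone accuracy and single-peaked welfare concrete, the short script below reproduces the qualitative pattern. The functional forms B*(v) and m*(v), the parameters ψ, γ₀, and the linear operational cost are all illustrative assumptions, not the paper's calibration.

```python
# Toy illustration of the fact-checking paradox: accuracy rises monotonically in
# verification intensity v, while welfare is single-peaked. All functional forms
# and parameter values below are illustrative assumptions, not the paper's model.
import numpy as np

psi, gamma0, c = 1.0, 0.8, 0.05   # accuracy weight, controversy scale, operational cost rate

def B_star(v):
    """Equilibrium belief accuracy: increasing and concave in v (assumed form)."""
    return v / (1.0 + v)

def m_star(v):
    """Equilibrium misinformation/toxicity: declining in v (assumed form)."""
    return 1.0 / (1.0 + 0.5 * v)

def welfare(v):
    """Accuracy benefit minus operational cost minus controversy cost Γ = (γ₀/2) m*² v²."""
    return psi * B_star(v) - c * v - 0.5 * gamma0 * m_star(v) ** 2 * v ** 2

v_grid = np.linspace(0.0, 10.0, 2001)
v_opt = v_grid[np.argmax(welfare(v_grid))]

print(f"accuracy keeps rising: B*(1)={B_star(1):.2f}, B*(5)={B_star(5):.2f}, B*(10)={B_star(10):.2f}")
print(f"welfare peaks at v ≈ {v_opt:.2f}: W({v_opt:.2f})={welfare(v_opt):.3f} vs W(10)={welfare(10):.3f}")
```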

2. Verification-Induced Polarization

In simple terms: When fact-checking works better for people who already trust institutions, it widens the gap between believers and skeptics. Eventually, enough fact-checking helps everyone, but in the middle range, it can make polarization worse before it gets better.

The paper demonstrates that uniform fact-checking can systematically widen belief gaps when audiences exhibit asymmetric trust or verification efficacy. This provides an economic mechanism for the empirical finding (Lord et al., 1979) that identical evidence polarizes opposing groups.

Verification acts as a trust-complementary public good, differentially benefiting high-trust audiences while leaving skeptical populations behind. The belief gap follows an inverted-U pattern: initially widening as verification benefits high-trust groups disproportionately (dP*/dv > 0 at v = 0), then eventually narrowing as intensive verification promotes convergence (dP*/dv < 0 for sufficiently large v), with lim(v→∞) P*(v) = 0.
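
A two-audience toy example illustrates the inverted-U belief gap. The trust parameters and the response function used here are assumptions chosen only to reproduce the qualitative pattern; they are not taken from the paper.

```python
# Toy illustration of verification-induced polarization: uniform fact-checking reaching
# two audiences with asymmetric trust. Functional forms and parameters are assumed.
import numpy as np

t_high, t_low = 1.0, 0.1   # verification efficacy: high-trust vs. skeptical audience

def accuracy(v, t):
    """Group belief accuracy under verification intensity v and trust t (assumed form)."""
    return t * v / (1.0 + t * v)

def gap(v):
    """Belief gap P*(v) between the high-trust and skeptical groups."""
    return accuracy(v, t_high) - accuracy(v, t_low)

v_grid = np.linspace(0.0, 200.0, 20001)
v_peak = v_grid[np.argmax(gap(v_grid))]

print(f"P*(1) = {gap(1.0):.2f}   (gap widening at low v)")
print(f"P*({v_peak:.1f}) = {gap(v_peak):.2f}   (peak polarization)")
print(f"P*(200) = {gap(200.0):.2f}   (convergence: P* → 0 as v → ∞)")
```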

3. The Strategic Flooding Equilibrium

In simple terms: When the information environment gets toxic enough, misinformation creators stop trying to convince people and just flood the zone with noise. This overwhelms people’s ability to process information, causing them to tune out entirely—which is exactly what the bad actors want.

Perhaps most concerning, the research identifies conditions under which rational misinformation producers abandon persuasion entirely for systematic noise generation—“flooding the zone.” When cognitive bandwidth becomes scarce, producers exploit this constraint through attention-dilution, flooding the information space to induce rational disengagement.

This mirrors Akerlof’s market for lemons: strategic noise creation severely impairs the market for credible information. In the flooding equilibrium, fact-checkers withdraw because the cost of signaling quality exceeds the benefit in a market paralyzed by noise.
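
The attention-dilution logic can be sketched with a toy threshold model. This is our own simplification, not the paper's equilibrium: a reader engages only if the chance of sampling a credible item justifies the attention cost, so a producer who floods the feed with noise can push rational readers to disengage.

```python
# Toy "flooding the zone" sketch: a strategic producer raises the noise volume until
# readers' expected value of engaging drops below their attention cost, at which point
# they rationally tune out. All quantities and thresholds are illustrative assumptions.

V = 1.0        # value to a reader of encountering a credible item
kappa = 0.25   # attention cost of engaging with the feed at all
credible = 10  # volume of credible, fact-checked content in the feed

def engages(noise):
    """Reader engages only if the expected value of sampling the feed covers the attention cost."""
    p_credible = credible / (credible + noise)   # chance a sampled item is credible
    return p_credible * V >= kappa

for noise in (5, 20, 50):
    status = "engages" if engages(noise) else "tunes out (flooding succeeds)"
    print(f"noise volume {noise:>3}: reader {status}")
```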

Policy Implications: State-Contingent Instruments

In simple terms: There’s no one-size-fits-all solution for fact-checking policy. Sometimes we should support it with subsidies, sometimes we should limit it with taxes, depending on how toxic the information environment has become. Always tax misinformation producers, but adapt fact-checking support to the situation.

The paper’s policy prescriptions challenge prevailing views of fact-checking. The optimal corrective instruments are necessarily state-contingent:

  1. Misinformation producers should always be taxed (τₕ > 0) to internalize controversy externalities, attention-dilution harms, and accuracy degradation.

  2. Fact-checking support must adapt to environmental conditions:

    • Subsidize (τᵥ < 0) when |εₘ,ᵥ| > 1 (verification meaningfully reduces toxicity)
    • Tax (τᵥ > 0) when |εₘ,ᵥ| < 1 (controversy costs dominate at the margin)

    Where εₘ,ᵥ = (v/m*) · (dm*/dv) is the elasticity of toxicity with respect to verification; a small numerical sketch of this threshold rule follows the list below.

  3. Under measurement error, quantity instruments (liability rules) can dominate Pigouvian taxes precisely when intervention is most needed—in high-toxicity environments where controversy costs exhibit high curvature (Γₕₕ is large).

  4. Prevention dominates correction: Once producers pivot to systematic noise generation, fact-checking becomes ineffective, implying prevention through platform liability dominates reactive correction.

  5. Decentralized verification systems (like community-based fact-checking) can produce either too much or too little verification compared to the social optimum, depending on whether accuracy externalities or controversy externalities dominate. In low-toxicity environments, free-riding leads to under-provision; as polarization increases, users may over-provide verification, ignoring cognitive friction costs.
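
As flagged in item 2, here is a minimal numerical sketch of the elasticity threshold. The toxicity response m*(v) and the finite-difference computation are illustrative assumptions; the paper derives the rule analytically.

```python
# Toy check of the state-contingent rule from item 2: subsidize verification when the
# toxicity elasticity |ε_mv| exceeds 1, tax it when it falls below 1. The response
# function m*(v) below is an assumed form, used only to illustrate the computation.

def m_star(v, responsiveness):
    """Equilibrium toxicity as a function of verification intensity (assumed form)."""
    return 1.0 / (1.0 + responsiveness * v) ** 2

def elasticity(v, responsiveness, h=1e-5):
    """ε_mv = (v / m*) · dm*/dv, computed by central finite differences."""
    dm_dv = (m_star(v + h, responsiveness) - m_star(v - h, responsiveness)) / (2 * h)
    return v / m_star(v, responsiveness) * dm_dv

for label, resp in [("responsive environment", 2.0), ("entrenched environment", 0.1)]:
    eps = elasticity(v=1.0, responsiveness=resp)
    action = "subsidize fact-checking (τᵥ < 0)" if abs(eps) > 1 else "tax fact-checking (τᵥ > 0)"
    print(f"{label}: ε_mv = {eps:.2f} -> {action}")
```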

Why This Matters for Climate Communication

In simple terms: Climate misinformation is often tied to people’s identity, so aggressive fact-checking can backfire by triggering defensive reactions. The solution isn’t to stop fact-checking, but to intervene early and adapt strategies based on how polarized the environment has become.

For those working in climate communication and fact-checking, this research raises fundamental questions about strategy. The paper suggests that in highly polarized environments—exactly where climate misinformation is most prevalent—aggressive fact-checking may be counterproductive.

The controversy externality is particularly relevant for climate discourse, where identity-protective cognition is well-documented. When fact-checks challenge climate denial, they may trigger defensive reactions that consume cognitive resources without advancing understanding.

This doesn’t mean abandoning fact-checking. Rather, it suggests:

  • Early intervention to prevent degradation of the information environment is substantially more efficient than attempting to correct already-toxic environments later
  • State-contingent approaches that adapt verification intensity to environmental conditions
  • Upstream interventions (taxing misinformation producers) that restore conditions where verification delivers social value

Critical Questions for Practitioners

In simple terms: This research raises practical questions we don’t yet have answers for. How do we measure how toxic an information environment is? Can AI help manage fact-checking at scale? How do we balance correcting falsehoods with maintaining social trust?

The research leaves several open questions that deserve attention:

  1. How do we measure environmental toxicity in real-time? The model requires accurate data on belief states and misinformation ratios, which platforms may misreport.

  2. What role does network structure play? The paper notes that controversy costs may scale differently in polarized networks, suggesting community structure matters as much as content.

  3. Can AI-augmented systems help? The conclusion mentions exploring optimal designs for human-AI collaboration in decentralized verification—a promising direction given the scale of the problem.

  4. How do we balance accuracy and social cohesion? The paradox suggests we may face trade-offs between correcting false beliefs and maintaining social trust.

A Nuanced View of Information Ecosystems

In simple terms: This research doesn’t give us easy answers, which is actually its strength. It shows that fact-checking can work for individuals while creating problems for society as a whole, requiring careful, context-specific policy responses rather than blanket solutions.

What makes this research particularly valuable is its refusal to offer simple solutions. It acknowledges that fact-checking works at the micro level while potentially failing at the macro level—a tension that requires sophisticated policy responses.

The paper’s framework integrates insights from Bayesian persuasion, motivated reasoning, and political economy while incorporating empirical regularities about resistance to belief-threatening information. This theoretical rigor, combined with numerical illustrations anchored in literature, provides a foundation for evidence-based information policy.

The Challenge for Democratic Institutions

In simple terms: Democratic societies face a difficult balancing act: correcting falsehoods is essential, but doing so can create social friction that undermines trust and cohesion. We need to manage both the misinformation and the controversy that correcting it creates.

The research clarifies a central trade-off for democratic institutions: in a world made toxic by misinformation producers, the very act of successfully correcting falsehoods can create a new kind of social cost—the controversy externality—that we must understand and manage.

Preserving truth, therefore, requires managing not only the falsehoods themselves but also the social friction generated by their correction. This is a more complex challenge than simply “fact-checking more,” but it’s one we must address if we’re to maintain both epistemic accuracy and social cohesion.

Access the Research

The full paper “The Fact-Checking Paradox: Strategic Misinformation, Controversy Externalities, and Optimal Verification” by Markus Leippold is available on SSRN. The research was conducted at the University of Zurich, Department of Finance, and Swiss Finance Institute, with support from the Swiss National Science Foundation (Grant Agreement No. 207800).

This research connects to several ongoing projects in automated fact-checking and information economics. For those interested in the technical implementation of fact-checking systems, see our work on Climinator and the Climate+Tech FactChecker.

The paper’s insights are particularly relevant for understanding the challenges facing AI tools for democratic discourse and the design of information ecosystems that balance accuracy with social cohesion.


Note: This analysis reflects our interpretation of the research findings. The paper presents a formal economic model with rigorous proofs; this summary emphasizes the practical implications for information policy and climate communication. For technical details, mathematical derivations, and complete policy prescriptions, readers should consult the original paper.