The Math of Regime Collapse: Why Uncertainty Disfavors Incumbents

March 19, 2026
15 min read
By Avi Turetsky

Using a conversation with Claude about Iranian regime collapse probability as a teaching example for bounded distributions, second-order uncertainty, and why uncertainty mathematically disfavors incumbents. Note: I am not an Iran expert—the math is the point.


⚠️ Please Read First: A Note on My Background and Its Limits

I want to be honest about what I do and do not bring to this topic. I have spent considerable time in the Middle East, primarily in the Gulf and Israel. I speak Hebrew fluently and Arabic at a working level. I am Jewish and have studied Islam seriously. I also hold a PhD in strategy.

But I want to be equally honest about what that background does not give me. I have never been to Iran. I speak no Persian. And if I am being candid, I genuinely feel I could not give a coherent account of the Iranian regime's motivations, its relationship to its own population, or why it holds the positions it does toward the United States and Israel. My regional and strategic background gives me some context, but not the specific knowledge that would make me a credible analyst of Iranian internal politics. I have also never served in any military capacity.

The purpose of this post is not to offer expert analysis of Iran. It is to use a real-world conversation about Iran as a teaching example for Bayesian math—specifically, how bounded distributions, second-order uncertainty, and coordination problems interact. Readers with genuine expertise in Iranian politics should substitute their own priors; the math will still work.

This post is based on a conversation I had with Claude (Anthropic's AI assistant) on March 16–17, 2026, about the probability of the Iranian regime “substantively falling” by end of 2026. What started as a simple question—"what's the probability?"—turned into a rich discussion of mathematical ideas that apply far beyond Iran: to election forecasting, medical diagnosis, business risk, and anywhere you have a probability estimate near zero or one under high uncertainty.

I am sharing it here because the conversation naturally illustrated three distinct Bayesian insights that are worth understanding on their own terms.

The Setup: A Simple Probability Estimate

The conversation began with a straightforward question: given the ongoing US-Israel military campaign against Iran (which began in late February 2026 and included the reported killing of Supreme Leader Khamenei), what is the probability of the Iranian regime "substantively falling" by end of 2026?

Claude's answer, synthesizing publicly available intelligence reporting, was approximately 15%, with a range of roughly 8–30%. Here are the key inputs to that estimate, along with my honest assessment of how much confidence to place in each:

| Assumption | Source / Basis | My Confidence |
|---|---|---|
| US intelligence assesses regime will likely remain in place, weakened but intact | Washington Post, Reuters (citing 3 intelligence sources), March 2026 | Moderate — cannot verify independently |
| IRGC is consolidating power, not fracturing | Al Jazeera analysis, March 17, 2026; ACLED expert comment | Moderate — consistent across multiple sources |
| Regime is reframing war as defense of territorial integrity, not clerical rule | Al Jazeera, March 2026 | Moderate |
| Iran's inflation reached 68.1% point-to-point in February 2026 | IranWire, Statistical Center of Iran, February 2026 | High — official statistical data |
| Authoritarian regimes under external military pressure rarely fall during the assault | Historical base rate, political science literature | Moderate — base rates are real but imprecise |
| Five conditions for revolution were "nearly" met pre-war (fiscal crisis, divided elites, oppositional coalition, resistance narrative, favorable international environment) | The Atlantic, January 2026 | Low-to-moderate — one analyst's framework |

Important: I am accepting Claude's synthesis of public reporting as my starting point. I have no independent ability to evaluate these claims. A genuine Iran expert might weight these inputs very differently, and their posterior would be better calibrated than mine. I am using this as a teaching example, not as a forecast.

Bayesian Insight #1: Second-Order Uncertainty

When I told Claude I would "go a bit higher" than 15% because of the high variance (driven largely by how much I don’t know), Claude pushed back, usefully:

"High variance is a reason to be humble about your point estimate, not necessarily to move it. Unless you have a specific reason to think your priors are systematically biased downward."

This is an important distinction. There are two separate things:

First-Order Probability

The probability the regime falls — Claude's estimate of ~15%. This is the object of interest.

Second-Order Uncertainty

My uncertainty about that 15% estimate — the 8–30% range. This is uncertainty about the uncertainty.

High second-order uncertainty (a wide range of possible values) means I should be humble about the point estimate. It does not, by itself, shift the mean. The information asymmetry I was invoking — "there's so much I don't know that the players do" — cuts both ways: private information could favor collapse or resilience. Historically, authoritarian security states tend to have more internal coherence than outside observers assume, precisely because the ugly coordination happens off-camera.
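To make the distinction concrete, here is a toy calculation of my own (not from the conversation; the grids and weights are arbitrary illustrations). By the law of total expectation, your overall probability is just the weighted average of the values you consider plausible, so symmetric second-order uncertainty, however wide, leaves the expectation unchanged:

```python
# Second-order uncertainty modeled as a weighted grid of candidate values
# for the first-order probability p (expressed in percentage points so the
# arithmetic is exact).

def expected_p(grid, weights):
    """Expected first-order probability under a second-order distribution."""
    return sum(p * w for p, w in zip(grid, weights)) / sum(weights)

# Two analysts with the same central estimate (30%) but different spreads.
narrow = ([25, 30, 35], [1, 2, 1])
wide = ([10, 20, 30, 40, 50], [1, 2, 3, 2, 1])

print(expected_p(*narrow))  # 30.0
print(expected_p(*wide))    # 30.0 -- far wider uncertainty, same expectation
```

(I centered the toy grids at 30% so the wide one stays inside [0, 100]. Push the center toward the 0% floor and a symmetric grid becomes impossible to build, which is exactly the subject of the next insight.)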

Bayesian Insight #2: Bounded Distributions Pull the Mean Above the Mode

Here is where the math gets genuinely interesting. Collapse probability is bounded: it can't go below 0% or above 100%. Claude's estimate of ~15% sits close to the lower bound. This creates an unavoidable asymmetry:

Left Tail: Truncated

The probability cannot go below 0, so there are only 15 percentage points of room to the left of the estimate.

Right Tail: Unconstrained

There are 85 percentage points of room to the right before hitting the ceiling at 1.

If there is meaningful uncertainty in the estimate, the distribution cannot be symmetric: the left tail is cut off by the zero floor, while the right tail has plenty of room to extend. This forces the average (mean) above the most-likely value (mode). The more uncertain you are, the bigger the gap.

Claude's concession was direct: "I was implicitly treating my point estimate as a mean when it was probably closer to a mode or median. The Bayesian-correct move, given acknowledged high uncertainty, is to recognize the right-skew and shade the mean upward."

The Math Illustrated: Same Guess, Different Confidence

The Beta distribution is the standard mathematical tool for modeling a probability that lives between 0 and 1. It has two parameters, α and β. The simplest way to think about them: α is the weight of evidence pushing the probability up, and β is the weight pushing it down. A large α + β means you have a lot of evidence and your distribution is tight. A small α + β means you have little evidence and your distribution is wide and flat. In Scenario A (high confidence), α + β ≈ 50—as if you had synthesized roughly 50 data points. In Scenario B (low confidence), α + β ≈ 8—as if you had almost nothing to go on.

In both scenarios below, the best guess (mode) is the same: 15%. What changes is how confident we are in that guess:

| Scenario | Distribution | Mode (best guess) | Mean (expected value) | Gap |
|---|---|---|---|---|
| A: High Confidence | Beta(8.2, 41.8) | 15% | 16.4% | +1.4 pts |
| B: Low Confidence | Beta(1.9, 6.1) | 15% | 23.8% | +8.8 pts |

Scenario A is a tight distribution, nearly symmetric. The zero-bound barely matters because the spread is small. Mean ≈ mode.

Scenario B is a wide distribution, strongly right-skewed. The zero-bound truncates the left tail while the right tail stretches freely. Mean is pulled nearly 9 points above the mode.

The punchline: Both analysts agree the most likely outcome is 15% collapse probability. But the analyst who acknowledges more uncertainty has a higher expected value—23.8% vs 16.4%—purely because of the mathematics of bounded distributions. Claude's acknowledged uncertainty meant the true mean was probably in the 18–25% range, not 15%.
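These numbers are not hand-tuned; they fall out of the closed-form Beta moments. A quick sketch to reproduce them in plain Python, no libraries required:

```python
# Closed-form Beta moments: mean = a/(a+b); mode = (a-1)/(a+b-2),
# valid when a > 1 and b > 1 (both scenarios qualify).

def beta_mean(a, b):
    return a / (a + b)

def beta_mode(a, b):
    return (a - 1) / (a + b - 2)

scenarios = {"A (high confidence)": (8.2, 41.8),
             "B (low confidence)": (1.9, 6.1)}
for label, (a, b) in scenarios.items():
    gap = beta_mean(a, b) - beta_mode(a, b)
    print(f"Scenario {label}: mode {beta_mode(a, b):.3f}, "
          f"mean {beta_mean(a, b):.3f}, gap {gap:+.3f}")
# Same 15% mode in both; the mean-mode gap grows from about 1.4 points to
# about 8.8 points as the evidence weight a + b shrinks from 50 to 8.
```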

🎛️ Interactive Beta Distribution Explorer

Adjust the sliders to see how your point estimate and confidence level shape the distribution—and how the mean diverges from the mode.

[Interactive widget, not reproduced in text. One slider sets the point estimate (2% to 70%), which becomes the mode, the peak of the curve; the other sets confidence, from very uncertain (wide, skewed curve) to very confident (tight, symmetric curve). The confidence slider controls the total concentration α+β; higher concentration means a tighter distribution. Example setting: Beta(α=4.15, β=18.85), giving mode 15.0%, mean 18.0%, std dev ±7.8%, and a mean − mode gap of +3.0%, a mild skew that widens as confidence is lowered.]

🔑 Key Takeaway: When your probability estimate is near zero (or near one) and your uncertainty is high, the true expected value is higher (or lower) than your modal estimate. This is not intuition—it is a mathematical consequence of bounded support. It applies to any bounded probability: election forecasting, epidemiology, business risk, medical diagnosis.

Bayesian Insight #3: Uncertainty Mathematically Disfavors the Incumbent

This is perhaps the most elegant observation in the conversation. The mathematical and intuitive readings converge on the same conclusion:

The Mathematical Channel

The regime's default state is survival—the zero floor reflects institutional inertia, a coercive apparatus, and the weight of incumbency. When uncertainty increases (the distribution widens), the mean is mechanically pulled upward away from that floor. More uncertainty = higher expected collapse probability, mathematically.

The Intuitive Channel

Regime stability is a coordination problem. The IRGC stays loyal because each commander believes the others will. Elites don't defect because they're certain the regime will survive and punish defectors. When that certainty erodes, the calculus flips—elites no longer know whether the regime can protect or punish them, and defection starts to look rational. The incumbent needs certainty to survive; the challenger only needs uncertainty to have a chance.

Both channels say the same thing: uncertainty is what creates the possibility of departure from the status quo. The mathematics and the political intuition are pointing in the same direction—which is a good sign the model is capturing something real.
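The mathematical channel is easy to verify numerically. Hold the most likely outcome fixed at 15% and shrink the total evidence weight (the concentration α + β used in the scenarios above): the mode never moves, but the mean climbs. A minimal sketch; the specific concentration values are arbitrary illustrations:

```python
# Fix the mode m and vary the concentration c = alpha + beta.
# With the mode fixed, alpha = m*(c - 2) + 1 for c > 2 (the same
# parameterization behind Beta(8.2, 41.8) and Beta(1.9, 6.1) earlier).

MODE = 0.15

def mean_given_mode(mode, c):
    """Beta mean when the mode and the concentration c = alpha + beta are fixed."""
    alpha = mode * (c - 2) + 1
    return alpha / c

for c in (50, 23, 8, 4):
    print(f"concentration {c:2d} -> mean {mean_given_mode(MODE, c):.1%}")
# The mean rises from about 16% to about 32% as confidence (concentration)
# falls: more uncertainty, higher expected collapse probability.
```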

This framework also offers a way to interpret the US-Israel campaign itself. Assassinating senior leaders, striking IRGC outposts, and degrading command-and-control infrastructure are not just kinetic pressure—they are, in distributional terms, acts of deliberate uncertainty injection. Each strike removes a node of common knowledge: commanders no longer know who is in charge, elites no longer know whether the coercive apparatus will protect them, and the population no longer knows whether the regime can project force. The intent is not necessarily to shift the mode (the most likely outcome is still survival) but to widen the distribution—which, as we have seen, mechanically raises the mean.

The Iranian attacks on Gulf neighbors can be read through the same lens, but in reverse—and with a seemingly less successful outcome. The intent is presumably to shift the distribution leftward: to demonstrate coercive reach, restore deterrence credibility, and compress the right tail of collapse scenarios by signaling regime strength. But attacks on civilian infrastructure and shipping lanes likely have the opposite distributional effect. Rather than narrowing uncertainty in the regime’s favor, they widen it—raising the variance without shifting the mode left. A regime that must demonstrate its strength is already advertising its uncertainty.

The corollary, which Claude noted, is that the regime's most rational response is to reduce uncertainty before the window closes. This explains the "territorial integrity" nationalist reframe and the rapid succession process: both are attempts to restore the coordination equilibrium by re-establishing common knowledge of regime coherence.

The Strategic Implications (Where I Have No Expertise)

⚠️ Disclaimer: The following section discusses strategic implications that emerged from the conversation. I want to be very clear: I am not qualified to evaluate whether any of this strategic reasoning is correct as applied to Iran. I am presenting it only because it illustrates how distributional thinking maps onto strategic objectives—a pedagogical point that applies to many domains beyond geopolitics.

The distributional framing suggests that the US-Israel campaign's strategic objectives can be mapped onto three distinct operational tracks:

| Track | Distributional Goal | Example Actions |
|---|---|---|
| Mean-shifting | Sustained pressure degrading coercive capacity | Strikes, sanctions, information warfare |
| Right-tail extension | Creating viable pathways to collapse with near-zero current probability | Credible opposition, elite defection incentives, succession alternatives |
| Left-tail compression | Reducing probability of catastrophic outcomes | Post-collapse planning, regional coordination, back-channels |

Claude noted that most military campaigns are competent at mean-shifting, mediocre at right-tail extension, and almost entirely neglect left-tail compression—which is why the historical base rate of "external pressure produces better successor regime" is poor. The conversation also noted the risk of maximizing uncertainty indiscriminately: a uniform distribution on [0,1] has a mean of 50%, but it also dramatically fattens the catastrophic left tail (nuclear breakout, ethnic fragmentation, prolonged civil war).

Again: I have no expertise to evaluate whether this applies correctly to Iran. What I find pedagogically valuable is the framework itself—the idea that strategic objectives can be stated as distributional goals rather than just "win" or "lose."

What This Teaches Us About Bayesian Reasoning

Setting aside the Iran specifics entirely, this conversation illustrates several transferable lessons:

1. Distinguish your best guess from your uncertainty about that guess

A wide range of possible outcomes is not by itself a reason to revise your best guess upward. It is a reason to be humble about that guess. "I'm very uncertain" does not automatically mean "I should go higher."

2. Bounded distributions are asymmetric

When your probability estimate is near zero (or near one), high uncertainty mechanically pulls the mean away from the boundary. This applies to any bounded probability estimate.

3. Updating in real time is a skill

Claude's willingness to concede "I was underpricing it" when presented with the distributional argument is an example of good Bayesian practice—updating on a valid argument rather than defending the original estimate.

4. Transparency about assumptions is essential

The 15% estimate rests on a chain of assumptions, each with its own uncertainty. Making those assumptions explicit allows readers to substitute their own priors and reach better-calibrated conclusions.

5. Information asymmetry cuts both ways

"There's so much I don't know" is not a reason to shade higher or lower—it is a reason to widen your range symmetrically, unless you have a specific reason to think the unknown information is directionally skewed.

Conclusion

The Iranian situation is genuinely uncertain, and I am genuinely not qualified to resolve that uncertainty. What I hope this post demonstrates is that Bayesian reasoning provides a rigorous framework for thinking about that uncertainty—distinguishing modal estimates from means, recognizing how bounded distributions behave, and understanding why uncertainty itself has directional implications for coordination problems.

If you are an Iran expert, I would genuinely welcome your assessment of the input assumptions in the table above. Better inputs produce better posteriors, and the math is only as good as the premises it rests on.

The conversation with Claude took place on March 16–17, 2026. All source links refer to articles available as of March 18, 2026. The author has no expertise in Iranian politics, military strategy, or geopolitics beyond that of an ordinarily informed reader.

