“Past performance does not guarantee future results.”
While chance plays a role in many, if not all, of our decisions and their consequences, its role is both partial and variable. As a result, our understanding of “cause” is ambiguous, which in turn distorts our judgments and predictions. This ambiguity helps explain why all my achievements come from hard work, while yours were due to luck. To generalize, we all underestimate the role of chance in the outcomes of our actions, viewing our “task performance over time as diagnostic of ability.”
The research, reported in PNAS Nexus, investigates situations determined entirely by chance, such as coin flips, where past performance should have no bearing on future expectations.
Using both real and virtual coins, participants were asked to predict the outcomes of a sequence of five coin tosses. The researchers then observed how the experience of varying degrees of “lucky successes” and “unlucky failures” shaped subsequent expectations and behaviors, anticipating three possible responses:
- The actions of economics’ mythical “rational actor.” An individual who knows the “base rate” of a coin toss, 50% heads and 50% tails, would consistently forecast future performance in line with that rate, demonstrating “base rate adherence.” Past outcomes would have no bearing on future, independent results.
- Individuals who subscribe to the “gambler’s fallacy,” believing the past influences future performance, a belief that comes in two forms. The first, termed “base rate compensation” by the researchers, describes individuals who behave as if they hold a “stock of luck” to draw upon: the especially lucky anticipate a “run of bad luck,” while the especially unlucky know that a win is “just around the corner.”
- The second form, “base rate updating,” takes the opposite tack: individuals treat their recent experience as persistent, continuing a “hot hand” or remaining plagued by “a cloud hanging over them.”
The experiments aimed to determine which of these patterns best described how people calibrated their future expectations based on transparently random events.
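To make the three patterns concrete, here is a minimal Python sketch of the contrast. The response rules, the weight parameter, and the function names are illustrative assumptions, not taken from the paper:

```python
import random

def base_rate_adherence(hits):
    """Rational actor: each toss is independent, so the expected
    hit rate stays at the 50% base rate regardless of history."""
    return 0.5

def base_rate_compensation(hits, weight=0.3):
    """Gambler's fallacy: luck is a finite stock, so expectations
    move against the recent hit rate (the lucky expect a run of
    bad luck; the unlucky expect a win 'just around the corner')."""
    if not hits:
        return 0.5
    recent = sum(hits) / len(hits)
    return 0.5 - weight * (recent - 0.5)

def base_rate_updating(hits, weight=0.3):
    """Hot hand / cloud overhead: expectations move toward the
    recent hit rate, treating streaks as if they will persist."""
    if not hits:
        return 0.5
    recent = sum(hits) / len(hits)
    return 0.5 + weight * (recent - 0.5)

# Five guesses against a fair coin, as in the study's design:
# 1 marks a lucky success, 0 an unlucky failure.
random.seed(1)
hits = [int(random.random() < 0.5) for _ in range(5)]
print("hits:", hits)
for rule in (base_rate_adherence, base_rate_compensation, base_rate_updating):
    print(f"{rule.__name__}: expects {rule(hits):.2f}")
```

After a lucky run, the compensator lowers its forecast, the updater raises it, and only the adherent stays at the base rate.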
Why We Misread Randomness
The study revealed that people’s reactions to coin toss outcomes were anything but rational. Participants who guessed correctly grew more optimistic about their future performance and were more willing to take risks. By contrast, those who guessed wrong became pessimistic and risk-averse. This happened even though everyone knew the game was random and each toss remained a 50:50 chance—demonstrating just how powerfully our minds invent patterns where none exist.
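The 50:50 claim is easy to verify. In this quick simulation (a sketch; the streak length of three and the sample size are arbitrary choices), the conditional frequency of heads after a run of heads stays pinned at the base rate:

```python
import random

random.seed(0)
flips = [random.random() < 0.5 for _ in range(1_000_000)]  # True = heads

# If streaks carried information, the frequency of heads after three
# consecutive heads would drift away from 50%. It does not.
after_streak = [flips[i] for i in range(3, len(flips))
                if flips[i - 3] and flips[i - 2] and flips[i - 1]]
print(f"P(heads | three heads in a row) = {sum(after_streak) / len(after_streak):.3f}")
```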
Importantly, the results could not be explained away by a lack of probability knowledge, distrust in the coins, personal risk attitudes, or the presence of rewards. Instead, the bias seemed to come from within: a tendency to detect order in noise.
Some key findings stood out:
- Prediction switching amplified the effect. Participants who frequently changed their guesses were even more likely to see illusory links between their choices and outcomes.
- Losses loomed larger than gains. Unlucky failures had a stronger psychological impact than lucky successes, leading to greater pessimism—echoing well-established research that humans are more sensitive to losses than wins.
- Education offers little protection. Even when people understood odds and randomness, their belief in predictive ability remained largely unchanged. As the researchers noted, teaching probability may sharpen technical skills but does little to blunt gambling behavior or the pull of fallacies.
In essence, our inherent drive to find structure and meaning often leads us to misinterpret random events; we are base-rate updaters, behaving as if good or bad fortune is a reliable predictor of what’s to come. [1] Moreover, this blind spot appears not only in the extremes of a “hot hand,” but also in more intermediate illusions of control. These experiments reveal a broader truth about how we misread randomness: we impose patterns where none exist. Psychologists call this the illusion of control.
The Illusion of Control: Finding Patterns in Noise
“…past experience, even when events are randomly determined, exerts an irresistible influence on people’s future expectations.”
The research demonstrates how this cognitive bias leads us to believe that past performance does predict future results. While coins are trivial, the same flawed reasoning underpins how we make choices in health, finance, and policy: what begins as a harmless superstition about streaks becomes dangerous when anecdotes drive decisions that should rest on evidence. The blind spot is exacerbated because we tend to attribute causality to events separated by short temporal intervals and have a harder time connecting events more than a few days apart. Another mistaken belief that furthers this illusion of control is the law of small numbers: the conviction that a handful of outcomes reliably reflects the long-run average.
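A short simulation makes the point concrete (a sketch; the sample sizes and repetition count are arbitrary choices, not from the study): tiny samples of a fair coin swing wildly around the 50% long-run average, while large samples hug it.

```python
import random

random.seed(0)

def heads_rate(n):
    """Fraction of heads observed in n flips of a fair coin."""
    return sum(random.random() < 0.5 for _ in range(n)) / n

# Small samples are noisy and streaky; large samples converge on 50%.
for n in (5, 50, 5000):
    rates = [heads_rate(n) for _ in range(1_000)]
    print(f"n={n:>4}: observed heads rates span "
          f"{min(rates):.2f} to {max(rates):.2f}")
```

With five flips, runs of all heads or all tails are routine; with five thousand, observed rates rarely stray more than a couple of percentage points from 50%.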
From Coin Tosses to Vaccines: When Anecdotes Override Data
Small samples are noisy, streaky, and unrepresentative — but our minds mistake them for patterns. This psychological bias directly parallels how many people approach health decisions, especially those related to vaccination. The anti-vaccine movement thrives on narratives built from small samples:
- “My child was fine before being vaccinated; now he has autism,” echoes the coin tossers who read a short temporal interval as evidence of direct causation, despite overwhelming evidence that it is not.
- “I never got the flu shot, and I never got sick,” echoes the coin tosser who believes their success reflects skill rather than luck, confusing short-term personal outcomes with robust evidence.
- Secretary Kennedy stated during his Senate confirmation that the COVID vaccine “was the deadliest vaccine ever made,” citing an increased number of unvetted reports to the Vaccine Adverse Event Reporting System. He, like all the coin tossers, is convinced that a rare outcome reflects a larger, inevitable truth.
These errors permeate other health judgments. People smoke for years without consequence and infer immunity to lung cancer, while others see breakthrough infections and declare vaccines useless. Policymakers, too, can swing from optimism to alarm based on a handful of outcomes, producing public health “whiplash.” Each case reflects the same bias: humans find order within noise and build sweeping beliefs from small samples.
We crave pattern, and data alone cannot counteract the pull of personal anecdotes. Just as gamblers misread runs of heads or tails, people mistake anecdotes and local clusters for evidence, undermining trust in preventive measures. The law of small numbers ensures that small, unrepresentative stories can carry more weight in human psychology than large, carefully gathered datasets—a challenge that public health communication must confront directly.
Human judgment is wired to see order where none exists. Whether at a casino table or in a public health debate, we give disproportionate weight to vivid anecdotes and recent streaks while discounting hard evidence and statistical truth. This tendency to impose patterns on randomness may once have served an evolutionary purpose—helping us spot danger or opportunity quickly—but today it often undermines rational decision-making. If we want to build better policies, healthier lives, and a stronger trust in science, we must learn to check the power of stories with the discipline of data. Anecdotes will always be compelling, but without the anchor of evidence, they can lead us badly astray.
[1] Interestingly, while this “base rate updating” is a bias in everyday reasoning, it mirrors the Bayesian “updating of priors,” a rational method in formal probability. Base rate updating is also a metaphorical explanation of theories of conscious thought as “prediction machines.”
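The contrast fits in a few lines (a sketch; the Beta prior and the toss counts are illustrative assumptions, not from the paper): a Bayesian who is unsure of the coin’s bias rationally shifts expectations with the data, but one who knows the coin is fair should not move at all.

```python
def beta_posterior_mean(heads, tails, a=1.0, b=1.0):
    """Posterior mean of a coin's heads-probability under a Beta(a, b)
    prior, after observing the given counts of heads and tails."""
    return (a + heads) / (a + b + heads + tails)

# Unsure whether the coin is fair: a uniform Beta(1, 1) prior lets four
# heads and one tail pull the forecast up -- formal 'base rate updating'.
print(f"{beta_posterior_mean(4, 1):.2f}")                 # 0.71

# Certain the coin is fair: a near-point-mass prior at 0.5 barely moves,
# which is why updating on a known-fair coin is a bias, not inference.
print(f"{beta_posterior_mean(4, 1, a=1e9, b=1e9):.2f}")   # 0.50
```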
Source: “People Calibrate Future Expectations to Past Performance When Predicting Transparently Random Events,” PNAS Nexus, DOI: 10.1093/pnasnexus/pgaf237
