The Great Mental Models

by Shane Parrish

Most people carry mental models only from their own field, creating blind spots everywhere else. Parrish collects the most reliable thinking frameworks from physics, biology, and mathematics into a single toolkit for clearer reasoning and better decisions.


The Great Mental Models — Shane Parrish

Impressions

The strongest chapters are grounded in concrete examples. The Map Is Not the Territory and First Principles Thinking deliver genuinely useful frameworks with clear applications. Probabilistic Thinking is the chapter most likely to change how readers actually make decisions, because it replaces the binary thinking ("this will work" / "this won't work") that most people default to. The weaker chapters (Thought Experiment, Hanlon's Razor) feel thinner, covering ground most thoughtful readers have already internalized. The book is strongest when it shows how models interact and compound. It is weakest when each model is treated as an isolated concept with no connection to the others.

Who Should Read It?

  • Anyone who makes decisions under uncertainty and wants a structured way to reduce blind spots.
  • People who rely on one dominant framework (engineering thinking, financial thinking, legal thinking) and need to cross-pollinate.
  • Leaders who want a shared vocabulary for better thinking across their team.

How the Book Changed Me

  • I started catching myself confusing maps for territory. When a plan or model stops matching reality, I now update the model instead of defending it. This was the single biggest shift.
  • First principles thinking changed how I evaluate costs and constraints. I now actively separate real constraints from conventions before accepting that something "can't be done."
  • I began assigning rough probabilities to outcomes before making decisions instead of defaulting to best-case or worst-case scenarios. Probabilistic thinking turned my decision process from binary to continuous.

My Top 3 Quotes

"The map of reality is not reality. Even the best maps are imperfect."

This is the foundational idea of the book. I come back to it every time I notice myself defending a model instead of questioning it.

"As to methods, there may be a million and then some, but principles are few. The man who grasps principles can successfully select his own methods. The man who tries methods, ignoring principles, is sure to have trouble."

This quote captures why first principles thinking matters. I use it as a filter when I catch myself copying someone else's approach without understanding why it works.

"Failing to consider second and third-order effects is the cause of a lot of painfully bad decisions."

A useful reminder that the obvious first move is rarely the whole picture. I now force myself to ask "and then what?" at least twice before committing to a decision.

Summary + Notes

The Map Is Not the Territory

Every model is a simplification of reality. The map is a useful tool, but it is not the territory itself.

When the map is mistaken for the territory, the instinct is to defend the model of reality rather than update it when new information arrives. Frameworks that once worked get preserved even when circumstances have shifted.

"The map of reality is not reality. Even the best maps are imperfect."

What this means in practice:

  • Mental models are approximations, not truth.
  • Update models when the evidence contradicts them.
  • Be especially skeptical of models held for a long time without being questioned.

A doctor who has only ever treated a particular demographic might develop a mental map of "normal" that does not apply to other patients. A software engineer might map every new problem onto the architecture they already know best. Both are letting the map override the territory.

The fix is not to abandon models. They are essential. The discipline is to hold them loosely and to actively seek out evidence that might require updating them.


Circle of Competence

"I'm no genius. I'm smart in spots, but I stay around those spots." (Thomas Watson)

The circle of competence concept is deceptively simple: there are areas where knowledge is deep and reliable, and areas where it is shallow. Knowing which is which is one of the most important things a person can know.

What is a circle of competence?

A circle of competence is built through experience, study, and feedback over time. It is not a fixed boundary. It can expand. But expansion takes real effort and honest self-assessment. The danger is assuming the circle is larger than it actually is.

Most people operate with a circle of competence they have never honestly examined. They assume expertise in domains they have only surface familiarity with. This is where most bad decisions come from.

How to know when you have a circle of competence

Three signals:

  1. You can explain it simply. If the fundamentals of a domain cannot be explained to a smart non-expert, the understanding is probably shallower than assumed.
  2. You can identify the limits of your knowledge. Real experts know what they do not know. They can name the questions their field has not yet answered.
  3. You have a track record. Performance over time, with feedback, in varied conditions. This is the evidence.

How to build and maintain a circle of competence

Building real competence requires:

  • Curiosity over a long time. Genuine depth has no shortcut.
  • Feedback loops. Honest, timely information on whether judgements are correct. Without feedback, years of practice only compound confidence without improving judgement.
  • Learning from others who have already made the mistakes that lie ahead. Mentors, case studies, and failure post-mortems compress the feedback loop.

About self-feedback

One of the most valuable practices is keeping a record of decisions and the reasoning behind them. Revisit them. Where was the reasoning right? Where was it wrong? What was missed?

Most people never do this. They remember correct predictions clearly and forget the wrong ones. The result is chronically overestimated competence.

How to operate outside a circle of competence

Sometimes decisions must be made in areas without expertise. The right posture is:

  • Acknowledge the gap explicitly. Name the fact that the decision is outside the circle.
  • Rely on first principles rather than domain-specific heuristics that are not fully understood.
  • Bring in people who have genuine expertise and listen to them carefully.
  • Move slower than inside the circle. The margin for error is smaller.

Falsifiability

A competence that cannot be tested is not competence. It is belief. Good thinking requires falsifiable claims: specifying what evidence would change the conclusion. If no evidence could change the position, the foundation is faith, not knowledge.


First Principles Thinking

"As to methods, there may be a million and then some, but principles are few. The man who grasps principles can successfully select his own methods. The man who tries methods, ignoring principles, is sure to have trouble."

First principles thinking means breaking a problem all the way down to its most basic, foundational truths and then building reasoning back up from there, rather than from analogy or convention.

Most people do not think in first principles. They think by analogy: "This is like that other thing, so the solution is probably similar." Analogy is useful for well-understood problems, but it is limiting when the goal is genuine innovation or when existing solutions are inadequate.

What is first principles thinking?

A first principle is a foundational proposition that cannot be deduced from anything more basic. In physics, the laws of thermodynamics are first principles. In geometry, Euclid's axioms are first principles.

In everyday thinking, a first principle is the bedrock assumption that can be relied on. The thing known to be true even after stripping away all conventions, precedents, and received wisdom.

Elon Musk's approach to rocket manufacturing is a well-known example. Rather than accepting the industry-standard cost of rockets as a given, he asked: what are rockets made of? What do those raw materials actually cost? The answer was that the materials cost far less than the finished product. The gap was filled by convention, inefficiency, and lack of competition. First principles thinking revealed an opportunity that analogy thinking would have missed entirely.

Techniques for establishing first principles

Socratic questioning is the most powerful tool. Ask:

  1. Why do I believe this?
  2. What is the evidence?
  3. What am I assuming that might not be true?
  4. What would have to change for this belief to be wrong?
  5. What are the consequences of accepting this?

The five whys. Keep asking "why?" until bedrock is reached. Most beliefs dissolve or transform significantly when subjected to this process.

Separate constraints from conventions. Some constraints are real: physical limits, mathematical truths. Others are merely conventional: "we have always done it this way," "that is not how this industry works." First principles thinking requires distinguishing between the two. Conventions can be changed. Physical constraints cannot.

Incremental innovation and paradigm shifts

Most innovation is incremental. It happens within an existing framework, improving on what is already there. First principles thinking is what enables paradigm shifts: new frameworks that replace the old ones rather than building on them.

Incremental innovation optimises the map. Paradigm shifts redraw the map entirely.

This is why incumbents in established industries often lose to newcomers. The newcomers are not constrained by commitment to the existing framework. They can ask first-principles questions that the incumbents cannot ask without undermining their own position.


Thought Experiment

"What if?"

A thought experiment is a way of exploring the consequences of an idea without running the actual experiment. A simplified scenario is constructed, one variable is changed, and the logic is followed wherever it leads. No cost, risk, or time required.

This is one of the oldest and most powerful tools in human reasoning. Einstein imagined riding alongside a beam of light. Galileo imagined dropping two balls of different weights from a tower. Neither needed to conduct the physical experiment first. The reasoning alone was enough to reveal something profound about the nature of reality.

Why thought experiments work

The real world is noisy. Confounding factors, incomplete data, and messy details obscure the core principles at play. A thought experiment strips all of that away. It isolates the variable in question and lets its implications be examined in a controlled mental environment.

This does not mean thought experiments always produce correct answers. They are constrained by the quality of the assumptions. But they are remarkably effective at:

  • Revealing hidden assumptions. Walking through a scenario step by step often uncovers beliefs that were not consciously held.
  • Testing the boundaries of a theory. Push an idea to its logical extreme and see if it still holds. If it breaks, something valuable about its limits has been learned.
  • Exploring unintended consequences. Before making a decision, mentally simulate what would happen if everyone acted the same way, or if the conditions changed.

How to use thought experiments

  1. Define the question clearly. What is being examined? What is the specific claim or theory under review?
  2. Construct a simplified scenario. Remove the noise. Create a mental model with only the essential variables.
  3. Change one variable. What happens if one condition is altered? Follow the implications rigorously.
  4. Examine the outcome. Does the result match what the theory predicts? Does it violate common sense? Does it reveal something unexpected?

Limitations

Thought experiments are only as good as the assumptions they rest on. If the mental model of the situation is flawed, the thought experiment will lead to the wrong conclusion with high confidence. This is why thought experiments should complement empirical observation, not replace it.

The other risk is that thought experiments can feel so satisfying that they discourage actual testing. A beautiful mental argument can be seductive even when it is wrong. The discipline is to treat the thought experiment as a hypothesis generator, not a conclusion generator.

Thought experiments in everyday decisions

These are not reserved for physicists. They apply to everyday reasoning:

  • Before accepting a new job: imagine six months in. What does a typical day look like? What has changed? What has been given up?
  • Before launching a product feature: imagine the user who encounters it for the first time. What do they expect? Where do they get confused?
  • Before making a commitment: imagine the version of the future where this decision is regretted. What went wrong? What was not anticipated?

The power of "what if?" lies in its ability to make the future less surprising. Perfect prediction is impossible, but the number of outcomes that arrive as complete surprises can be reduced.


Second-Order Thinking

Most people think about the immediate consequences of their actions. Second-order thinking asks: and then what?

Every decision sets off a chain of consequences. The first-order effect is the direct, intended outcome. The second-order effect is what happens as a result of the first-order effect. The third-order effect is what happens as a result of that. And so on.

Good decisions often require thinking several moves ahead. Bad decisions are often ones where the first-order effect was positive but the second and third-order effects were negative. The decision-maker never looked past the first.

"Failing to consider second and third-order effects is the cause of a lot of painfully bad decisions."

Examples of second-order thinking:

  • A government caps the price of a commodity to make it cheaper for consumers (first-order: prices fall). But producers respond to the lower price by producing less (second-order: supply shrinks). Over the long term, shortages can leave consumers worse off than before (third-order: the policy achieves the opposite of its goal).
  • A company lays off workers to cut costs (first-order: costs fall). But morale collapses among remaining employees, who work less productively (second-order: output falls). The best people leave for competitors (third-order: institutional knowledge is lost).

Second-order thinking is not about being pessimistic. It is about being thorough. Every intervention has unintended consequences. The goal is to anticipate as many of them as possible before committing.


Probabilistic Thinking

The world is not deterministic. Outcomes are probabilistic. Yet most people reason as though events are either certain or impossible.

Probabilistic thinking is the habit of assigning rough probabilities to outcomes and making decisions based on expected value rather than hoping for the best case or fearing the worst.

Why it matters

When thinking probabilistically, the goal shifts from predicting exactly what will happen to thinking about the distribution of possible outcomes. This changes how decisions are evaluated:

  • A decision with a 70% chance of a good outcome and a 30% chance of a bad one is not the same as a coin flip. But it is not a sure thing either.
  • A decision with a 1% chance of catastrophic irreversible harm should be weighted very differently from a decision with a 1% chance of a minor setback.
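
To make these comparisons concrete, here is a minimal sketch in Python of weighing outcomes by their probabilities instead of reasoning from the best or worst case. The payoffs are hypothetical numbers chosen purely for illustration.

```python
def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs whose probabilities sum to 1."""
    return sum(p * payoff for p, payoff in outcomes)

# 70% chance of a good outcome (+100), 30% chance of a bad one (-50):
# positive on average, but nowhere near a sure thing.
print(expected_value([(0.70, 100), (0.30, -50)]))      # 55.0

# 99% chance of a small gain, 1% chance of a catastrophic, irreversible loss:
# a single expected-value number (about -90) hides how differently this
# should be treated from a 1% chance of a minor setback.
print(expected_value([(0.99, 10), (0.01, -10_000)]))   # about -90.1
```

The point is not the precision of the numbers. Writing the probabilities down at all is what forces the full distribution of outcomes into view.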

Fat tails

Normal distributions are common in nature, but some domains are governed by power-law distributions, where extreme outcomes are far more common than a normal distribution would predict. Financial markets, social media virality, and earthquake magnitudes all follow power laws.

When operating in a fat-tailed domain, intuitions calibrated on bell curves break down. Rare, extreme outcomes need to be weighted much more heavily than their frequency alone would suggest.
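
As a rough illustration, the sketch below compares how often a normal distribution and a power-law distribution produce outcomes far above their typical value. The tail exponent is an arbitrary choice made only for demonstration, not a figure from the book.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Thin-tailed domain: outcomes drawn from a normal distribution.
normal = np.abs(rng.normal(loc=0.0, scale=1.0, size=n))

# Fat-tailed domain: outcomes drawn from a power law. numpy's pareto()
# samples the Lomax form, so add 1 to get the classical Pareto shape.
alpha = 1.5  # hypothetical tail exponent; smaller alpha means fatter tails
pareto = rng.pareto(alpha, size=n) + 1.0

def tail_share(samples, multiple):
    """Fraction of samples exceeding `multiple` times the sample median."""
    return np.mean(samples > multiple * np.median(samples))

for k in (5, 10, 50):
    print(f">{k}x the typical outcome: "
          f"normal {tail_share(normal, k):.5f}, power law {tail_share(pareto, k):.5f}")
```

Under these assumptions, outcomes ten times the typical value virtually never appear in the normal case, while in the power-law case they still occur on the order of 1% of the time. That gap is what fat tails mean in practice.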


Inversion

Inversion means thinking about a problem backwards. Instead of asking "how do I achieve X?", ask "what would guarantee that I fail at X, and how do I avoid those things?"

This is one of the most underrated thinking tools available. Most people approach problems forward: they list what needs to be done to succeed. Inversion asks for a list of what would guarantee failure. Then avoid those things.

Charlie Munger, echoing the mathematician Carl Jacobi, put it simply: "Invert, always invert."

Why inversion works

  • Forward thinking is optimistic and tends to overlook obstacles.
  • Inverse thinking forces explicit confrontation with the failure modes.
  • Avoiding catastrophic failure is often more important than optimising for success.

Example: The goal is to build a successful product. Forward thinking: identify customer needs, design solutions, ship features, grow users. Inversion: what would guarantee this product fails? Terrible onboarding, not solving a real problem, too expensive, nobody knows it exists. Fix those things first.

The two approaches are complementary, not competing. Together they give a more complete picture than either alone.


Occam's Razor

"Anybody can make the simple complicated. Creativity is making the complicated simple."

Occam's Razor is the principle that, when faced with competing explanations for the same evidence, the simpler one is more likely to be correct. Technically: entities should not be multiplied beyond necessity.

This is not a law of nature. It is a heuristic. A guide to plausibility. Simpler explanations require fewer assumptions, and each additional assumption is an additional place where the explanation can be wrong.

Essence of Occam's Razor

Given two explanations that account equally well for the observed facts, prefer the one with fewer moving parts.

This does not mean the simplest possible explanation is always right. Reality is sometimes complex. Occam's Razor is a starting point, not an endpoint. It tells where to begin the investigation. Start with the simplest adequate explanation and only add complexity when the evidence requires it.

Human tendency to make complicated narratives

Human brains are pattern-matching machines. Randomness and ambiguity are deeply uncomfortable. When something happens, the brain wants a story. A cause, an agent, a reason.

This drive produces narratives that are far more complex and intentional than reality often warrants. Conspiracies are seen where there is incompetence. Strategy is perceived where there is accident. Hidden motives are assumed where there is simple human error.

The simpler explanation ("they made a mistake," "it was coincidence," "nobody planned this") is usually closer to the truth and is usually less satisfying to the narrative-hungry mind.

Why more complicated explanations are less likely to be true

Each component of an explanation must be true for the whole explanation to be true. A three-part explanation requires all three parts to hold. A one-part explanation requires only one part to hold.

If each component has a 90% chance of being correct (already optimistic), a three-part explanation has a 0.9 × 0.9 × 0.9 = 72.9% chance of being correct overall. A ten-part explanation falls to 0.9^10 = 34.9%. Complexity compounds uncertainty.

This is why simpler is more likely. Not because reality is simple, but because explanations carry compounding error rates.
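
A minimal sketch of the same arithmetic, assuming (purely for illustration, as above) that each component is independent and holds with probability 0.9:

```python
# Probability that a multi-part explanation is entirely correct when each
# independent component holds with probability p (illustrative assumption).
def explanation_odds(p: float, parts: int) -> float:
    return p ** parts

for parts in (1, 3, 10):
    print(f"{parts}-part explanation: {explanation_odds(0.9, parts):.1%}")
# 1-part explanation: 90.0%
# 3-part explanation: 72.9%
# 10-part explanation: 34.9%
```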

Simplicity can increase efficiency

Beyond epistemics, Occam's Razor has practical value. Simpler systems are easier to build, easier to maintain, easier to debug, and easier to communicate. Each added component is a potential failure point, a maintenance burden, and an obstacle to understanding.

Good engineers, good writers, and good strategists all know this: the discipline of removing things that are not necessary is harder than adding them, and more valuable.

"A designer knows he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away." (Antoine de Saint-Exupéry)


Hanlon's Razor

A close relative of Occam's Razor: never attribute to malice what can be adequately explained by incompetence (or ignorance, or indifference).

Most bad outcomes in organisations, relationships, and society are not the result of people trying to cause harm. They are the result of people making mistakes, failing to communicate, operating with incomplete information, or simply not thinking about the impact at all.

Assuming malice is expensive. It generates conflict, erodes trust, and leads to responses that are disproportionate and often counterproductive. Starting from the more charitable explanation ("they probably just did not understand") preserves relationships and usually leads to better outcomes.

This is not naivety. Malice does exist. But it is far less common than incompetence, and incompetence is the more useful default assumption.


Putting It Together

Mental models do not operate in isolation. The real skill is in recognising which framework applies to which situation and in having enough of them to avoid being stuck with only one.

A few principles for building and using mental models well:

  • Have many models from many disciplines. A single model applied everywhere is a hammer that treats everything as a nail.
  • Hold them loosely. Every model is wrong in some conditions. Be willing to switch.
  • Look for convergence. When multiple models from different domains point to the same conclusion, confidence is more warranted.
  • Stay inside the circle of competence for high-stakes decisions. Venture outside it carefully, with explicit awareness.
  • Use inversion. Before committing to any decision, ask what would guarantee failure and whether those things are being avoided.
  • Prefer simple explanations until the evidence demands complexity.

The goal is not to become a walking encyclopedia of frameworks. The goal is to think more clearly, make fewer avoidable mistakes, and be genuinely useful in a broader range of situations.