The Simulation Mistake: On What Algorithms to Live By Gets Right and Wrong
Algorithms to Live By, Brian Christian and Tom Griffiths
There is a philosophical error at the heart of Algorithms to Live By, and it is the more insidious for being so beautifully concealed. Brian Christian and Tom Griffiths have written an unusually intelligent, unusually honest popular science book. They acknowledge the gap between formal proofs and human application, they flag the assumptions their algorithms require, and they resist the worst tendencies of the genre. And yet the book’s organizing premise is mistaken in a way that matters, and tracing that mistake tells us something important about the limits of the computational metaphor — and about what education in the age of machines should actually be trying to accomplish.
The premise is this: the problems posed by human life are instances of problems that computer science has already solved. Schedule your tasks like a processor. Cache your possessions like RAM. Manage uncertainty like a Bayesian network. The appeal of this claim is genuine. Human beings do face versions of optimal stopping problems, explore-exploit trade-offs, and scheduling conflicts. The formalism illuminates real structure. I am not disputing that.
What I am disputing is the move the book makes quietly but persistently — the move from structural resemblance to prescriptive equivalence. That an apartment search resembles the secretary problem in its formal structure does not mean that the 37% rule is the correct prescription for human apartment searches. The 37% rule is optimal under a specific and demanding set of conditions: a known number of options, serial irreversible observation, no cardinal information, and an all-or-nothing objective of finding the single best option. Relax any of these — and in real apartment searches, all of them relax — and the prescription changes, sometimes dramatically. The book acknowledges this in footnotes and qualifications, then proceeds as though it hasn’t. The algorithm is introduced, a story is told about how someone applied it to their love life, and the reader absorbs the prescription. The qualifications evaporate in the narrative heat.
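The gap is easy to see in simulation. What follows is a minimal Monte Carlo sketch, not anything from the book: the pool size, the uniform candidate values, and the trial count are all illustrative choices. It scores the same cutoff rule two ways, once on the classic find-the-single-best objective and once on the value of whatever gets chosen.

```python
import random

def run_cutoff_rule(n, cutoff_fraction):
    """One search: reject the first k candidates, then take the first one
    that beats everything seen so far (or the last candidate, if forced)."""
    values = [random.random() for _ in range(n)]
    k = int(n * cutoff_fraction)
    best_seen = max(values[:k]) if k else float("-inf")
    chosen = next((v for v in values[k:] if v > best_seen), values[-1])
    return chosen, max(values)

def evaluate(cutoff_fraction, n=100, trials=50_000):
    """Estimate P(picked the single best) and the mean value of the pick."""
    wins, total = 0, 0.0
    for _ in range(trials):
        chosen, best = run_cutoff_rule(n, cutoff_fraction)
        wins += chosen == best
        total += chosen
    return wins / trials, total / trials

for cutoff in (0.37, 0.10):
    p_best, mean_value = evaluate(cutoff)
    print(f"cutoff {cutoff:.2f}: P(best) ~ {p_best:.3f}, mean value ~ {mean_value:.3f}")
```

On the rank-only, best-or-bust objective the 37% cutoff wins, at a success rate near 1/e. Score the same searcher by the value of the apartment actually obtained and a much earlier cutoff does better. Same situation, different problem specification, different optimum.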
This is not a small caveat. It is the central methodological problem. An optimal algorithm is optimal for its problem specification. The step from “this algorithm solves problem P” to “you should apply this algorithm to your situation” requires establishing that your situation is an instance of P. That step is almost never taken. Instead it is gestured at, through examples close enough to seem convincing.
Here the Searlean distinction between observer-independent and observer-relative facts earns its keep. In computer science, “computation” is observer-relative: it is assigned to a physical process by someone who interprets that process as implementing an algorithm. Stones falling off cliffs “compute” trajectories under the right interpretive frame. When Christian and Griffiths say that the brain “runs” a Bayesian algorithm or that elderly people’s social pruning “implements” the explore-exploit trade-off, they are making a categorization that may be illuminating but is not a discovery about the brain’s intrinsic nature. The brain is doing what it does. We are choosing to describe it computationally. That description licenses neither the prescriptions nor the sense of deep isomorphism that gives the book its rhetorical power.
The Tenenbaum-Griffiths experiments — which demonstrated that human predictions for things like movie grosses and congressional terms closely match Bayesian posteriors computed from real distributional data — are the book’s strongest empirical claim, and they are genuinely striking. But notice what they establish. They establish that humans carry well-calibrated priors in familiar domains, and that their predictions conform to Bayesian expectations post hoc. They do not establish that humans are performing explicit Bayesian inference. A stopped clock displays the right time twice a day, but we should not infer that it is computing the time. The behavioral match is real; the mechanistic claim is a further step that the evidence does not license.
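For readers who want the shape of the claim, the prediction task has a clean form: given that a quantity has reached value t, predict its total, treating the observation point as uniform over the total span. Here is a sketch under those assumptions, with a synthetic heavy-tailed prior standing in for the empirical distributions the studies actually used:

```python
import numpy as np

def posterior_median_total(t, prior_totals):
    """Predict the total, given an observed value t.

    Likelihood: if the true total is T, a uniformly random observation
    point yields t <= T with density 1/T. Weight each prior sample
    accordingly and return the posterior median.
    """
    totals = np.sort(np.asarray(prior_totals, dtype=float))
    support = totals[totals >= t]            # totals consistent with seeing t
    weights = 1.0 / support                  # uniform-observation likelihood
    cdf = np.cumsum(weights) / weights.sum()
    return support[np.searchsorted(cdf, 0.5)]

# Synthetic stand-in for "movie grosses": heavy-tailed, purely illustrative.
rng = np.random.default_rng(0)
prior = (rng.pareto(1.2, size=200_000) + 1.0) * 10.0

for observed in (10, 40, 160):
    print(observed, "->", round(posterior_median_total(observed, prior), 1))
```

With a power-law prior the posterior median is a fixed multiple of what you have seen so far; with a roughly Gaussian prior (life spans, say) the same machinery predicts a value slightly past the average. Those are the patterns the experiments found in human answers, and nothing in the sketch requires the predictor to be running it.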
This matters because the book’s prescriptive ambitions depend on the mechanistic claim. If the brain actually implements Bayesian inference, then improving prediction means calibrating the priors. If the brain merely produces outputs that sometimes match Bayesian predictions, the mechanism could be anything and the prescription is obscure.
There is, however, something the book gets profoundly right — right in a way that its authors may not fully appreciate.
The game theory chapters quietly undermine the entire individualistic orientation of everything that precedes them. Early chapters offer algorithms for individual optimization: how you should stop, explore, schedule, cache. Then the game theory chapters arrive to announce that many of the most consequential problems humans face — climate coordination, market bubbles, the tragedy of the commons, the prisoner’s dilemma — are intractable at the individual level. No amount of individually optimal scheduling prevents a commons from being grazed to destruction. No amount of individually calibrated Bayesian inference prevents an information cascade. These are structural failures that require structural solutions. Mechanism design — changing the rules of the game rather than the strategies of the players — is the correct intervention, and it operates entirely outside the individual optimization framework that dominates the book.
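The level shift is visible in a toy example (the payoff numbers and the fine are invented for illustration). In the standard prisoner’s dilemma, defection strictly dominates for each player no matter how well each one optimizes; the only lever that changes the outcome is the payoff structure itself:

```python
# Payoffs as (row, column) for each (row_move, col_move); standard PD numbers.
PD = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}

def row_dominant(payoffs):
    """Return the row player's strictly dominant move, if one exists.
    (The game is symmetric, so the column player faces the same logic.)"""
    for mine, other in (("C", "D"), ("D", "C")):
        if all(payoffs[(mine, col)][0] > payoffs[(other, col)][0] for col in ("C", "D")):
            return mine
    return None

def tax_defection(payoffs, fine=3):
    """Change the rules, not the players: every defection costs its defector a fine."""
    return {moves: tuple(p - fine * (m == "D") for p, m in zip(pay, moves))
            for moves, pay in payoffs.items()}

print(row_dominant(PD))                 # 'D': defection dominates
print(row_dominant(tax_defection(PD)))  # 'C': the same players now cooperate
```

Nothing about either player’s reasoning changes between the two calls; only the rules do. That is the intervention mechanism design names, and it is invisible from inside the individual-optimization frame.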
The book calls this “computational kindness” in its conclusion, and the concept is valuable: framing problems in ways that lower the cognitive burden on others is a genuine social good. But the deeper implication is not drawn. The Tier 6 intelligences — collective intelligence, collaborative synthesis, the distributed epistemic systems that produce science, markets, and democratic deliberation — are not decomposable into better individual algorithms. They are emergent from the friction and coordination of minds in genuine relationship. The book cannot account for this, because it begins from a framework of individuals and never escapes it.
There is also something important in the observation that LLMs — which Christian and Griffiths would presumably welcome as the apotheosis of the computational approach to cognition — may themselves be understood as lossy compressions of collective human intelligence. The machine was trained on what we wrote, argued, and got wrong over centuries. It reflects our Tier 1 pattern-making back at us with extraordinary fidelity: linguistic, logical-mathematical, associative, retrieval. What it cannot reflect is the thing that happened between us — the collaborative friction that refined an idea, the trust that made knowledge transmissible, the stakes that gave wisdom its teeth. No training corpus captures the difference between knowing what Bayesian inference prescribes and knowing when to trust your priors.
The educational implication is the one that matters most.
Algorithms to Live By is, among other things, a self-help book, and the self-help tradition has always been in tension with the kinds of intelligence that actually need developing. The book teaches readers to recognize algorithmic structure in everyday situations — an underrated skill — and to apply formal tools where the formal conditions obtain. These are genuine contributions. But the Tier 4 intelligences that the book itself exemplifies in its best moments — plausibility auditing, the detection of hidden assumptions, interpretive judgment about when a model applies — are not what the book teaches. They are what the book uses, in its critical moments, to identify where the prescriptions break down.
The 37% rule fails 63% of the time. The book says this, correctly, and frames it as comfort: even optimal strategies produce bad outcomes, so don’t blame yourself. But there is a different lesson available, the harder one: learning to tell whether a problem’s formal structure matches an available algorithm requires exactly the kind of judgment that is almost never taught and that no algorithm supplies. The judgment that your apartment search is sufficiently serial and irreversible and rank-order-only to make the 37% rule applicable is not itself a computation. It is something else — call it practical wisdom, call it phronesis, call it Tier 7 intelligence operating on Tier 4 metacognition. It requires knowing the territory well enough to know when the map applies.
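Both halves of that first sentence show up in a cutoff sweep (this reuses the evaluate function from the optimal-stopping sketch above): 37% really is the best cutoff under the classic specification, and the best cutoff still fails roughly 63% of the time.

```python
# Requires evaluate() from the optimal-stopping sketch earlier in this piece.
for cutoff in (0.10, 0.25, 0.37, 0.50, 0.75):
    p_best, _ = evaluate(cutoff)
    print(f"cutoff {cutoff:.2f}: P(best) ~ {p_best:.3f}")
# P(best) peaks near the 0.37 cutoff, at roughly 1/e ~ 0.37.
```

No cutoff clears 40%, so the comfort the book offers is mathematically honest. Whether the specification fits your situation is the part no sweep can tell you.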
This is the correct educational conclusion: stop training people to apply algorithms, and start training them to evaluate whether the algorithm fits. The former is increasingly automated. The latter is precisely what the machines cannot do — not because they lack the processing power, but because it requires being situated in a world of real uncertainty, with a genuine stake in the outcome.
Algorithms to Live By is a book that earns its own most important lesson by accident. The best moments — when the authors notice that humans stop too early, that the assumptions have been violated, that mechanism design operates at a different level than individual optimization — are not algorithmic moments. They are moments of judgment. The book is most useful not as a source of prescriptions but as a training ground for the kind of critical attention that knows when to trust a proof and when to ignore it.
The algorithm for that kind of attention has not been written, and I suspect it cannot be.
Tags: algorithms decision theory, observer-relative computation, computational metaphor limits, Tier 4 plausibility auditing, Tier 6 collective intelligence, mechanism design game theory, overfitting prescription, Christian Griffiths, theorist.ai
This piece is part of the ongoing argument at Theorist.ai — a dedicated home for the question of what education owes the next generation of thinkers, at the precise moment when machines have become genuinely good at answering questions and genuinely poor at knowing which questions are worth asking.

