The Supervision Problem: What Automate This Couldn't Name
A literary review essay — Automate This: How Algorithms Came to Rule Our World, Christopher Steiner (2012)
Here is the thing Christopher Steiner’s book keeps circling without ever landing on: the problem was never the algorithm. The problem was always the person who stopped watching it.
Automate This opens with two scenes. In the first, two Amazon sellers’ pricing bots enter a recursive feedback loop and escalate the price of an out-of-print genetics textbook to $23,698,655.93. Nobody intended this. Nobody noticed until a human being happened to look. The second scene is the Flash Crash of May 6, 2010 — $1 trillion in market value evaporated in 300 seconds, then recovered almost as fast. Steiner says, rightly, that the market could not have moved so far so fast without algorithms. What he does not say — what the book never quite gets around to saying — is that both events end the same way: a human being steps in and overrides the system. The algorithm did exactly what it was designed to do. The design did not anticipate those circumstances. No human was watching closely enough to stop it before the damage propagated.
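The mechanics of that feedback loop are worth making concrete. Contemporary accounts of the incident reported that one seller priced its copy at roughly 1.27 times its rival’s, while the rival priced at 0.9983 times the first: each rule sensible alone, explosive together. A minimal sketch, with an assumed starting price:

```python
# A minimal sketch of the two-seller feedback loop. The multipliers are the
# ones reported in accounts of the incident; the starting price is an
# assumption for illustration.

def run_pricing_loop(start_price=40.00, cap=23_698_655.93):
    rival = start_price * 1.27          # the higher-priced seller
    rounds = 0
    while rival < cap:
        undercutter = 0.9983 * rival    # rule 1: price just below the rival
        rival = 1.270589 * undercutter  # rule 2: price well above, trusting reputation
        rounds += 1
    return rounds, rival

rounds, price = run_pricing_loop()
print(f"After {rounds} repricing rounds the book costs ${price:,.2f}")
# Each full round multiplies the price by 0.9983 * 1.270589, about 1.268:
# exponential growth that nobody notices until a human looks.
```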
That is the book’s true subject. It never becomes the book’s primary argument.
Steiner is a journalist who knows how to find a story, and Automate This is full of them. Thomas Peterffy, born in a Budapest bomb shelter in 1944, who taught himself programming from English manuals and hacked the NASDAQ with a mechanical typing machine. David Cope, the music professor who spent seven years building an algorithmic Bach and then destroyed the databases in 2004 because the machine had become more famous than his art. The construction crews blasting through Pennsylvania granite to build Spread Networks’ 825-mile dark fiber line — a $200 million tunnel shaving four milliseconds off the Chicago-to-New-York round trip. These are genuine narratives, reported with care. The book is compulsively readable. It is also, considered as an argument, asymmetric in a way that compounds with every chapter.
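The Spread Networks story rests on a single physical fact: at these scales, route length is latency. A back-of-envelope check shows why a straighter line is worth the money; the fiber refractive index below is my assumption, since the text gives only the route length.

```python
# Back-of-envelope check on the route's physics. The 825-mile figure is from
# the text; the refractive index of optical fiber (~1.47) is my assumption.
C_KM_PER_S = 299_792           # speed of light in vacuum
FIBER_INDEX = 1.47             # light in fiber travels at roughly c / 1.47
route_km = 825 * 1.609         # miles to kilometers

round_trip_ms = 2 * route_km / (C_KM_PER_S / FIBER_INDEX) * 1000
print(f"Theoretical round trip: {round_trip_ms:.1f} ms")   # about 13 ms
# Older, longer routes ran milliseconds slower. The entire project exists
# to delete those milliseconds.
```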
Steiner is consistently right about what algorithms do. He is consistently optimistic about what that means for humans. The gap between those two positions is where the important question lives — and it is a question his book equips us to ask without fully asking it.
What the Chapters Actually Document
Run through Automate This with a different lens — not “what have algorithms learned to do?” but “what specific human capacity does each domain require, and what happens to the people and institutions that stop supplying it?” — and the book’s chapters reorganize themselves around a distinction it never develops.
There are things machines are now superhuman at: pattern recognition across large datasets, fact retrieval from enormous corpora, arithmetic without fatigue, syntactic correctness in language and code, identifying correlations in historical data. The curriculum built for the industrial economy taught humans to develop exactly these capacities, and not because educators were foolish. Before machines arrived, arithmetic speed and fact retrieval were genuinely valuable. They are no longer valuable as competitive human capacities. Training people to develop them at the expense of everything else is now something close to malpractice.
What machines cannot do — and what no serious researcher claims they are close to doing — is something harder to name but instantly recognizable in practice. Call it knowing when to distrust the result. Knowing what the model assumes and whether those assumptions hold here. Deciding what is worth solving in the first place. Constructing the problem itself, not just running a computation once the problem has been handed to you. Asking what would have happened under different conditions, in ways that require genuine understanding of mechanism rather than statistical pattern. Knowing, without recomputing, that this answer is wrong.
These are not soft skills. They are not emotional intelligence or interpersonal sensitivity, though those matter too. They are specific cognitive capacities — metacognitive and causal — that allow a person to use a powerful tool rather than be used by it. And every failure mode in Automate This is a case study in their absence.
Chapter by Chapter: The Intelligence Being Skipped
Chapter 1, the Peterffy chapter, is the book’s best, because it traces a mechanism rather than celebrating an outcome. The three phases Steiner derives from Peterffy’s career are genuinely clarifying: Phase 1 (algorithms advise), Phase 2 (algorithms execute), Phase 3 (algorithms adjust independently and write new algorithms). Phases 1 and 2 are demonstrated with specific, dateable evidence. Phase 3 is asserted — it is where the book’s most alarming implications live — but never demonstrated with comparable rigor. More telling is what Peterffy himself says at the chapter’s close. The man who built the first fully automated trading system, who earned $50 million in 1988, who rang the NASDAQ’s opening bell at his $12 billion IPO — he wants minimum holding times on bids. He fears a liquidity crisis from rogue algorithms. He says: “I only saw the good sides at the time.” Steiner reports this and moves on. It is, in fact, the most important sentence in the chapter.
What failed in Peterffy’s own near-catastrophe — the phantom NYSE trades generated by a spare tablet device left near a drafty door — was something no algorithm can supply for itself: the capacity to notice that the outputs no longer make sense, to ask whether what is happening is what was intended, and to stop the system before the damage becomes irreversible. That capacity requires someone whose job it is to watch. Peterffy had to run physically from the World Trade Center to the NYSE floor to find out what was happening. When no one is watching, algorithms do not worry. They execute.
Chapter 2, the mathematical history chapter, contains the book’s most important buried critique. Gauss’s own warning — that errors of any magnitude are possible within a normal distribution — appears briefly and disappears. The Gaussian copula, David X. Li’s formula that Wall Street deployed as stone-solid fact in the years before 2008, is noted as misuse rather than named as a structural failure of institutional judgment. Li’s formula was a tool for measuring the risk of correlated mortgage defaults. It assumed away the very correlation it was supposed to measure in extreme conditions. Wall Street did not misunderstand the formula. Wall Street chose to make it the only arrow in the quiver — to replace the judgment that asks “does this model hold under these conditions?” with the model itself. What failed was not the mathematics. What failed was the human capacity to interrogate a model’s assumptions before staking a financial system on them. To ask: what does this formula actually require to be true, and is it true here?
This is not arcane. It is the most basic thing you can do with a tool: understand when it applies and when it does not. An algorithm cannot perform that check on itself. An institution that stops performing it has outsourced its judgment to its own machinery.
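The structural flaw has a name in the literature: the Gaussian copula exhibits zero tail dependence, meaning the co-movement it models between two assets effectively vanishes in the joint extremes, which is exactly where crises live. A minimal simulation makes the point; the correlation value is illustrative, not a figure from the book.

```python
# A minimal demonstration of zero tail dependence in the Gaussian copula.
# rho = 0.7 is an illustrative correlation, not a figure from the book.
import numpy as np

rng = np.random.default_rng(0)
rho, n = 0.7, 2_000_000

# Correlated standard normals: the engine of the Gaussian copula.
z1 = rng.standard_normal(n)
z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n)

for q in (0.90, 0.99, 0.999):
    t1, t2 = np.quantile(z1, q), np.quantile(z2, q)
    p_joint = np.mean((z1 > t1) & (z2 > t2))
    p_cond = p_joint / (1 - q)      # P(asset 2 extreme | asset 1 extreme)
    print(f"q={q}: P(both extreme | one extreme) ~ {p_cond:.3f}")

# The conditional probability shrinks as q -> 1: the model makes joint
# crashes rarer exactly where 2008 made them more common.
```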
Chapter 3, the music chapter, is where the book is most honest about limitation. Cope’s Emmy can produce Bach imitations that fool experts in blind tests. That is a hard result. It cannot produce Nirvana’s Nevermind — music that is structurally discontinuous with the corpus from which it learned. Steiner acknowledges this directly: “Almost impossible.” What he is naming, without quite naming it, is the difference between retrieval and origination. An algorithm trained on what was popular will reliably reproduce what was popular. It has compressed the pattern. What it cannot do is produce the first instance of a pattern that has never existed — the thing that sounds wrong to everyone until it sounds like the only thing that ever mattered. That requires something the training data cannot contain, because the training data, by definition, precedes it.
Chapter 6, the medicine chapter, contains the book’s strongest empirical evidence for algorithmic superiority and its most underweighted counterargument. The diagnostic statistics are specific and clinically significant — algorithms improving Pap test cancer detection rates, reducing mammogram false negatives at Stanford, the UCSF robot pharmacy’s two million prescriptions without error. These are genuine outcomes. But Jerome Groopman’s Anne Dodge case — fifteen years of misdiagnosis, a diet recommendation actively killing her, saved finally by a gastroenterologist who recognized celiac disease — is not merely a heartwarming counterexample. It is a precise description of what algorithms are structurally worst at: the patient whose symptoms are statistically rare, whose presentation activates anchoring bias in both human and algorithmic systems, and whose correct diagnosis requires overriding the prior probability assigned to the presenting cluster.
Steiner’s resolution — reserve exceptional diagnosticians for atypical cases — is sensible on its face. It assumes we can correctly identify atypical patients before diagnosis. That is precisely the problem algorithms are supposed to solve. The circularity is left unaddressed. And it points toward something real: algorithmic diagnosis is very good at the modal patient, the one whose presentation matches the training distribution. What it requires humans to supply is the judgment to ask, for this particular person in front of me, whether we are still inside that distribution. That judgment cannot be delegated to the algorithm, because the algorithm is the thing being questioned.
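The modal-patient problem can be stated in a few lines of arithmetic. Suppose, with illustrative numbers rather than the book’s, that 99 percent of patients presenting with a given symptom cluster have the common diagnosis and 1 percent have a rare one. A classifier that minimizes expected error on that distribution behaves exactly like fifteen years of Anne Dodge’s doctors:

```python
# A toy formalization of the modal-patient problem (illustrative numbers,
# not figures from the book): 99% of patients with this presentation have
# the common diagnosis, 1% have the rare one.
p_common, p_rare = 0.99, 0.01

# A diagnostic model that minimizes expected error on this distribution
# always predicts the common diagnosis...
overall_accuracy = p_common         # ...and is right 99% of the time,
accuracy_on_rare = 0.0              # while being wrong for every rare patient.

print(f"Overall accuracy:          {overall_accuracy:.0%}")
print(f"Accuracy on rare patients: {accuracy_on_rare:.0%}")
# The headline statistics and the Anne Dodge case are the same phenomenon
# seen from two distances.
```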
Chapter 7, the personality-reading algorithm chapter, may be the book’s most socially consequential section. Kelly Conway’s eLoyalty system routes customer service calls to agents who share the caller’s personality type. Personality-matched calls achieve 92% resolution in five minutes; mismatched calls achieve 47% resolution in ten. If that result holds, it is significant. What Steiner surfaces in the final paragraphs — and immediately leaves — is the structural implication: if algorithms route all of our professional and commercial interactions to like-minded personalities, we systematically eliminate the productive friction of difference.
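Before leaving the numbers: if they hold, the operational gap is larger than the resolution rates alone suggest, because the matched calls are also half as long. A quick calculation, using the chapter’s figures:

```python
# Resolutions per agent-minute, using the figures reported in the chapter.
matched = 0.92 / 5        # 92% resolved in five minutes
mismatched = 0.47 / 10    # 47% resolved in ten minutes
print(f"Matched routing resolves {matched / mismatched:.1f}x more calls per unit of agent time")
# About 3.9x: an efficiency gain large enough to decide how an industry
# routes every call, which is why the homogenization follows.
```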
This homogenization matters beyond individual comfort. The capacity of groups to solve what no individual could — what researchers call collective intelligence — depends on genuine epistemic diversity, on encounter with minds that process information differently, on the discomfort of having your assumptions challenged by someone who does not share them. Science works this way. Markets sometimes aggregate information correctly this way. Democracy, at its best, works this way. An algorithm optimizing individual interactions by routing everyone toward pleasant agreement is, at institutional scale, dismantling the cognitive substrate on which collective intelligence depends. It is solving the wrong problem with extraordinary efficiency. Steiner is the first person in the book to notice this. He is the last person in the book to pursue it.
The Question the Book Cannot Ask
The deepest limitation of Automate This is not analytical; it is structural. Steiner is writing in 2012, before the vocabulary for this argument existed in widely circulated form. He cannot name what he is seeing because the framework had not yet been built. He has the cases. He has the pattern. He has Peterffy’s own second thoughts, the Groopman counterargument, the Finkel-Karney research showing that dating algorithms do not outperform random matching, Conway’s unresolved concern about homogenizing human interaction. What he lacks is the organizing question that would allow him to say: the real issue is not what algorithms have learned to do, but what specific human capacities each algorithmic domain requires humans to continue supplying — and what happens, institutionally and socially, when those capacities atrophy or are never developed in the first place.
The education prescription in the book’s final chapters — universal programming education, national debt forgiveness for engineering graduates who don’t enter finance — is not wrong. It is incomplete in a way that matters. Teaching the next generation to write code is teaching them to build better tools. Teaching them to know when the tool’s output should be trusted, to construct the problem the tool is meant to solve, to ask what the model assumes and whether those assumptions hold, to notice that the result doesn’t make sense before acting on it — that is a different and harder curriculum. The book does not distinguish between these. It cannot, quite, because it has not yet named what it is asking for.
What Automate This documents, chapter by chapter, is not a story about machines replacing humans. It is a story about institutions that applied algorithms without supplying the human intelligence those algorithms required — and about the specific, categorizable costs of that failure. The Amazon bots needed someone whose job was to ask: does this price make sense? Wall Street needed people who could interrogate a model’s assumptions: what does this formula require to be true, and is it true here? Medicine needs what Groopman provides: the capacity to override the statistical prior when the patient in front of you is the exception the cluster cannot contain. Conway’s routing system needs someone to ask: what are we optimizing toward, and is the thing we are discarding — the productive friction of encountering difference — actually the thing we most need to preserve?
The machines are not the problem. The curriculum that didn’t notice the machines is the problem.
Peterffy said it plainly, and Steiner reported it plainly: I only saw the good sides at the time. That sentence is the book’s argument. It is just not the book’s conclusion. The conclusion is optimistic — learn to code, ride the wave, the machines will save us more than they destroy. The evidence is more complicated than that. It has always been more complicated than that.
What the evidence actually says is this: the machines will do what we design them to do, in the circumstances we anticipate, without error and without fatigue. What they will not do is notice when the circumstances have changed. That is irreducibly human work. And we have spent forty years building institutions that forgot it.
Tags: Automate This Christopher Steiner review, algorithmic intelligence human oversight, Flash Crash supervision failure, irreducibly human AI, causal reasoning institutions

