The Dictator's Handbook

Power pyramids matter

Bruce Bueno de Mesquita and Alastair Smith developed selectorate theory through formal mathematical modeling in political science during the 1990s and early 2000s. The theory proposed that political outcomes could be predicted by analyzing three populations: the nominal selectorate (everyone who could theoretically choose a leader), the real selectorate (those who actually choose), and the winning coalition (the essential supporters whose loyalty keeps a leader in power).

The relative sizes of these groups have a profound effect on how leaders behave and what outcomes follow.

Academic Reception

The academic reception was mixed. Using game theory and rational choice models to predict political behavior was controversial. The theory's stark claim - that leaders prioritize survival over public welfare, and that this explains governance patterns across democracies and autocracies alike - struck some as overly reductive.

The empirical record was harder to dismiss. Bueno de Mesquita and Smith analyzed bilateral aid transfers by OECD nations between 1960 and 2001. They found that leaders in recipient countries were more likely to grant policy concessions when their winning coalitions were small, since they could easily compensate their supporters for unpopular decisions. The mathematics kept predicting real outcomes.

Popularization

In 2011, Bueno de Mesquita and Smith wrote The Dictator's Handbook: Why Bad Behavior Is Almost Always Good Politics. The book translated selectorate theory for general readers. The argument: when the winning coalition is small, leaders use private goods to maintain support. When the winning coalition is large, leaders must provide public goods, because individually rewarding millions of supporters is economically impossible.

The contrast is stark, and it can be phrased in terms of pyramids of power. A steep pyramid concentrates rewards on the few and produces few public goods. A gently sloping pyramid forces leaders to provide public goods, and is better for the majority.
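The tradeoff can be expressed as a toy calculation. This is a sketch of the book's logic, not a model from the book itself: the function name and the numbers are illustrative assumptions.

```python
def cheaper_strategy(budget, coalition_size, public_value_per_person):
    """Compare the per-supporter value of splitting the budget into
    private rewards against what each person gets if the same budget
    is spent on public goods."""
    private_share = budget / coalition_size
    if private_share > public_value_per_person:
        return "private goods"
    return "public goods"

# A junta of 150 essential backers: the budget goes much further as payoffs.
print(cheaper_strategy(1_000_000, 150, 40))        # -> private goods
# A coalition of millions of voters: bribing each one individually is hopeless.
print(cheaper_strategy(1_000_000, 5_000_000, 40))  # -> public goods
```

The crossover is purely arithmetic: once the coalition is large enough that each member's private share falls below the value of public goods, the leader's cheapest survival strategy flips.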

The theory extends beyond governments. Most publicly traded companies operate on the dictator side of the scale - a small number of people determine CEO survival, small enough that enriching this group matters more than creating shareholder value.

Value of Simplicity

What makes selectorate theory useful is that it predicts behavior without requiring moral judgments. The simplicity of the framework - three variables predicting complex political outcomes - made it memetically successful.

Critics argue the theory is too reductive. Jessica L.P. Weeks contends that selectorate theory makes flawed assumptions about authoritarian regimes, wrongly presuming that members of small winning coalitions lose power when rulers fall, and incorrectly assuming all actors perceive situations identically. The theory may work better as a first-order approximation than as a complete model.

Simple models have value. They suggest that any mechanism that reduces the cost of maintaining a small coalition - better surveillance, more effective propaganda, cheaper ways to reward loyalists - pushes organizations toward autocratic structures. Any mechanism that makes large-coalition governance more efficient - cheaper education, easier information access, lower coordination costs - pushes toward democratic structures.
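The same toy arithmetic makes the cost mechanism concrete. Under the illustrative assumption that a "reward efficiency" multiplier captures cheaper ways to reward loyalists, raising that efficiency enlarges the coalition size at which private rewards still beat public goods, widening the autocratic regime:

```python
def autocracy_threshold(budget, public_value_per_person, reward_efficiency=1.0):
    """Largest coalition size at which private rewards still beat public goods.

    reward_efficiency > 1 is a hypothetical stand-in for cheaper loyalty
    (surveillance, propaganda, cheaper payoffs): each budget unit buys more
    private reward, so the threshold rises and autocracy stays viable longer.
    """
    return budget * reward_efficiency / public_value_per_person

baseline = autocracy_threshold(1_000_000, 40)                        # 25,000 supporters
with_cheap_loyalty = autocracy_threshold(1_000_000, 40,
                                         reward_efficiency=4.0)      # 100,000 supporters
```

Quadrupling the efficiency of private rewards quadruples the coalition size that can still be bought off, which is the sense in which cheaper coalition maintenance pushes structures toward the autocratic end.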

Which brings us to AI.

AI and Power

AI as part of the collective commons - a public good

AI is unquestionably a power amplifier. I contend that AI will enable us to 'do science faster'. It has other power-boosting properties too: it makes surveillance easier, targeted advertising more precise, and engineering research cheaper.

AI is unprecedented. We have never before had an engine that can replace knowledge work at such scale. The Dictator's Handbook's framing suggests we should care a great deal about whether it is a public good, or whether it is in the hands of a very few.

We have had such an engine before: the steam engine, which replaced manufacturing work at scale and powered the industrial revolution.

Factory Discipline

The industrial revolution required a transformation in human behavior. Before factories, most workers controlled their pace and timing. Sidney Pollard documented the resistance: workers "were considerably dissatisfied, because they could not go in and out as they pleased." One observer noted that highlanders could never sit easy at a loom - "it is like putting a deer in the plough."

Factories needed discipline. Workers had to arrive on time, work at machine pace, follow instructions precisely. E.P. Thompson showed how this required retraining human time-sense. Pre-industrial workers organized their days around tasks - you worked until the harvest was done, until the batch was finished. Factory work was organized in units of time that could be bought and sold.

Schools emerged to solve this retraining problem. Thompson cited Powell, who saw education as training in the "habit of industry" - children should become "habituated, not to say naturalized to Labour and Fatigue" by age six or seven. Samuel Bowles and Herbert Gintis made the mechanism explicit in their 1976 analysis Schooling in Capitalist America. Common Schools, they found, "quite literally emerged from labor strife." Their purpose was "to instill obedience and a work-ethic conducive to undemocratic authoritarian factory production." The structure of schools deliberately mirrors workplace hierarchy: administrators as management, teachers as supervisors, students as workers. Schools train the personality traits factories need - obedience, punctuality, tolerance for routine, acceptance of hierarchy.

One question is what kind of discipline the AI revolution will require, and what kind of institutions will emerge to train it.

Who Will Get the AI Power?

The Dictator's Handbook on its own does not predict what kind of AI future lies ahead. The simple model, and the precedents, suggest we should consider ownership of AI power through the lens of steep pyramids or shallow ones.

Technologies shape institutions first. Institutions shape what comes next. The industrial revolution shaped schools, which shaped what humans became capable of thinking. AI will shape new institutions, which will shape what future humans become and think.

We do not have control over AI 'progress'. It is as inevitable as the steam engine. New Luddites will not stop it in its tracks. Attempts to stop it will drive it underground, so that only the most power hungry have access to it. Attempts to stop or slow AI hand more power to the few.

It is not the speed of AI progress that we need to modify, but the direction. The question of who gains most from AI must be examined with great care: what matters is who uses AI and how, whether it benefits all or just a few, and how steep the pyramid becomes. We do have control over what AI is used for. We can leave AI to those in power, letting a narrow coalition control it, or we can learn to use it ourselves and create things of beauty and of lasting societal value.

AI is Accessible

AI is conversational.

How to use it well is not out of reach. It can be learned.

We have a choice about how to use AI.

Let's use it to grow the public good.