In this hour-and-a-half video, Robert Sapolsky surveys the weaknesses (and the strengths) of reductionism in science. We learn that biological systems are complex, nonlinear, aperiodic systems in which reductionism breaks down. That doesn't mean that reductionism is useless, just that we need to understand its limitations, and we need better models for understanding scale-free and fractal systems in nature. The video discusses the butterfly effect, the Lorenzian waterwheel, and other models of such complex systems.

This lecture is based on ideas from James Gleick's book "Chaos: Making a New Science" which is required reading for the course "Human Behavioral Biology" of which this is lecture 21.


I largely disagree with Sapolsky's caricature of the so-called "dark ages".  Learning was not nonexistent in the medieval period. Scholarship has survived from the Carolingian and Ottonian renaissances. The 12th century renaissance and the conquest of Toledo in 1085 show that civilization in Europe was very actively engaged in the constant flux we call history. Sapolsky is correct that there was incredible ignorance about explaining causality in the world. Even though science has improved its approaches and become ever more incisive, it is probable that Enlightenment and Modern science are only a few epsilon ahead of medieval science (except in arrogance, at which they indubitably excel). Look up Johannes Scotus Eriugena (c.815-c.877) and Lanfranc (c.1005-1089). Minor nit: the Alhambra is in Granada, not Toledo.

Also, we should not forget Alhazen's pathbreaking contribution early in the 1000s of recommending systematic observation and experimental tests for science (which Dante (c.1265–1321) in his great "Comedìa" seemed to intimate). Finally, I'll note that the Islamic world (and China's great Tang dynasty) was triumphant and ascending in the period sometimes referred to as the "dark ages", which further points out the dubiousness of that designation.

Sapolsky claims that reductionism is "the single most important concept in all of science in the last 500 years". To make further progress, must science revise its basic approach to understanding?

Reductionism: the idea that one can understand complex things by breaking them into their component parts. The concept of linearity, or additivity, suggests that the parts combine by simple addition to form and explain the complex whole.

Corollary: if you know the starting state of a system, you have 100% predictability of its future states. Conversely, if you know the whole system and its parts, you can deduce its starting state.

So under the assumption of reductionism, extrapolation and prediction are possible and powerful for understanding. A blueprint or road map is needed for the extrapolations: you have to know the rules or laws of addition or combination of parts into the whole.
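The additivity assumption can be made concrete in a few lines. The sketch below is my own illustration (the function names are not from the lecture): for a linear system, the response to combined inputs equals the sum of the responses to each input alone, so the parts really do explain the whole; for a nonlinear system, superposition fails.

```python
def linear_system(x):
    """A linear (additive) system: superposition holds."""
    return 3 * x

def nonlinear_system(x):
    """A nonlinear system: superposition fails."""
    return x * x

a, b = 2, 5

# Linear: f(a + b) == f(a) + f(b) -> the whole is the sum of its parts.
assert linear_system(a + b) == linear_system(a) + linear_system(b)

# Nonlinear: (a + b)^2 = 49, but a^2 + b^2 = 29 -> the whole is NOT
# the sum of its parts; studying pieces in isolation misleads you.
assert nonlinear_system(a + b) != nonlinear_system(a) + nonlinear_system(b)
```

This is the precise sense in which "linearity implies reductionism": only when superposition holds can component-by-component analysis be recombined into a prediction for the whole.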

Measurements have variability which the reductive world view interprets as noise in the system which needs to be controlled. That is, noise represents instrument error: it isn't "real". The reductive world view suggests that by looking into the component parts with a better and better microscope (breaking the component parts into their components and so forth), you can reduce the noise (less variability should be observed) and improve predictions.

"Sitting way down at the bottom of all these reductive processes there is an iconic and absolute idealized norm as to what the answer is ... Variability is discrepancy from seeing what the actual true measure is."
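The reductive picture of noise can also be sketched in code. Under that view a measurement is an idealized true value plus instrument error, so averaging more (or finer) measurements should converge on the "real" answer at the bottom. The numbers below are illustrative, not from the lecture:

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

TRUE_VALUE = 10.0   # the idealized norm "sitting way down at the bottom"

def measure(noise_sd=1.0):
    """One measurement = true value + instrument error (Gaussian noise)."""
    return TRUE_VALUE + random.gauss(0, noise_sd)

def mean_of(n):
    """Average n repeated measurements."""
    return sum(measure() for _ in range(n)) / n

# More measurements -> the average drifts toward the idealized value;
# the error of the mean shrinks roughly like 1/sqrt(n).
small_sample = mean_of(10)
large_sample = mean_of(10_000)
print(abs(small_sample - TRUE_VALUE), abs(large_sample - TRUE_VALUE))
```

This is exactly the worldview Sapolsky is challenging: the code above only works because the noise was put in by hand as error around a true value. In the chaotic systems discussed later, no such idealized value exists for the averaging to converge to.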

But reductionism breaks down in biological systems.

1st example: Hubel & Wiesel (1981 Nobel prize) discovered the functionality of the layered visual system. Neurons in the retina corresponded to the simplest parts of the visual cortex (a 1-1 mapping: they could recognize dots). Neurons in the second layer of the visual cortex can recognize straight lines. A third layer can respond to curves. A triumph of reductive science. But the logical reductive implication is that still higher layers of the visual cortex would recognize more complex patterns, which were referred to as "grandmother neurons". Although a few such neurons have been found ("sparse" neurons such as the Jennifer Aniston neuron), the combinatorial explosion of possibilities makes it impossible for there to be many of them.

Experimental efforts to identify these higher layers in the brain have failed. "There aren't enough neurons in the brain to do facial recognition in a point-to-point reductive manner." There aren't enough neurons to recognize the faces of everyone you know from every possible angle. Current research focuses on the non-reductive approach of neural networks which attempts to identify how patterns of information are encoded in large networks of neurons.

Bifurcating systems are scale-free. They model dendrites of neurons (dendritic trees) and the circulatory and pulmonary systems. You can't code for such bifurcating systems in a point-for-point reductive way wherein each gene specifies each bifurcation.
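One short recursive rule can generate a bifurcating tree of any depth, which is the alternative to point-for-point coding. The sketch below is my own illustration (the 0.7 scaling factor is arbitrary): the same rule applies at every scale, so a handful of "instructions" specifies thousands of branch points.

```python
def bifurcate(length, depth):
    """Return the branch lengths of a binary tree in which every branch
    splits into two children, each a fixed fraction of its parent --
    the same rule applied at every scale."""
    if depth == 0:
        return [length]
    left = bifurcate(length * 0.7, depth - 1)
    right = bifurcate(length * 0.7, depth - 1)
    return [length] + left + right

tree = bifurcate(1.0, depth=10)
# One three-line rule specifies 2^11 - 1 = 2047 branches; a
# point-for-point genetic code would need an instruction per branch.
print(len(tree))   # 2047
```

This is the sense in which "fractal genes" (mentioned later in the lecture) can be economical: instructions independent of scale, rather than a blueprint entry for every bifurcation.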

The importance of randomness, such as Brownian motion leading to unequal distributions of mitochondria and transcription factors in daughter cells, is not accounted for in a perfect prediction system based on reductive parts analysis. "Chance is throwing off this ability to know the starting state and know what this complex system is going to be."

Ivan Chase studied dominance hierarchies in fish and found that randomness plays a role in individual pairings, which makes transitivity fail in group dynamics. "The dyadic pairing dominance outcomes has zero predictability over what the actual dominance hierarchy is going to be like." Chance plays a role as well: chance interactions of individuals affect their behavior. Reductionism breaks down again.

The most interesting biological systems cannot be explained with classical reductionism.


Non-linear systems or non-additive systems are those which cannot be explained by a reductive science of breaking them into their component parts.

Aperiodic deterministic systems: you need to fully compute each step in the system to predict its behavior: you cannot skip steps or you lose predictability because the behavior of the system does not follow a clear periodic pattern. Periodicity implies reductionism.

Nondeterministic systems are random: there is no rule to determine the next step in the system. They are not chaotic.

On page 27 of James Gleick's book "Chaos: Making a New Science", the Lorenzian waterwheel is described: a waterwheel with holes in the bottom of its buckets to release water. As the flow of water increases, the system progresses from boring (all the water flows out before any movement can begin), to periodic (perfectly predictable), to an increasingly complex period-doubling behavior, to chaotic (completely unpredictable: "a pattern that never repeats") behavior. That is, the direction of the wheel can change at unpredictable times in the process. Heat/convection systems can have behavior akin to the waterwheel model. James Yorke's famous paper "Period Three Implies Chaos" showed that a system with a period-three cycle must also exhibit chaos (Gleick, p. 73).
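The waterwheel's route to chaos (fixed behavior, then periodic, then period doubling, then chaos) is the same sequence exhibited by the logistic map x_{n+1} = r·x·(1−x), the standard one-line system discussed at length in Gleick's book. The lecture uses the waterwheel, not this map, so treat the sketch below as a stand-in: it estimates the period of the long-run behavior at several drive strengths r.

```python
def attractor_period(r, x0=0.5, warmup=2000, max_period=64, tol=1e-9):
    """Iterate the logistic map past its transient, then find the
    smallest p with x_{n+p} == x_n (within tol). Returns None when no
    period <= max_period exists -- the aperiodic, chaotic regime."""
    x = x0
    for _ in range(warmup):
        x = r * x * (1 - x)
    orbit = [x]
    for _ in range(max_period):
        x = r * x * (1 - x)
        orbit.append(x)
    for p in range(1, max_period + 1):
        if abs(orbit[p] - orbit[0]) < tol:
            return p
    return None

print(attractor_period(2.8))   # 1: settles to a fixed point ("boring")
print(attractor_period(3.2))   # 2: the period doubles
print(attractor_period(3.5))   # 4: it doubles again
print(attractor_period(3.9))   # None: "a pattern that never repeats"
```

Turning up r plays the role of turning up the water flow: the same deterministic rule passes through ever-faster period doublings and into aperiodicity.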

Sapolsky explains that previous generations of scientists observed the turbulent/chaotic regime but focused on the aspects that were periodic, predictable, and reductive. That is, they broke off their studies just where reductionism broke down, leading to a biased and incorrect view that the world is reductive, when in fact almost all of the interesting cases live in the chaotic regime.

This is a profound point about science: when you control for variables in your experiments, you may miss the real complexity in the system by designing experiments that only look at the reductive, predictable parts of the system.

When periodic systems are perturbed, they eventually return to their periodic behavior. An attractor is the point in a representation of the system's behavior toward which the system settles. Strange attractors in complex systems never return to the same point but oscillate around a center in a butterfly-like pattern. In periodic systems, variability around the attractor is noise. In chaotic systems, variability is the essence and nature of the system: there is no noise, just uniqueness.

The butterfly effect: when tiny differences in initial conditions produce large and unpredictable effects.

The variability in strange attractors entails the butterfly effect. But the profound conclusion is that the system is really inherently variable in its nature and essence. So strange attractors have no predictability: no matter how accurate the instrumentation, the variability and noise persist (it is a scale-free system). Philosophically we conclude that there is no correct idealized answer. If there is an idealized answer, it is that noise and unpredictability are fundamental and inherent. The noise and variability are the system.

Complex systems in our world, especially biological ones, are inherently complex and are not subject to reductive analysis. Their "noise" is the system itself!

A fractal is a complex pattern that is scale-free (the idea of self-similarity may be clearer). The appearance and complexity of a fractal is the same no matter how closely you examine it. More formally, a fractal represents fractional dimensionality. In a fractal the amount of variability is the same no matter the resolution of the measurement.
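"Fractional dimensionality" can be computed directly for exactly self-similar objects. As a sketch (the Koch curve is a textbook example, not one from the lecture): the Koch curve is built by replacing every segment with 4 copies at 1/3 the size, at every scale, giving a similarity dimension of log 4 / log 3, strictly between a line (1) and a plane (2).

```python
import math

def similarity_dimension(copies, scale_factor):
    """D = log(N) / log(1/s) for a self-similar object made of
    N copies of itself, each scaled down by factor s."""
    return math.log(copies) / math.log(1 / scale_factor)

koch = similarity_dimension(4, 1/3)
print(round(koch, 3))   # 1.262: more than a line, less than a plane

# Zooming in never smooths it out: the measured length grows by 4/3
# at every refinement, so there is no resolution at which the
# "extra" variability disappears.
for level in range(5):
    print(level, (4/3) ** level)
```

That last loop is the fractal version of the measurement story: sharper rulers don't reduce the variability, they reveal more of it, at every scale.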

The circulatory system, pulmonary system, neural dendrites, and the branches of a tree are all biological fractals. Fractal genes: genes giving instructions independent of scale.

Sapolsky discusses a study he did with undergraduate Steven Balt, "Reductionism and Variability in Data: A Meta-Analysis", published in 1996 in "Perspectives in Biology and Medicine". The paper was an analysis of papers on the effects of testosterone on behavior at the organismal, organ system, single organ, multicellular, single cell, and subcellular levels. They determined that the coefficient of variation does not get smaller as the resolution of analysis increases. That is, a fractal model, rather than a reductive one, appears to be more apt.
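The statistic behind that comparison is the coefficient of variation, CV = standard deviation / mean, which makes spread comparable across measurements on wildly different scales. The data sets below are made up purely for illustration; the paper's actual finding was that CV does not shrink as the level of analysis gets finer.

```python
import statistics

def cv(values):
    """Coefficient of variation: relative spread, unitless, so it can
    be compared across measurement levels with different units."""
    return statistics.stdev(values) / statistics.mean(values)

# Two hypothetical data sets on very different scales...
organism_level = [98, 110, 101, 93, 105]         # e.g. whole-animal scores
cellular_level = [0.21, 0.18, 0.24, 0.19, 0.23]  # e.g. per-cell assay

# ...can still be compared for *relative* variability:
print(round(cv(organism_level), 3))
print(round(cv(cellular_level), 3))
```

Under the reductive prediction, CV should fall as you descend from organism to cell to molecule; the meta-analysis found that it doesn't.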

"Variability is the system rather than discrepancy from the system."

Reductionism is still very useful for biological systems "if you are not very picky, if you are not very precise". On the average, science and medicine can identify incisive results. But they do not apply to every case, because then you enter the world of the nonperiodic, nonlinear, fractal systems. For example, reductive science can accurately show that it's warmer in June than January even though we cannot predict the temperature three years out on any given day. If you only care about what is happening on the average, to compare therapies or understand general behaviors of the universe, then reductionism gives useful results. Reductionism works in a general predictive sense. But if "you really want to understand the systems, it is anything but reductionism"!