A Formulation of Why People Dogmatically Embrace Reductionism

Reductionism is essential to life. Your brain runs on roughly 20 watts, an incredibly small energy budget. Loosely speaking, this is possible because of reductionism and the centers in the brain that allow us to focus on things of (causal) interest. That is how magic tricks work. But the sheer fact that magic tricks DO work should alert us to something very important indeed: reductionism is not the best friend of all reality, because its accuracy depends on the choice of what to generalize about. People define reductionism in a few ways, and the most dogmatic version insists that a system is nothing more than the sum of its parts, which flies in the face of nonlinear dynamics.

Reductionist models do not contain reality; they necessarily describe a subset, and often a very small subset, of the parameters that actually affect a real situation. Ranked from greatest complexity to least, reality comes first, then first-principles models like Finite Element Methods (FEM) or NetLogo agent-based models, and then other reductionist models such as Newtonian dynamics. Reality is quantized and obviously contains all information; FEM models are quantized but leave out information, either by ignoring parameters or by reducing scope; and more classical reductionist models are continuous and linear, calculations simple enough to be handled by a calculator. Note that reality and FEM are not broken down into smaller and smaller pieces the way reductionism holds up as its ideal, the way calculus realizes it. Further, FEM can accommodate nonlinear dynamics where other reductionist models cannot, and nonlinear behavior is usually more complex, requiring more bits of information to describe. A subset of nonlinear dynamics is chaos, which is not predictable. A good understanding of anything should allow us to imagine the solutions in some way, to get to know them, to relate to them. We may be able to write down systems of differential equations in the language of calculus, but that does not mean we necessarily understand the solutions. Other reductionist models like Newtonian dynamics deal with linear differential equations that can be expressed in neat packages of calculus symbols and interpreted fairly well just by looking at them. It takes fewer bits of information to express these solutions, just as the behavior of these systems is simpler. They are subject to Laplace's demon (predictable). They are easily hard-deterministic.
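To make the claim about chaos concrete, here is a minimal sketch using the logistic map, a standard textbook example of a nonlinear system (not something discussed in this essay, just an illustration): two starting points that differ by one part in a billion end up on completely different trajectories, which is why chaotic systems resist the neat closed-form solutions of linear models.

```python
# Logistic map x_{n+1} = r * x_n * (1 - x_n); chaotic at r = 4.0.
def max_divergence(x0, eps, r=4.0, steps=60):
    """Track how far two nearly identical starting points drift apart."""
    x, y = x0, x0 + eps
    worst = 0.0
    for _ in range(steps):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
        worst = max(worst, abs(x - y))
    return worst

# After a handful of steps the difference is still microscopic;
# after sixty steps the trajectories have completely decorrelated.
print(max_divergence(0.2, 1e-9, steps=5))
print(max_divergence(0.2, 1e-9, steps=60))
```

The update rule is a single multiplication, yet no finite list of "parts" of the equation tells you where the trajectory will be far in the future without actually running it.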

A good example of a first-principles model like FEM is the simulation for which Stanford's Kavli Institute for Particle Astrophysics and Cosmology paid millions of dollars in supercomputer time, in order to explain plasma shear effects. They defined small cells of plasma and small steps in time, recorded the state of each cell and how it affected the cells around it, dealt with boundary conditions, and evolved the cells in time following known laws of physics. Out popped plasma shear effects, a phenomenon that had long been unexplainable by typical reductionist models. This simulation is different because we cannot solve for the system at a given time, even using differential equations; we have to estimate the values of the solution at that point. The system is still the kind Laplace's demon was imagined for, yet it calls hard determinism into question, and that is what is important, since the demon has been disproved. After all, given the order of magnitude of the cells, the time steps, and the energy required to run the calculation on the supercomputer, there wasn't much difference between the plasma itself and the simulation. Which stirs up one of my favorite questions: can the universe be modeled perfectly by anything smaller than the universe? I believe the answer is no. That is a decidedly systems way of thinking. It is understood to be anti-reductionist because it embraces emergence, which is really just a way of saying that organization in physical systems is not vertical; that derivative phenomena sometimes feed back onto their own constituent parts and change the parts themselves; that, when scaled up, systems gain new properties that do not have a one-to-one or even finite correspondence with their old properties. The simulation showed that plasma shear effects, like the spread of forest fires or the yearly fluctuation of salmon runs, are an emergent property. It showed that more stringently reductionist physical models could not account for shear effects, but less reductionist discrete models with more complexity could. All of this emphasizes the importance of large-scale phenomena and their interconnectedness with the very nature of their own elements.
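The cell-and-timestep scheme described above can be sketched in miniature. This is emphatically not the Kavli plasma code; it is a generic one-dimensional explicit grid update (a heat-diffusion stencil, an assumption chosen for brevity) that shows the shape of such simulations: each cell's next value depends only on its immediate neighbors, stepped forward in small increments of time, with fixed boundary conditions.

```python
# A toy explicit grid update in the spirit of the cell-and-timestep scheme:
# each cell's next value depends only on its immediate neighbors.
def step(u, alpha=0.1):
    # Endpoints held fixed: simple boundary conditions.
    # Each interior cell moves toward the average of its neighbors.
    return [u[0]] + [
        u[i] + alpha * (u[i - 1] - 2 * u[i] + u[i + 1])
        for i in range(1, len(u) - 1)
    ] + [u[-1]]

u = [0.0] * 5 + [1.0] + [0.0] * 5   # a spike of "heat" mid-grid
for _ in range(100):
    u = step(u)
# The spike has spread outward into its neighbors.
```

Nothing in the code solves an equation in closed form; the state at a given time is only reachable by marching through every intermediate step, which is exactly the sense in which such simulations "estimate" rather than "solve."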

The bottom line is that reductionist models work in certain contexts, and we should be thankful for that; otherwise we would have to be the universe to know the universe. Instead, we have brains, and we are individuals, because reductionist models work in a certain regime of phenomena and allow us to operate on such a small amount of energy. Reductionist models have a certain domain of validity, just like quantum mechanics and classical mechanics, even though reductionism and emergence are at odds in much the same way. We can know all the important details about a cat without being the size and state of the universe that produced cats. It's a wondrous thing we should not write off. But reductionist models, by their very definition, always leave something out. And those little left-out bits, like downward causation, often conspire to create huge unexpected phenomena. This is supported by the ability of agent-based models to predict previously unpredictable phenomena, and these models are now gaining popularity in ecological studies for that reason. Check out NetLogo or the Game of Life and see what I mean. You won't find any calculus-type equations in that code.
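To see the point about the Game of Life, here is a complete implementation of Conway's rules as a short sketch (the set-of-coordinates representation is my own choice, not from any particular NetLogo model): the whole "physics" is neighbor counting, with no equations at all, yet gliders, oscillators, and other unforeseen structures emerge from it.

```python
from collections import Counter

# Conway's Game of Life: pure neighbor-counting rules, no calculus anywhere.
def life_step(live):
    """live: a set of (x, y) coordinates of live cells; returns next gen."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next step if it has exactly 3 live neighbors,
    # or exactly 2 and was already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

blinker = {(0, 0), (1, 0), (2, 0)}  # oscillates with period 2
```

Running `life_step` twice on the blinker returns the original pattern; nothing in those dozen lines mentions oscillation, yet the behavior is there, which is the essay's point about emergence in miniature.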

However, some people are die-hard reductionists. They hold that nothing in the world is needed except reductionism: no need to add other ways of looking at the world, because reductionism can always give a complete picture. Personally, I think we should be skeptical of people who use words like always and never (although I acknowledge it is a hard habit to break, as I sometimes catch myself doing it, and obviously sometimes these words actually are appropriate). Reductionism cannot give a complete picture, just as classical mechanics cannot give a complete picture. The major questions in life are residuals of reductionist models: do we have free will (the ability to have done something differently than we did)? What is consciousness? How does life begin? These questions are not answerable at present by reductionist models, and it should be telling that they all involve the failure to propose a mechanics connecting complex phenomena to the reductionist models of their constituent parts! Dogmatic reductionists will tell you, though, that it is just a matter of time before reductionism answers these questions. I think this is foolhardy. Consciousness is a conspiracy of all the little interactions we left out of our reductionist models. Those parts come together and conspire, if you will, to create consciousness, unbeknownst to our reductionist models. The little pieces we thought were not important enough to include, or which were too complex to include, got together and did something we didn't expect. Well, the universe is like that, always throwing us little surprises. This surprise is called emergence, and you can read about it on Sean Carroll's website, or in any frontier paper on ecology. A lot of people are talking about emergence and emergent properties nowadays, and the concept is a core of systems science.
It’s decidedly anti-reductionist, and reductionists reject the need to recognize emergence, often calling it pseudoscience.

I think some people cling so dogmatically to reductionism because they are having a hard time adjusting to life, as even I am. Life is brief, and we are ignorant, with little hope of ever finding a grand understanding. We are small in comparison with the universe and the questions we ask. Our lives are not long enough to know the things we so desperately want to know. That is why we are transhumanists. We hope to fix this. But it is still heartbreaking that those of us working for a better world will never even know what ends up happening as a result of our endeavors. All life on the planet could be extinguished the moment after we died, just by chance; all evidence of our efforts, all records of knowledge and research gained, could be burned up, never to be known by anyone ever again, for all we know. Or a glorious society could be built, one with less suffering, more autonomy, more reason, more diversity, and more exploration as a result of our efforts. It's like having the power go out right at the end of a viscerally engaging film, never to come back on. No resolution.

Now, that is hard to handle. So some people cling to reductionism in hopes that they can find some equation, some set of equations, or even some method to stick in their pockets and cling to. Some way of knowing the truth, of having a little memoir of the truth, like a lock of hair from a long-lost love tucked away in a secret box of keepsakes. Something to keep you going in the worst of times, to inspire you in times of utter complacency. I understand why they want that. I want that. As Lee Smolin wrote in Time Reborn, “I used to believe my job as a theoretical physicist was to find that formula [for all the future]; I now see my faith in its existence as more mysticism than science.”

If the box contains a diamond, I desire to believe that the box contains a diamond, and if not…