Complexity Theory: a history
“I would not give a fig for the simplicity this side of complexity, but I would give my life for the simplicity on the other side of complexity.”
Oliver Wendell Holmes
Although the bulk of the development has occurred in the last 30 years or so, there were earlier thinkers whose work foreshadowed the understandings we are now developing.
Back in the late 1880s, King Oscar II of Sweden announced a mathematical competition offering a prize to anyone who could solve the three-body problem. When two celestial bodies are in motion, with one in orbit around the other, we simply apply Newton’s laws of motion to understand and predict their movement. When a third body is added, so that one body orbits a central body and the third orbits the second, as in the case of the moon, earth and sun, calculating where the bodies will be becomes far more complicated. Newton’s laws have been sufficient to get humans to the moon, but a fully accurate solution to the three-body problem is not as straightforward.
In fact, Henri Poincaré (1854–1912) was able to show that the three-body problem has no general solution of the kind the competition sought. As soon as the earth moves, it changes the distances between the other bodies, which alters the gravitational forces; all three bodies interact with each other in ways too complicated to capture in a simple formula. If we cannot even calculate the motions of three bodies, how can we possibly predict the outcome of the systems we see around us every day, with millions, trillions or more of intensely interacting parts?
In the 1960s the meteorologist Edward Lorenz was using an early computer to run a simulation of the weather. One day, wanting to re-examine part of a run without waiting for the whole thing, he restarted it from numbers on a printout, which had been rounded to fewer decimal places than the machine carried internally. He expected the rounding to have little or no effect on the final results. Surprisingly, the final results were dramatically different: small changes in the state of a system can cause major changes in its eventual output (sensitivity to initial conditions). We had been used to thinking that large changes need large forces; Lorenz found that small forces could have large effects. He also found that a kind of pattern emerged from what originally appeared to be random changes, and he called this pattern an “attractor”. Lorenz presented a paper at a session of the American Association for the Advancement of Science in December 1972 under the title “Predictability: Does the Flap of a Butterfly’s Wings in Brazil Set Off a Tornado in Texas?”.
Lorenz admitted that while one butterfly’s flapping wings could set off a tornado in the Lone Star state, another butterfly’s flapping wings could prevent one. The plot of the attractor he found, with its two wing-like lobes, has become known as the butterfly attractor.
If small changes in the initial state of a complex system can drastically alter the final outcome, then long-term weather prediction is impossible as there is no way to perfectly measure and describe the weather at any one point in time. There is always a further level of accuracy to be measured.
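Lorenz’s observation is easy to reproduce. The sketch below (a minimal illustration, not his original weather model) integrates his three famous equations twice, from starting points a millionth apart, and records how far the two runs drift from each other:

```python
# A minimal sketch: forward-Euler integration of the Lorenz equations with
# his classic parameter values. Two runs start a millionth apart and end up
# wildly different.

def lorenz_step(x, y, z, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system by one forward-Euler step."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

def max_divergence(eps, steps=40000):
    """Largest gap in x between two runs whose starts differ by eps."""
    a = (1.0, 1.0, 1.0)
    b = (1.0 + eps, 1.0, 1.0)
    gap = 0.0
    for _ in range(steps):
        a = lorenz_step(*a)
        b = lorenz_step(*b)
        gap = max(gap, abs(a[0] - b[0]))
    return gap
```

With `eps = 0` the two runs stay identical forever; with `eps = 1e-6` the gap grows until the trajectories are effectively unrelated, which is exactly why an unmeasurably small error ruins a long-range forecast.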
In the early 1970s, Robert May was studying how insect populations varied according to levels of food supply, and came up with results similar to those of Lorenz. He found that at critical values of the growth rate, the population stopped settling to a single level and instead oscillated with twice the previous period. After several of these period-doubling cycles the system became unpredictable. Period doubling proved to be an important concept in many branches of complexity.
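The standard model behind May’s results is the logistic map, x → r·x·(1 − x). A short sketch can exhibit the period-doubling cascade directly: iterate the map, discard the transient, and count how many distinct values the settled orbit cycles through at a given growth rate r:

```python
# Logistic map x -> r*x*(1-x). After discarding transients, count how many
# distinct values the orbit cycles through at growth rate r.

def cycle_length(r, x=0.2, transient=2000, sample=64, tol=1e-6):
    for _ in range(transient):        # let the orbit settle
        x = r * x * (1 - x)
    orbit = []
    for _ in range(sample):           # record the settled orbit
        x = r * x * (1 - x)
        orbit.append(x)
    distinct = []
    for v in orbit:
        if all(abs(v - d) > tol for d in distinct):
            distinct.append(v)
    return len(distinct)
```

At r = 2.9 the population settles to a single value; at r = 3.2 it alternates between two; at r = 3.5 between four; by r = 3.9 the orbit never repeats, i.e. it is chaotic.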
In 1971, David Ruelle and Floris Takens described strange attractors (also known as chaotic attractors). They mapped these mathematically in a phase space, where each dimension corresponds to one variable of the system. This enabled accurate mapping of a system and its dynamics.
Around 1980, Benoit Mandelbrot used computer graphics to explore what he called fractals, and in that year he first plotted the set now named after him, the Mandelbrot Set. A fractal is a self-similar shape: it repeats the same basic shape at smaller scales within the same structure. Look at a fern, or a head of broccoli, and you will find that the sub-branches have the same basic shape as the whole, and that the sub-branches off those sub-branches do too.
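The rule defining the Mandelbrot Set is strikingly simple given the endless detail it produces. A point c of the complex plane belongs to the set if the iteration z → z² + c, started from z = 0, never escapes to infinity; a minimal membership test looks like this:

```python
# Membership test for the Mandelbrot set: c is in the set if the iteration
# z -> z*z + c (starting from z = 0) stays bounded.

def in_mandelbrot(c, max_iter=500):
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:    # once |z| exceeds 2 the orbit is guaranteed to escape
            return False
    return True
```

Colouring each pixel of the plane by how quickly its orbit escapes is what yields the familiar pictures; zooming in anywhere on the boundary reveals copies of the same structure at every scale.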
Ilya Prigogine worked in the area of dissipative systems, work for which he won the 1977 Nobel Prize in Chemistry. A dissipative system is one that maintains an ongoing shape or identity because a flow of energy through the system is maintained. The human body is a dissipative system because it is maintained by a number of energy flows, such as food, water, air, and even environmental stimuli and cognitive processes. Dissipative systems operate far from equilibrium, not at an equilibrium point as had been thought. Prigogine found chemical dissipative systems that exhibit strange behaviour, such as a reaction mixture changing colour rhythmically. How do the molecules in the mix know when it is time to change colour?
In the late 1970s Mitchell Feigenbaum was studying period doubling. He showed that it is a common route by which order breaks down into chaos, and he found a recurring ratio in the spacing of successive doublings, now called the Feigenbaum constant. Remarkably, the same ratio turns up in very different systems; period-doubling cascades have even been observed in the heart rhythms that precede some cardiac arrhythmias.
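Feigenbaum’s ratio can be checked against May’s logistic map. Using the standard published parameter values at which the map’s period doubles (taken from the literature, not computed here), the gaps between successive doublings shrink by a nearly constant factor:

```python
# Parameter values r_n at which the logistic map's period doubles (period 2
# appears at r_1, period 4 at r_2, and so on). Standard literature values.
r = [3.000000, 3.449490, 3.544090, 3.564407, 3.568759]

# Ratios of successive gaps; they approach Feigenbaum's constant 4.6692...
ratios = [(r[n] - r[n - 1]) / (r[n + 1] - r[n]) for n in range(1, len(r) - 1)]
```

The ratios come out near 4.75, 4.66 and 4.67, converging on 4.6692…, and the same limit appears for a whole class of maps, which is what makes the constant universal.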
René Thom developed Catastrophe Theory, which describes how a complex system bifurcates, or branches. The system reaches a critical point through period doubling and must either collapse into chaos or self organise to a new level of complexity. Thom examined the lapse into chaos and the conditions under which it happens.
The Santa Fe Institute was founded in 1984 as a private and independent research and education centre, which has remained in the forefront of research into chaos and complexity. Two of its prominent members were Chris Langton and Stuart Kauffman. Chris Langton did research regarding the Edge of Chaos, the point where systems have enough order to maintain an ongoing identity, while also having enough chaos to allow for novelty and learning. At the Edge of Chaos self organisation and emergence can appear.
Stuart Kauffman worked on connected networks of automata made up of small computer programmes. When they interacted in the network, some unexpected results were seen. Often the results were reasonably predictable, but at critical levels the systems optimised their effectiveness through co-adaptation. His work has had particular importance in the field of evolutionary biology.
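A toy version of the networks Kauffman studied fits in a few lines (the sizes and wiring below are illustrative, not his actual experiments). Each node reads two randomly chosen inputs through a fixed random Boolean rule; because the network is deterministic and finite, every trajectory must eventually fall into a repeating cycle, an attractor:

```python
import random

# A small random Boolean network in the spirit of Kauffman's automata:
# N nodes, each reading K = 2 randomly chosen inputs through a fixed random
# Boolean rule table; the whole network updates in parallel.
random.seed(1)
N, K = 8, 2
inputs = [[random.randrange(N) for _ in range(K)] for _ in range(N)]
tables = [[random.randrange(2) for _ in range(2 ** K)] for _ in range(N)]

def step(state):
    """Parallel update: each node applies its rule to its two inputs."""
    return tuple(tables[i][2 * state[inputs[i][0]] + state[inputs[i][1]]]
                 for i in range(N))

def attractor_length(state):
    """Iterate until a state repeats; the cycle it enters is an attractor."""
    seen = {}
    t = 0
    while state not in seen:
        seen[state] = t
        state = step(state)
        t += 1
    return t - seen[state]
```

Kauffman’s striking finding was statistical: for K = 2 such networks tend to settle into a small number of short attractors, orderly behaviour emerging from random wiring.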
Prigogine emphasizes that the mechanistic and deterministic Newtonian world-view – emphasizing stability, order, uniformity, equilibrium and linear relationships between or within closed systems – is being replaced by a new paradigm. This new paradigm is more in line with today’s accelerated social change and stresses disorder, instability, diversity, disequilibrium, non-linear relationships between open systems and temporality.
Backed by Kauffman’s work on co-evolution, Wolfram’s cellular automata studies, and Bak’s investigations of self-organized criticality, Langton (1990) has proposed the general thesis that complex systems emerge and maintain on the edge of chaos, the narrow domain between frozen constancy and chaotic turbulence. The “edge of chaos” idea is another step towards an elusive general definition of complexity.
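The cellular automata Wolfram studied make the edge-of-chaos idea concrete. In his elementary automata, each cell’s next state is a fixed function of itself and its two neighbours, and an 8-bit rule number encodes that function; rule 110, sketched below, is a classic example poised between frozen order and turbulence:

```python
# An elementary cellular automaton in Wolfram's numbering: the 8-bit rule
# number encodes the next state for each of the 8 possible neighbourhoods.

def eca_step(cells, rule=110):
    """One synchronous update of a row of cells with wrap-around edges."""
    n = len(cells)
    nxt = []
    for i in range(n):
        # pack left, centre, right neighbours into a 3-bit pattern
        pattern = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        nxt.append((rule >> pattern) & 1)
    return nxt

row = [0] * 31
row[15] = 1                   # a single live cell in the middle
for _ in range(10):
    row = eca_step(row)
```

Printing each `row` as a line of blanks and blocks shows the characteristic growing triangle of structures; rule 0, by contrast, freezes everything immediately, and other rules dissolve into featureless noise.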
“Good order results spontaneously when things are let alone”
Chuang Tzu (c. 369–286 BC)
Complex adaptive systems
One of the principal foci of complexity study is complex adaptive systems.
Holland considers the main features of complex adaptive systems to be:
- many agents acting in parallel in an environment produced by their interactions with the other agents in the system; because each agent is constantly acting and reacting to what the other agents are doing, nothing in its environment is fixed
- control is highly dispersed, therefore any coherent behaviour there might be in the system has to arise from competition and co-operation among the agents themselves
- many levels of organization: agents at one level serving as building blocks for the next level up
- constant rearrangement of the building blocks as a result of learning, experience, evolution, adaptation
- all anticipate the future to some degree, making attempts at prediction on the basis of models of their environment
- all have niches they can exploit, filling up one niche often opening up new ones that can be exploited
- they never reach equilibrium
- they can improve on some dimensions, but never optimize
- the richness of the interactions within the system allows the system as a whole to undergo spontaneous self-organization
Holland J. H. (1992) Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control and Artificial Intelligence, MIT Press, Cambridge MA.
Kauffman S. A. (1993) The Origins of Order: Self-Organization and Selection in Evolution, Oxford University Press, New York.
Mandelbrot B. B. (1983) The Fractal Geometry of Nature. Freeman, New York.
Prigogine, I. and Stengers, I. (1984) Order Out of Chaos, Bantam Books, New York.
Langton C. G. (1990) Computation at the Edge of Chaos: Phase Transitions and Emergent Computation, Physica D, 42 (1-3), pp. 12-37.
Sources for this historical summary include: