Measuring Complexity

Some things clearly are more complicated than others. If so, it ought to be possible to provide a quantitative measure of complexity, so that the extent to which 'this' is more complicated than 'that' can be expressed precisely. Many authors believe that no measure of complexity has yet been discovered that has any useful meaning: no such measure, they argue, has ever been related to any other variable in a meaningful way, and it is therefore useless for theory building. There are, however, some measures that attract intellectual curiosity, and perhaps in the future some significance will be derived from them. The most cited is the Solomonoff–Chaitin measure of algorithmic complexity. On this measure, the degree of complexity of an observed behavior is given by the length of the shortest algorithm that can express it; formally, it is defined as the length of the shortest program for a universal Turing machine (the most basic theoretical computer) that prints the given output.

To return to the numerical examples: a single decimal digit such as 4 or 6 requires log2(10), about 3.322 bits, of information to be expressed. Suppose the behavior of some system is represented by the string .333333333333333333333…, one hundred decimal digits long. To express 100 such digits verbatim takes 332.2 bits. But we can also express the number as n = 1/3, which requires only 2 × 3.322 bits, plus a small length of program for carrying out the division. The number is clearly not very complex. By contrast, a string of 100 digits from a chaotic system cannot be summarized by an algorithm shorter than the 100 digits themselves, and it therefore has 'complete' complexity. Some fairly simple reasoning leads to the conclusion that the proportion of numbers that are not complex in this sense is vanishingly small, implying that stable, simply describable solutions are extremely rare.
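The contrast can be illustrated computationally. True Solomonoff–Chaitin complexity is uncomputable, so the sketch below, a hypothetical illustration of my own rather than anything from the text, uses zlib compression as a crude proxy for "length of the shortest description": it reports the raw information content of 100 unconstrained decimal digits, then compares the compressed size of the repetitive digits of 1/3 with that of digits generated by the chaotic logistic map.

    # A rough computational illustration (assumptions of my own, not from
    # the text): Kolmogorov complexity is uncomputable, so zlib compression
    # is used only as a crude proxy for "shortest possible description".
    import math
    import zlib

    DIGITS = 100

    # Raw information content: each unconstrained decimal digit carries
    # log2(10) ~ 3.322 bits, so 100 digits take ~332.2 bits to list verbatim.
    print("bits per decimal digit:", math.log2(10))           # ~3.322
    print("bits for 100 digits:   ", DIGITS * math.log2(10))  # ~332.2

    # (a) The first 100 decimal digits of 1/3 -- pure repetition.
    repetitive = "3" * DIGITS

    # (b) 100 digits taken from the chaotic logistic map x -> 4x(1 - x),
    #     with an arbitrary seed chosen for this sketch.
    x = 0.123456789
    chaotic = ""
    for _ in range(DIGITS):
        x = 4.0 * x * (1.0 - x)
        chaotic += str(int(x * 10) % 10)  # one digit per iteration

    # The repetitive string collapses to a few bytes; the chaotic string
    # barely shrinks at all -- the proxy for 'complete' complexity.
    print("compressed 1/3 digits:    ", len(zlib.compress(repetitive.encode())))
    print("compressed chaotic digits:", len(zlib.compress(chaotic.encode())))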

In statistical terms, if a sample of numbers is drawn at random from a perfectly normal population, we again need not know all the numbers; we can use an algorithm to specify the distribution in terms of a few parameters such as the mean and standard deviation, without knowing the details of every individual value. Apparently complexity has been reduced. The use of statistical analysis in social science has thus been based on the idea that it is the aggregates, not the individuals, that matter. However, no two samples of any nontrivial size will contain exactly the same numbers (hence the idea of the standard error of the sample mean). Now suppose the behavior of the whole is not an aggregate but the emergent effect of the individuals interacting with one another. Then the initial conditions in the two cases are clearly different, even if the sample characteristics are the same, and if we are interested in emergent dynamics the statistical approach will not work.

As an example, suppose people have an individual and static propensity to riot. A statistical analysis of a sample of football fans will then quite easily work out the likelihood of a riot of intensity x. But if the propensity of each individual to riot is changed by the behavior of a neighbor (and, worse still, the change forms a positive feedback loop), then what matters most is being able to predict a single trigger point somewhere. Such a trigger point may be the simple adjacency of two people who have poor self-control and competing loyalties. Then there are further adjacencies to consider: whether those surrounding the flash-point pair are likely to suppress or aggravate the situation. Even the same crowd of people may yield completely different behavior depending on who is adjacent to whom.
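To make the adjacency point concrete, the sketch below is a hypothetical toy model of my own, not a model specified in the text: 200 simulated individuals are given fixed propensities to riot, rioting spreads only between adjacent neighbours above a contagion threshold, and the same crowd is then shuffled into different arrangements. The statistical summary (the mean propensity) is identical every time, but the size of the cascade is not.

    # A minimal sketch of the crowd example, under assumptions of my own
    # (a line-shaped crowd, fixed propensities, simple contagion thresholds).
    import random
    import statistics

    def riot_size(crowd, trigger=0.95, contagion=0.60):
        """Number of rioters once the neighbour-to-neighbour cascade settles."""
        rioting = [p > trigger for p in crowd]  # spontaneous flash points
        changed = True
        while changed:
            changed = False
            for i, p in enumerate(crowd):
                if rioting[i] or p <= contagion:
                    continue
                left = rioting[i - 1] if i > 0 else False
                right = rioting[i + 1] if i < len(crowd) - 1 else False
                if left or right:  # positive feedback from an adjacent rioter
                    rioting[i] = True
                    changed = True
        return sum(rioting)

    random.seed(1)
    crowd = [random.random() for _ in range(200)]  # one fixed set of individuals

    # The aggregate description never changes...
    print("mean propensity:", round(statistics.mean(crowd), 3))

    # ...but the emergent outcome depends on who is adjacent to whom.
    for arrangement in range(5):
        random.shuffle(crowd)
        print("arrangement", arrangement, "rioters:", riot_size(crowd))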

The four qualities of complexity

There is really only one possible conclusion from this about the state of our knowledge of our own societies: they are completely complex, and all models of them introduce simplifications that render the models' predictions unreliable.