Complexity Theory, Nonlinear Dynamic Spatial Systems
System complexity refers to structural and dynamic behavioral characteristics of systems that are considered qualitatively different from the systems studied in classical science. Classical systems consist of either small numbers of interacting elements, with interactions that are linear, so that small changes in the inputs lead to small changes in the outputs; or very large numbers of elements (in the millions or more) where statistical approaches can account for system level behavior. These constraints on the systems studied in classical science result from the intractability of the mathematics required to represent systems of intermediate size with nonlinear interactions among elements. In recent decades, complex systems consisting of tens to millions of elements, with intricately structured, usually localized interactions among elements have been widely studied, predominantly using computational models. The resulting complexity theory posits a number of structural and dynamic properties of complex systems, such as emergence, self organization, path dependence, and tipping points. These concepts have risen to prominence in many disciplines in recent decades, popularized not only by scientific interest, but by the accompanying promise of a ‘holistic’ science, with broad cross disciplinary appeal. It remains unclear whether the implicit promise of complexity theory to bring greater unity across physical and social sciences can be realized.
The roots of complexity theory are widespread. It can be regarded as a recasting of general systems theory as advocated by von Bertalanffy in the late 1960s. Other important jumping off points include artificial intelligence, where the work of Herbert Simon has been influential, and game theory in economics. Another source is non mainstream mathematical biology early in the twentieth century, particularly D’Arcy Thompson’s On Growth and Form. More recently, various schools of thought in physics, such as Prigogine’s nonlinear dynamical perspective on the second law of thermodynamics, and Haken’s synergetics have also been widely taken up.
The origins of now dominant North American conceptualizations of complexity lie in optimistic commentary after World War II about the prospect of tackling social and economic problems by coordinated scientific effort. In the wake of the achievements of the scientific expertise brought to bear on wartime challenges (leading to the development of not only nuclear energy and the jet engine, but significantly in the current context, the first electronic computers), it was suggested that the social and economic problems facing the post war world might be tackled by similar scientific efforts. This view was most cogently expressed in a prescient piece entitled ‘Science and complexity’ by Warren Weaver, who distinguished between the simple small systems of classical science, systems of ‘disorganized complexity’ such as gases, which are amenable to statistical explanation, and systems exhibiting ‘organized complexity’ consisting of many elements interacting in nonlinear ways. Weaver referred directly to the importance of computers in providing the capability to develop understanding of such complex systems, and suggested that operations research, systems theory, and cybernetics would be critical to the study of complex systems. Much of Weaver’s vision has come to pass under the rubric of ‘complexity science’, particularly following the widespread availability of desktop computers accessible to most working scientists.
Structure and Dynamics
A complex system is a ‘medium sized’ collection of elements interacting with one another in more or less simple ways. In this context, ‘medium sized’ ranges widely, from more than two to hundreds of thousands, even millions of elements. Elements are themselves simple, or are treated as such at the current level of analysis. Interactions between elements are not restricted to particular functional forms: they may be governed by linear equations, or by nonlinear relationships in which the strength of an interaction and its effects depend in turn on other features of the interacting elements. In most accounts interactions among elements are localized, often but not necessarily spatially, in highly structured ways, so that every element does not interact with every other at all times. This means that interactions among elements vary depending on location.
Systems considered ‘complex’ range widely from ant hills and brains, to cities, the Internet, ecosystems, and the global economy. Such systems have been studied using computational simulations, and the resulting knowledge about typical structural and dynamic characteristics may be loosely termed complexity theory. It is a central tenet of complexity theory that in spite of their diversity, complex systems share common characteristics and behaviors due to their similar structural features, and that these characteristics apply regardless of the particular details of the complex system in question. Thus, structural similarities between the Internet, the global economy, and the human brain in terms of interactions between the constituent elements of each – whether web servers, corporations, firms and governments, or neurons – matter as much as, or even more than, obvious differences between those elements for understanding the systems and their behaviors.
An important theme in studies of complex systems is thus the structure of patterns of interactions among system elements. Structure may be measured and understood in terms of a network of relations among elements, which can be analyzed using graph theory. Some specific forms of structural organization are considered more interesting due to their prevalence in complex systems. Hierarchically organized systems consisting of one or more levels of nested subsystems may be considered complex. This perspective is prevalent in biological and engineering sciences where functional separation of elements into cooperating subsystems at a number of levels is a key feature. Biologically, interest focuses on how functional subsystems within an organism are organized to interact with one another. From an engineering perspective, the interest is in what principles for the design of robust engineered systems can be learned from naturally occurring hierarchical systems. John Holland and Herbert Simon’s work in particular has focused attention on hierarchically organized systems. Other network structures have been the subject of considerable recent interest. First is the apparently paradoxical ‘small world’, which combines strong localized interactions with efficient communication from any element in the structure to any other. The term derives from the ‘small world’ phenomenon in social settings where, anecdotally, it is commonplace for apparently unrelated people to discover that they have acquaintances in common. The operation of diffusion or similar processes is different on a small world network from a random or spatially structured network, and this has been a subject of considerable research.
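The small world effect can be demonstrated with a deliberately minimal sketch in plain Python (the sizes and the number of shortcuts below are illustrative choices, not drawn from any published model): a ring lattice in which each node interacts only with near neighbors has long average path lengths, but adding a handful of random long-range links collapses them while leaving most interactions local.

```python
import random
from collections import deque

def ring_lattice(n, k):
    """Ring of n nodes, each linked to its k nearest neighbours on each side."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k + 1):
            j = (i + d) % n
            adj[i].add(j)
            adj[j].add(i)
    return adj

def add_shortcuts(adj, m, seed=42):
    """Add m random long-range links: the 'small world' shortcuts."""
    rng = random.Random(seed)
    nodes = list(adj)
    added = 0
    while added < m:
        a, b = rng.sample(nodes, 2)
        if b not in adj[a]:
            adj[a].add(b)
            adj[b].add(a)
            added += 1
    return adj

def mean_path_length(adj):
    """Average shortest-path length over all node pairs, via BFS from each node."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

# Average path length before and after adding just ten shortcuts to 200 nodes
L_lattice = mean_path_length(ring_lattice(200, 2))
L_rewired = mean_path_length(add_shortcuts(ring_lattice(200, 2), 10))
print(round(L_lattice, 1), round(L_rewired, 1))
```

The few shortcuts barely change the local neighborhood structure, yet they shorten average separation across the whole network, which is why diffusion behaves so differently on such networks.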
Second is the scale free network where the numbers of neighbors associated with each node in a network conform to a power law distribution, meaning that there are few highly connected nodes and many peripheral or weakly connected nodes. Of particular interest is the nature of growth processes that lead to networks of this type, and the robustness of such systems when some links are lost. This research is closely related to work on city or firm size distributions. Arguably, the structure of the interaction networks that might emerge between geographically embedded entities in a complex system is underexplored relative to these network structures. How geographically embedded networks (e.g., national or regional scale producers and distributors) and unembedded networks (e.g., global finance and multinational corporate networks) might interact is also a neglected area.
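The growth process most often invoked for scale free networks, preferential attachment, can be sketched in a few lines (the network size and the one-link-per-new-node rule here are illustrative assumptions, not taken from any particular study): each new node links to an existing node with probability proportional to its degree, and a few hubs emerge while most nodes remain weakly connected.

```python
import random

def preferential_attachment(n, seed=1):
    """Grow a network of n nodes in which each new node links to one existing
    node chosen with probability proportional to degree ('rich get richer')."""
    rng = random.Random(seed)
    degree = [1, 1]      # two initial nodes joined by one link
    targets = [0, 1]     # node i appears in this list degree[i] times
    for new in range(2, n):
        t = rng.choice(targets)     # degree-proportional choice of target
        degree[t] += 1
        degree.append(1)
        targets.extend([t, new])
    return degree

deg = preferential_attachment(5000)
print("max degree:", max(deg), "median degree:", sorted(deg)[len(deg) // 2])
```

The heavy-tailed outcome (a maximum degree orders of magnitude above the median) is the signature of the power law distribution, and removing a random node rarely matters while removing a hub can be disruptive.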
Consideration of processes yielding different interaction structures leads naturally to the concepts of emergence and self organization. Emergence is the simple idea that complex aggregate outcomes may result from simple interactions among the elements of a complex system. In spite of its apparent simplicity, emergence is problematic. Ontologically, it presents difficulties because it is not at all clear that any mystical notion of emergence is necessary if all that it entails is the existence of aggregate system level properties based on the states of constituent elements. Emergence remains problematic where it implies more than this, particularly when it is taken to mean that aggregate level properties are not predictable based on an understanding of element level properties and interactions. In this case, emergence is an observer dependent concept, which is not robust because, having once observed an ‘emergent’ aggregate outcome in a system, it can no longer be unexpected. It may be more useful in such cases to think of emergence as a rhetorical device that enables claims about the explanatory efficacy of the interactions proposed for low level elements, based on their ability to account for particular aggregate outcomes.
An alternative understanding of emergence is a broader one that defines a series of ontological levels of reality, such as subatomic particles, atoms, molecules, proteins, and so on, where elements observed at one level emerge from interactions among elements at the next level down. Each level can be related to a scientific discipline. This scheme recognizes different disciplines as a way of dealing with the different types of entities and interactions observed at each level. While this approach is attractive for its neatness, at least three problems arise. First, this notion of emergence is rather different from more mundane notions expressed in terms of aggregate outcomes, so that care is required in using this framework to justify claims for the more workaday concept. Second, it is ironic that some would advocate such an understanding while at the same time asserting, as complexity theory does, the efficacy of abstract models of interacting elements as providing explanations of any transition from one level to another, independent of the disciplinary context. If the point is that entities and interactions different in kind emerge at each level, then surely different models are required for each level. Third, and most critically for human geography, the unambiguous series of levels proposed by such a scheme, if it is true of any phenomena (a moot point), appears untenable when we consider the ‘levels’ of existence in the social world. In the social realm, individual elements (human beings?) do not simply interact to produce a single higher level (society?) but instead act and interact in many diverse and interconnected settings (households, families, nations, ethnicities, pressure groups, workplaces, churches, unions, etc.) across a range of social and spatial scales.
Self organization imposes on top of emergence the idea that the emergent outcomes a system exhibits are often structured so that the system can be considered to be organized into a number of interacting subsystems. Self organization, partly because it demands more of the world than emergence, may be a more useful concept. The idea that it is possible for functionally distinct subsystems to develop in a complex system seems inherently geographic. Self organization presupposes spatial differentiation in a system and in a geographical context evokes concepts such as regions, or the functional areas of a city.
A further concept often overlaid on to self organization is adaptivity. A self organized system that is also considered adaptive is termed a complex adaptive system. Such systems are widely considered to be the paradigmatic objects of interest in complexity theory. How systems adapt to their environment in order to maintain their continuing function is a question with relevance in biology (where it is applied to organisms), but equally to social sciences where it may be applied to ‘systems’ such as language, society itself, or indeed any collective social structure.
Chaotic Dynamics and the Edge of Chaos
A chaotic system is one that, while appearing to behave randomly, is actually governed by simple deterministic rules, so that if its initial state were known precisely then all future states could be accurately predicted. However, because very small differences in the initial conditions of a system can lead to arbitrarily large differences in state at some subsequent time, it is impossible for the initial state of a system to be known precisely enough, and long term prediction is impossible. This is referred to as sensitive dependence on initial conditions. While many simple mathematical systems have been shown to exhibit chaos, it is unclear that such dynamics are a good model of any sociospatial system, nor are chaotic dynamics a necessary feature of complex systems. Indeed, from some perspectives, much of the interest in complex systems is in their avoidance of chaos: how does the relative order exhibited by complex systems emerge, in spite of the disordered environments in which they exist? An interesting notion relating complexity and chaos to one another is the idea that complexity exists at the ‘edge of chaos’. This idea suggests that the inherent dynamism of chaotic systems arising from their capacity for rapid change is a feature that makes for greater adaptivity in complex systems, and that for this reason many complex systems are poised on the ‘edge of chaos’. Related, and highly ambitious, claims are made for ‘self organized criticality’, which suggests that many systems spontaneously evolve to a point where they are prone to rapid change at a very large range of scales. While both concepts have garnered considerable attention, particularly self organized criticality, it is not clear that either framework can be readily attached to sociospatial systems.
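Sensitive dependence is easy to demonstrate with the logistic map, a standard one dimensional chaotic system (the starting values below are arbitrary illustrative choices): two trajectories that begin a millionth apart soon differ by an amount comparable to the full range of the system.

```python
def logistic(x, r=4.0):
    """One step of the logistic map x -> r*x*(1-x), which is chaotic at r = 4."""
    return r * x * (1.0 - x)

# Two almost identical initial conditions, differing by one part in a million
a, b = 0.300000, 0.300001
gap = []
for _ in range(50):
    a, b = logistic(a), logistic(b)
    gap.append(abs(a - b))

# The separation starts microscopic but grows roughly exponentially
print(gap[0], max(gap))
```

Because the divergence is exponential, no finite measurement precision on the initial state buys more than a short extension of the prediction horizon, which is the practical content of the impossibility of long term prediction.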
Positive Feedback and Path Dependence
A different type of sensitive dependence on initial conditions arises from the twin concepts of positive feedback and path dependence. Study of the dynamics of classical systems emphasizes negative feedback processes which tend to restore a system to equilibrium. Potentially disruptive positive feedback effects are of more interest in complexity science. A positive feedback is any effect that tends to reinforce a system’s changes along whatever development path it is already pursuing. The most familiar example in geography is urban growth in capitalist economies where rapid growth tends to create the conditions for further rapid growth. Positive feedback effects create the potential for small initial changes to lead to rapid large scale changes in a system. Small initial competitive advantages for a region may, through positive feedback, lead to enormous differences in outcomes between regions. Silicon Valley’s dominance of turn of the century information technologies provides a canonical example.
Positive feedback leads directly to the notion of path dependence, and in parallel, or equivalently in a spatially distributed system, place dependence. If small effects can be magnified via positive feedback into large differences, then the particular sequence of historical and geographical events across a system (or at localized places within it) is important in developing explanations. This perspective emphasizes the importance of historical and geographical accounts of complex systems, and draws attention to the importance of contingency in explanation.
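A minimal Pólya urn sketch captures both ideas (the two-region setup and parameter values are purely illustrative, not a calibrated model): each new firm locates in a region with probability proportional to the firms already there, so runs differing only in their random early history lock in very different regional shares.

```python
import random

def locate_firms(n, seed):
    """Polya-urn sketch of increasing returns: each of n new firms chooses one
    of two regions with probability proportional to the firms already there."""
    rng = random.Random(seed)
    a, b = 1, 1                           # one seed firm in each region
    for _ in range(n):
        if rng.random() < a / (a + b):    # degree of attraction grows with size
            a += 1
        else:
            b += 1
    return a / (a + b)                    # region A's final share of firms

# Five different histories: identical rules, divergent long-run outcomes
shares = [round(locate_firms(2000, seed), 2) for seed in range(5)]
print(shares)
```

The divergent final shares under identical rules are the point: explanation of the observed outcome requires the particular historical sequence of events, not just the governing process.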
Tipping Points and Critical Mass
Another frequently identified feature of complex dynamics is the existence of tipping points. A geographical example is urban residential change, where slow shifts in a neighborhood’s demographics may reach a tipping point as a result of a critical mass of (say) immigrants entering the neighborhood. Beyond the tipping point rapid neighborhood change occurs, leading quickly to complete demographic transition. The same concept might apply equally to incoming gentrifiers. Similar dynamics are posited for adoption of innovations such as mobile phones or fax machines. In most cases, such dynamics are driven by positive feedback. In a social context, whether and when tipping points are perceived by the society or group in question is a key issue.
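A Granovetter-style threshold model is a standard way to make the critical mass idea concrete (the threshold values below are an illustrative construction, not empirical data): each agent adopts once the share of adopters reaches its personal threshold, and a tiny change in one agent's threshold separates a complete cascade from no cascade at all.

```python
def cascade(thresholds):
    """Final adopter share in a Granovetter-style threshold model: an agent
    adopts once the current share of adopters reaches its own threshold."""
    n = len(thresholds)
    adopters = sum(1 for t in thresholds if t <= 0)   # unconditional adopters
    while True:
        new = sum(1 for t in thresholds if t <= adopters / n)
        if new == adopters:                           # no further adoptions
            return adopters / n
        adopters = new

n = 100
uniform = [i / n for i in range(n)]   # thresholds 0.00, 0.01, ..., 0.99
blocked = list(uniform)
blocked[1] = 0.05                     # one slightly more reluctant early adopter
print(cascade(uniform), cascade(blocked))   # → 1.0 0.01
```

With evenly spread thresholds each adoption triggers exactly the next agent and the whole population tips; making a single early agent slightly more reluctant breaks the chain, so the aggregate outcome is wildly disproportionate to the individual-level difference.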
Issues Arising from Structure and Dynamics
Some general points can be drawn from the foregoing discussion:
- The ambitious generality of claims made in complexity theory, together with a degree of intuitive obviousness (concepts such as path dependence and emergence seem familiar), explains why complexity science has successfully established currency in so many disciplines. Complexity theory moves systems based science into a position where social phenomena may now be a viable object of study using these methods, because the models used are no longer restricted to idealized and unrealistic behaviors (such as the market equilibria of economics). The downside, of course, is that the very generality of the concepts promises a great deal, but struggles to deliver much in the way of satisfying explanations of anything very specific. When brought into any particular context, the explanatory power of ideas from complexity theory often proves rather limited.
- Often, in the application of complexity theory to social phenomena, emergence of aggregate behaviors from individual level interactions is emphasized over any downward causation effects operating in the opposite direction. While downward effects are implicitly considered insofar as individual behavior is affected by the localized characteristics of a system, in general, the overall aggregate state of the system is not a factor in the behavior of individuals. For sociospatial systems this seems a limitation.
- A focus on the relationships between the (often spatial) structure of a system and its dynamics is a core theme in geography that finds strong support from complexity theory. A closely related observation is the inherently spatial nature of complex systems. The localized context of the actions of individual elements in a system is recognized as a key aspect of how system behavior unfolds, and spatial differentiation of a system is both expected and an outcome of interest. The self organization and path (or place) dependence concepts in particular, bring these aspects of complex systems to the fore.
The Role of Computer Models and Simulation
The foregoing discussion summarizes key phenomenological findings of complexity theory. At least as interesting as these ideas are their epistemological underpinnings. As has been noted, systems theory, computer science, artificial intelligence, and cybernetics are important sources of complexity science, and in keeping with this ancestry, computer models are the dominant research practice in complexity science. This approach involves development of a computational model of the phenomenon (i.e., the complex system) of interest, and subsequent exploration of its behavior. Findings from the model are mapped back onto the real world as claims about possible behaviors of the original phenomenon. This bald description hides a very wide range of actual research. Two particular model types have been of particular interest in explicitly spatial settings: cellular automata and agent based models. More generally, models vary in style from highly abstract and simplified thought experiments, to elaborate, realistic simulations.
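As a concrete illustration of the cellular automaton style (a toy rule invented for illustration, far simpler than the transition rules of real urban models): each cell on a grid updates from purely local neighborhood information, and an aggregate growth pattern emerges from repeated application of the rule.

```python
def step(grid):
    """One tick of a toy growth cellular automaton: an undeveloped cell becomes
    developed if any of its four orthogonal neighbours is developed."""
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            if grid[i][j] == 0:
                for x, y in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                    if 0 <= x < n and 0 <= y < n and grid[x][y] == 1:
                        new[i][j] = 1
                        break
    return new

n = 21
grid = [[0] * n for _ in range(n)]
grid[10][10] = 1                      # a single developed seed cell
for _ in range(5):
    grid = step(grid)

# The rule is purely local, yet a coherent aggregate footprint emerges:
# after t ticks, every cell within Manhattan distance t is developed
developed = sum(map(sum, grid))
print(developed)   # → 61 (a diamond of radius 5: 2*25 + 2*5 + 1 cells)
```

Agent based models follow the same template but attach the state and decision rules to mobile agents rather than to fixed cells; in both cases the modeler's choices about neighborhoods, rules, and what to omit are exactly the representational decisions discussed below.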
At the simple abstract end of the scale, models may simply be a way to sketch out broad characteristics of a system, often in terms of the features discussed above. At the realistic end of the scale, research may be more geared toward exploring scenarios in contexts closely related to policy formulation, with little concern for the broad system characteristics. Arguably, this approach is not complexity science so much as a pragmatic response to increasingly complicated decision making problems. Many aspects of model based research are not scrutinized closely from a philosophical perspective, nor are they always explicitly reported in publications. Precisely how a researcher chooses to represent a system in a model is critical to all subsequent findings. Given the theoretical expectation of at least two ‘levels’ in a system, one the basic elements and their interactions, the other the system level itself, definition of elements and their interactions is crucial. Equally, which elements are omitted is critical. By their nature, these decisions are difficult to report adequately, when the focus is on the model that was developed, rather than on the infinite number of other models that might have been developed. Thus the representational adequacy of models is not often considered.
One strategy for sidestepping this problem is to argue that models are simply thought experiments or explorations of the implications of (usually) simplified assumptions about how the world is. While this position is intellectually defensible, in the absence of empirically grounded follow up, it is hard to justify such research as an end in itself. Such follow up remains rare, in part because of weak links between researchers using model based methods, and more mainstream social scientists. Without supportive empirically grounded research, assessing how well a model works as a representation of the phenomena it is purported to represent remains a matter for debate and argument.
In the absence of verification, many modelers turn to some form of validation. Validation can take two forms. Either a model is checked for internal consistency, or its outputs are validated against empirical data. While the former is necessary, consistency checking cannot establish the representational accuracy of a model. Validation against empirical data appears a more promising procedure, but also fails to establish the representational merits of a model: a failure to reproduce observed phenomena may suggest that a model is a poor representation, but the converse is not true, and in any case producing a good fit to observational data does not rule out the possibility that an entirely different model could produce a similarly good fit.
These methodological issues represent significant challenges to any research program substantially founded on computational modeling. Partly as a result, finding appropriate ways to report research based on computational models has become an important (if minor) theme in geography and elsewhere, with some research focusing on how model results are interpreted and deployed in large scale decision making (e.g., in the context of climate change), and others suggesting that it is important for model based efforts to be connected to other approaches more fully.
Impacts in Human Geography
Examples of self conscious complexity based research in human geography are unusual. Complexity science has risen to prominence during a period when human geography has been dominated by the humanistic, structuralist/Marxist, and post structural/cultural perspectives, whose accompanying philosophical and methodological prescriptions are unsympathetic to the computational modeling approach.
The only field where complexity has had a direct impact is at the interface between urban planning and geography where researchers have developed model based approaches to understanding urban growth, sprawl, transportation, and other related phenomena. This area is fertile ground for the development of new variants of both the cellular automata and agent based model types. Another focus of attention is in studies of land use change where model based approaches, occasionally informed by complexity theory, have become commonplace. Another connection exists to the extent that world systems theory is influential in some areas of human geography, and is also consistent with complexity theory. The impact of these strands of work on the discipline more widely has been limited, and other disciplines (particularly sociology, economics, planning, and architecture) have often been a more receptive audience. The geographic information science community has been a receptive audience within geography, and methodological issues relating to the integration of models with geospatial data and to the challenges of exploring the large data sets generated by models are of particular interest.
The evident isomorphism of many key themes in geography with those of complexity theory, together with the broad disciplinary reach of complexity, has more recently attracted greater interest from the geographical mainstream. For some, the isomorphism provides a new language to draw on in theorizing the global economy, a language that may prove more durable than earlier, rather loose analogies with chaos and fractals. Attention has also focused on the historical geography of complexity and on how its metaphors have successfully crossed disciplinary boundaries to permeate not only science but the culture more widely.
The overlaps and connections between post World War II technocratic science, complexity as a new scientific paradigm, new age perspectives on complexity as ‘holistic’ science, and the currency of complexity as a buzzword in business certainly suggest that complexity itself is a suitable topic for economic or cultural geographical study, given the popularity of actor network theory and other approaches from the sociology of scientific knowledge tradition. Whether or not complexity science simply represents a rebranding of social physics, its logical conclusion, or something more significant also remains an open question.
Such issues are critical to the long term potential of complexity as a bridge between quantitative and qualitative traditions in human geography. While complexity science brings to the fore a recognition of the intricately structured interconnectedness of the world in a way that previous quantitative perspectives arguably have not, properly understood, it also holds a place for concrete, empirically grounded research, both qualitative and quantitative. In addition, the role of computational models opens up avenues for dialog, at the same time that it makes for an explicit acknowledgement of the act of representation in quantitative work. These are aspects of complexity theory where human geography appears well placed to make significant contributions.