Computational Human Geography

Introduction

Origins of the Notion

Computational human geography is part of a broad spectrum of fields in both the natural and the social sciences that have adopted the qualifier ‘computational’ to describe their methodological and epistemological approach to their subject matter. Thus we have, on the natural science side, computational physics, chemistry, biology, and so on, and even computational mathematics. Similarly, in the social sciences, the terms computational economics, political science, anthropology, linguistics, etc., have appeared in the literature in recent decades, sometimes also as titles of professional journals (e.g., Computational Economics, Computational Linguistics). All these relatively new fields are part of computational natural or social science, or computational science more generally. The term computational science is sometimes used interchangeably with scientific computing, though the latter may have a narrower, more applied meaning. The rapidly increasing number of journals with related titles (e.g., Scientific Computing World, SIAM Journal on Scientific Computing) and of computational science degree programs, from BS to PhD, at major universities worldwide attests to the growing interest in computational approaches across the sciences. In geography, the terms Geocomputation (the title of a successful international conference series), geoinformatics, or geomatics (the names of several engineering departments specializing in geospatial computing), and Computational Geography (the name of a research center at the University of Leeds) are closely related to computational human geography, though they are not restricted to the social science side of the discipline.

Computational Science and the Theory of Computation

Computational science has its origins in the first digital computers that became available for use by scientists in the mid-to-late 1940s. Even in their early, primitive incarnations these machines could tackle the massive calculations needed to solve complicated problems much more efficiently than human calculators could. But scientific computation proper began when the use of computers moved beyond mere number crunching to helping derive approximate numerical solutions to problems that were too large or difficult to solve analytically. In academia these computational approaches to problem solving were at first considered very much second best to traditional methods, especially in the formal sciences such as mathematics, theoretical physics, and economics. A turning point in the status of computation as a legitimate scientific approach was reached in the 1970s when the four-color conjecture, one of the famous unsolved problems in mathematics, was proved; or rather, it was demonstrated to be true by two mathematicians who used a computer to perform an exhaustive search through some 1500 configurations representing all possible classes of four-color maps. This caused quite a stir in the discipline. At first many mathematicians refused to accept the computer-aided solution as a valid proof, but eventually the work was verified independently, while other similarly derived proofs started appearing in prestigious mathematical journals. Yet what boosted computation to the prominent position it occupies today is above all the rise of the complex systems paradigm in the natural and social sciences, which occurred around the same time. Research issues framed as complex systems problems are almost by definition not analytically tractable and require the development and implementation of numerical algorithms for their solution. More importantly, complex systems are typically dynamic systems undergoing some process, and algorithms themselves describe processes. Thus, algorithms can be used not only to solve problems for which mathematical formulations already exist, but also to formulate complex system problems from scratch, as complex processes.

The power of algorithms in representing all manner of process is established in the formal theory of computation, which is due in large part to the British mathematician Alan Turing. The theory of computation is a branch of modern (discrete) mathematics, which has its roots in set theory. A central idea in Turing’s theory is that of an effective procedure, that is, a set of unambiguous step-by-step instructions for carrying out a task. Algorithms embody the notion of effective procedure. A still unproved conjecture known as the Church–Turing thesis states that any problem that can be represented and solved in any other way can also be represented as an effective procedure and solved by a corresponding algorithm. In other words, all of science – past, present, and future – may be recast as computational science. A well-known recent exponent of this thesis is the mathematician Stephen Wolfram, who claims that the computational structures known as cellular automata (CA) can provide models for all of science. These are part of automata theory, one of three branches of the theory of computation. Automata are formally specified classes of effective procedures with well-defined properties. Their dynamic behavior is governed by sets of if–then rules that operate on the automaton’s current and possible states. CA consist of one-, two-, or higher-dimensional arrays of automata, each of which updates its current state by also taking into account the states of neighboring automata in the array.
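
To make the idea concrete, the following minimal Python sketch implements a one-dimensional elementary CA in which each cell updates its binary state from its own state and those of its two immediate neighbors; the choice of Wolfram’s rule 30, the grid size, and the wrap-around neighborhood are illustrative assumptions rather than features of any particular geographical model.

```python
# A minimal sketch of a one-dimensional cellular automaton (CA).
# Each cell is a finite automaton whose next state depends on its own
# current state and those of its two immediate neighbors, encoded here
# as one of Wolfram's 256 elementary "rules" (rule 30 is illustrative).

def step(cells, rule=30):
    """Apply one synchronous update to a row of binary cells."""
    n = len(cells)
    new = [0] * n
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        # The three neighbor states form a 3-bit index into the rule table.
        index = (left << 2) | (centre << 1) | right
        new[i] = (rule >> index) & 1
    return new

# Start from a single 'on' cell and run a few generations.
row = [0] * 31
row[15] = 1
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```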

Both automata and CA have become an integral part of computational human geography (see below), though their formal origins in the theory of computation – and thus their analytic power – are not always recognized. CA are of particular interest to computational human geography because they are inherently spatial. They are used extensively to represent urban, regional, and environmental dynamics and other phenomena of spatial change. Automata also underlie agent based models (ABM), which have become increasingly popular in recent years in geography and in the social sciences more generally. Both these very important classes of applications involve the development of simulation models, that is, algorithms embodying specific computational processes that are assumed to mimic the empirical processes of interest. Beyond these process oriented applications there is a wide array of other methods and techniques that use computation to generate problem solving procedures for statistical, graphical, and other forms of data processing. The next section discusses some of the very diverse uses of computation in human geography.

Computation in Human Geography

Historical Overview

Sometimes computation in (human) geography is thought to be practically synonymous with the use of geographic information systems (GIS), but in actual fact the history of the field’s involvement with computation is both much longer and broader than its association with GIS. From the 1950s on, the rise of the quantitative revolution offered fertile ground for experimentation with computational techniques in the context of spatial analysis, optimization problems, and early models of spatial behavior and decision. Hägerstrand’s pioneering work in spatial diffusion demonstrated two major notions related to computation: (1) the idea that spatial diffusion is a process that may be described as an effective procedure, and (2) the utility of a computational stochastic sampling technique from the class known as Monte Carlo simulations. Hägerstrand’s work thus touched on both major aspects of scientific computation: as direct process representation, and as data handling methodology.
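
The following Python sketch conveys, in highly simplified form, the flavor of such a Monte Carlo diffusion simulation; the grid size, the contact ‘reach’, and the uniform random sampling are assumptions made for illustration and do not reconstruct Hägerstrand’s actual mean information field.

```python
# A highly simplified sketch in the spirit of Monte Carlo spatial diffusion
# (not Hägerstrand's original model): adopters on a grid "tell" randomly
# chosen nearby cells about an innovation, and contacted cells adopt.
import random

random.seed(1)
SIZE = 25
adopters = {(SIZE // 2, SIZE // 2)}          # a single initial adopter

def random_contact(cell, reach=2):
    """Draw a contact location from a small square 'information field'."""
    x, y = cell
    return (x + random.randint(-reach, reach),
            y + random.randint(-reach, reach))

for generation in range(10):
    new = set()
    for cell in adopters:
        contact = random_contact(cell)
        if 0 <= contact[0] < SIZE and 0 <= contact[1] < SIZE:
            new.add(contact)
    adopters |= new
    print(f"generation {generation}: {len(adopters)} adopters")
```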

Human geography had another significant early encounter with computation in the form of the first generation of urban models that appeared in the 1960s. In practical terms these models turned out to be overambitious, expensive, heavily criticized and ridiculed failures, especially in the USA, but the seeds for the computational solution of large empirical problems were planted in the discipline at that time. These early computer models consisted of very large systems of simultaneous equations that would have been extremely difficult to solve manually. Approximate solutions were generated by iterative algorithms that assigned values to the different model parameters until an acceptable point of convergence toward equilibrium was reached. The failure of these models had less to do with computation per se than with certain immature conceptualizations of cities as static equilibrium systems.
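
The iterative logic described above can be illustrated with a toy fixed-point routine of the following kind; the two ‘activity totals’ and their coefficients are invented for the example and stand in for the hundreds of interdependent equations of an actual first-generation urban model.

```python
# A toy illustration (not any specific 1960s model) of iterative solution:
# start from a guess, repeatedly update the unknowns, and stop when
# successive iterates change by less than a tolerance.

def fixed_point(update, x0, tol=1e-9, max_iter=1000):
    """Generic fixed-point iteration x <- update(x) until convergence."""
    x = x0
    for k in range(max_iter):
        x_new = update(x)
        if max(abs(a - b) for a, b in zip(x_new, x)) < tol:
            return x_new, k
        x = x_new
    raise RuntimeError("did not converge")

# Example: two mutually dependent activity totals (purely illustrative numbers).
def update(x):
    housing, employment = x
    return (100 + 0.3 * employment,      # housing responds to employment
            50 + 0.4 * housing)          # employment responds to housing

solution, iterations = fixed_point(update, (0.0, 0.0))
print(solution, "after", iterations, "iterations")
```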

Partly in reaction to such mechanistic representations of human phenomena as aggregate systems of statistical regularities, behavioral geography emerged as a movement emphasizing the perceiving, thinking individual who makes and carries out decisions in geographic space. Work in that area has been quite diverse, ranging from humanistic perspectives on values, attachments, and emotions relating to place, to theoretical models of spatial decision inspired by microeconomics or game theory, to empirical observations of individuals or groups making housing, transportation, shopping, land use, and other choices in urban or rural environments. In addition to introducing human geography to experimental psychology and cognitive science, this latter thrust in behavioral geography was particularly amenable to approaches from another major development to come out of Turing’s theory of computation: artificial intelligence (AI). Computational models of housing search, of orientation and route finding, of directional biases in spatial behavior, and so on, began appearing in the literature by the late 1970s. These AI-inspired models emphasized the unobservable cognitive processes thought to underlie spatial decision and behavior, including the generation of cognitive maps.

Beyond the modeling of spatial processes, and often in conjunction with it, is another major use of computation in human geography. This is the more standard use of the computer as a data handling and numerical problem solving device, providing solutions to mathematical and statistical problems such as those found in spatial analysis and spatial optimization. Such problems often can be solved only approximately in practice because an exact solution would take far too long to calculate, even on the fastest computers. In this case heuristics are used, that is, computational shortcuts that can produce good enough solutions. Heuristics are also used by default in the numerous instances of problems in human geography that have no formal expression but are directly translated from the conceptual to the computational stage. These forms of computing as numerical problem solving also have a long and distinguished history in the field. Methods for searching large solution spaces, for model calibration, for function optimization, for spatial cluster detection, for dealing with data uncertainty, and many others were routinely used by quantitative geographers as early as the 1970s. By the 1980s the term automated geography appeared in the literature, along with a number of sophisticated pattern recognition, classification, optimization, and other heuristics derived from AI (e.g., neural network analysis, genetic algorithms, simulated annealing, and self-organizing maps). Interest in these techniques has grown along with the spread of computation in human geography.
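
As one illustration of such heuristics, the sketch below applies simulated annealing to a toy facility-location problem; the random point data, the number of facilities, and the cooling schedule are all arbitrary choices made for the example and are not drawn from any published application.

```python
# A minimal simulated-annealing heuristic for a toy facility-location
# problem: choose p sites from a set of candidate points so as to roughly
# minimise total distance from demand points to their nearest chosen site.
import math, random

random.seed(2)
demand = [(random.random(), random.random()) for _ in range(60)]
candidates = [(random.random(), random.random()) for _ in range(20)]
p = 3

def cost(sites):
    return sum(min(math.dist(d, s) for s in sites) for d in demand)

current = random.sample(candidates, p)
best, best_cost = current, cost(current)
temperature = 1.0
for _ in range(2000):
    # Propose a small change: swap one chosen site for an unused candidate.
    proposal = current[:]
    proposal[random.randrange(p)] = random.choice(
        [c for c in candidates if c not in current])
    delta = cost(proposal) - cost(current)
    # Accept improvements always, worse moves with a temperature-dependent chance.
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        current = proposal
        if cost(current) < best_cost:
            best, best_cost = current, cost(current)
    temperature *= 0.995
print("best total distance:", round(best_cost, 3))
```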

While not directly part of computational human geography, computer cartography and GIS have affected the field profoundly. Computer-aided cartography appeared as soon as computers were able to produce some primitive graphics, and around the same time the ancestor of GIS took shape in the guise of the computer-based cadastre for Canada. Both fields quickly transcended their origins as the computerized versions of traditional ways of doing things to move in unanticipated, vibrant new directions. Computer cartography soon extended into animated cartography, digital terrain modeling and three-dimensional (3-D) visualization, the dynamic mapping of abstract flows (e.g., of migration or money flows), the integration of field and remotely sensed data (also from urban areas), and so on, and even took care of stylistic problems such as the correct placement of lettering on maps. As for GIS, its explosive growth into a technology affecting a host of domains well beyond geography – human or other – has been amply documented. While being the most prominent piece of the computational revolution in (human) geography, GIS is clearly not the only one.

The Present – and Beyond

All these early involvements of human geography with computation continue to flourish today. There are by now few efforts in human geography that are not touched by some development or other coming out of the computational turn in the discipline – be it a computer generated map, the visualization of an environment, a simple GIS analysis, or a tool for spatial data exploration. Current work in computational human geography may be briefly discussed under three headings: computation as spatial process representation, computation as problem solving procedure and for visualization, and mobile and pervasive computation.

Models describing spatial processes have by now largely replaced static models of spatial organization in many areas of quantitative human geography, and in particular in the study of urban and regional growth and change. Considering the intuitive understanding of these phenomena as being essentially dynamic and complex, much of the work in these areas since the late 1980s has looked to the complex systems paradigm for methodologies as well as for conceptual inspiration. Of the complex systems models available in the physicomathematical and computational literature, CA stood out as being directly applicable to geographical problems. Indeed, CA have several desirable properties: they are inherently spatial; because they generate complex macroscale patterns as the result of microscale interactions, they may be seen as metaphors for urban and regional organization resulting from a myriad of interdependent decisions by individual actors; they occasionally produce surprising outcomes that stimulate new thinking about the real systems modeled; they are relatively easy to program and implement; and they produce striking dynamic visualizations. CA-based models are used in human geography in two different ways: as simulations of actual urban, regional, and other land use change processes; and as instructive, intuition-sharpening metaphors, providing opportunities for novel thought experiments illuminating the properties of spatial processes viewed as complex processes. In the first case, the rigid structure of formal CA must be substantially modified in order to provide credible empirical models of spatial change. While these generalized CA models can be calibrated to fit actual data very closely, many of their interesting dynamic properties established in theoretical computer science are thereby lost. In the second case, in which simple CA models are developed as instructive metaphors, there is little attempt to make them look realistic, the focus being instead on isolating and exploring specific properties of interest. One of the earliest and best known examples of this second kind was developed not by a human geographer but by a Nobel laureate economist. Schelling’s model of spatial segregation demonstrates that even mild individual preferences for being close to one’s ‘kind’ may lead to starkly segregated spatial patterns.

As for the processes of spatial decision and behavior, the AI-inspired, custom-built models of earlier behavioral geography have been largely superseded by more generic types of agent-based models. Agents are classes of automata simulating decision-making entities that can construct representations of their environment, interact with and act on their environment and other agents, and pursue goals. Simpler forms of agents have more limited sets of properties. Agents may be spatially mobile or not, and they may represent individuals, groups, institutional decision makers, etc. Interest in this class of models is growing rapidly in human geography, fueled both by the ability to represent decision and behavior with considerable realism and by the availability of software that greatly facilitates implementation. Increasingly, CA and ABM are being integrated into more complex structures, with the former representing processes in the spatial environment, and the latter the decision-making entities affecting, and being affected by, these processes.
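
A minimal agent-based sketch in the spirit of Schelling’s segregation model, mentioned above, can be written in a few dozen lines of Python; the grid size, the share of empty cells, and the 30% tolerance threshold used here are illustrative assumptions only.

```python
# A minimal sketch of a Schelling-style segregation model: two groups of
# agents on a grid, each moving to a random empty cell whenever fewer than
# a threshold fraction of its neighbours are of its own group.
import random

random.seed(3)
SIZE, EMPTY_SHARE, THRESHOLD = 20, 0.2, 0.3
cells = {}
for x in range(SIZE):
    for y in range(SIZE):
        r = random.random()
        cells[(x, y)] = None if r < EMPTY_SHARE else ("A" if r < 0.6 else "B")

def unhappy(pos):
    """An occupied cell is unhappy if too few neighbours share its group."""
    group = cells[pos]
    if group is None:
        return False
    neigh = [cells.get((pos[0] + dx, pos[1] + dy))
             for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
    same = sum(1 for n in neigh if n == group)
    occupied = sum(1 for n in neigh if n is not None)
    return occupied > 0 and same / occupied < THRESHOLD

for step in range(50):
    movers = [p for p in cells if unhappy(p)]
    empties = [p for p, v in cells.items() if v is None]
    for p in movers:
        if not empties:
            break
        target = empties.pop(random.randrange(len(empties)))
        cells[target], cells[p] = cells[p], None
        empties.append(p)
print("unhappy agents remaining:", sum(unhappy(p) for p in cells))
```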

The use of computing for data processing and numerical problem solving has also exploded in parts of human geography, in parallel with the trend for computation to become ever more accessible and less technically demanding. Most of the techniques of earlier periods are still being used and refined today, with an emphasis on increasingly powerful and easy-to-use software packages as well as good-quality data, both of which are now often available free to researchers over the web. In conjunction with increasingly sophisticated possibilities for data visualization there is also growing interest in exploratory data analysis and data mining, which are classes of techniques that facilitate the discovery of interesting patterns in large datasets. It is now standard for software to come with interactive graphic interfaces that blur the distinction between confirmatory and exploratory analysis. This fusion of computational data handling, analysis, and visualization is a very powerful development because it capitalizes on both the deliberative and the intuitive capacities of the human mind, creating fruitful right brain–left brain synergies. Visualization is also used extensively to communicate ideas among researchers and especially between researchers and the public, where the proverbial picture that is worth a thousand words can help break down the notorious barriers separating experts and nonexperts.

Next to these evolutionary computational developments there are also some fairly revolutionary ones that are beginning to affect parts of human geography. These derive from a number of new digital technologies that support distributed, wearable, and mobile computing. Part of this broader trend toward pervasive computing involves the convergence of existing technologies (e.g., cell phones and wireless Internet, or Global Positioning Systems (GPS) and GIS), while another part involves early applications of genuinely novel technologies, such as nanotechnology. Examples of new research areas in human geography that are beginning to capitalize on these developments are location-based services (LBS) and augmented environments. LBS allow subjects on the move to receive GIS-generated maps on handheld devices such as cell phones that display the location of desired nearby services (commercial or other), or that show good routes from the subject’s current position to a destination. Augmented environments are urban (usually) environments outfitted with built-in sensors, wireless transmitters, and receivers that can provide suitably equipped users with local information that would normally not be directly accessible to them. Combinations of sonic, visual, or virtual reality (VR) interfaces may be used. Applications can range from narratives describing historical buildings or districts in a subject’s immediate environment, to spontaneous commentaries by visitors in public places that subsequent visitors can access, to auditory signals allowing blind subjects to construct effective mental maps of their surroundings. These research areas are at the frontier of computational human geography and may eventually require a radical rethinking of traditional notions of space and place and of the relationship between humans and their environment.
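
The core query behind an LBS application can be sketched schematically as follows; the coordinates, place names, and the use of simple great-circle distance are assumptions for illustration, whereas operational systems would rely on spatial indexes and network routing.

```python
# A schematic example of the kind of query an LBS application performs:
# given the user's current position, return the nearest services from a
# small catalogue. All data below are fabricated for illustration.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

services = [("pharmacy", 51.507, -0.127), ("cafe", 51.510, -0.120),
            ("bank", 51.500, -0.140)]
user = (51.505, -0.125)   # e.g., from a GPS fix

nearest = sorted(services, key=lambda s: haversine_km(user[0], user[1], s[1], s[2]))
for name, lat, lon in nearest[:2]:
    print(name, round(haversine_km(user[0], user[1], lat, lon), 2), "km away")
```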

Issues and Critiques

Over the years computational human geography has had its strongly committed, sometimes strident champions as well as its highly skeptical, often dismissive critics. Critiques of computational human geography originate both from within the post-positivist perspective of geography as spatial science and from the humanistic and social theory perspectives. The internal critiques parallel those of all computational (social) science and tend to center around epistemological and methodological issues, whereas the external critiques are more likely to contest the ontological assumptions underlying many uses of computation in human geography. Critiques of computational approaches in human geography should be distinguished from those, also found in the geographical literature, that examine the role of computers in society more generally. These focus on issues of empowerment, access, privacy, surveillance, digital divides, and other societal dilemmas, often in the context of discussions of GIS and its potential uses and misuses.

Internal Critiques

The most widely accepted definition of computational science as the use of computer-based methods and techniques to solve scientific problems does not do justice to the extremely significant role of computation not just in solving but also in representing problems. This is an important distinction because there is little controversy surrounding the problem solving, numerical computing aspect of computational human geography. On occasion techniques may be used incorrectly or inappropriately, but beyond that, no one misses the days of the slide rule. The representational aspect of computation, on the other hand, raises a number of important issues that have yet to be properly addressed, let alone resolved.

Computation has often been discussed as the third way of doing science, lying somewhere between theory development and experimentation. This implies a new approach to knowledge production and the need for a new kind of research methodology, different from either the mostly deductive mode of theoretical work or the mostly inductive mode of experimental science. That third way centers on the construction of complex simulated worlds, within which experiments may be run that would have been difficult or impossible to conduct in the real world. Examples from computational human geography would be process-oriented models of people interacting with their environment, often expressed as coupled agent-based and CA models. These may simulate farmer communities dynamically affecting land cover through adaptive land use decisions, households seeking suitable housing in changing urban neighborhoods, or trip makers responding to various congestion pricing schemes on transportation networks. Such models are built to incorporate considerable empirical and intuitive understanding of the complex processes of interest, and when calibrated to actual data they are often presented as suitable for policy analysis.
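
A bare-bones sketch of such a coupling might look like the following, with farmer agents depleting the fertility of the cells they cultivate and a simple CA rule governing regeneration; every rule and parameter here is an assumption chosen only to show how the agent step and the CA step alternate.

```python
# A bare-bones sketch of coupling an agent-based model to a CA: farmer
# agents choose whether to cultivate the cell they occupy, and a CA rule
# then regenerates land cover in response. All numbers are illustrative.
import random

random.seed(4)
SIZE = 15
fertility = [[1.0 for _ in range(SIZE)] for _ in range(SIZE)]
farmers = [[random.randrange(SIZE), random.randrange(SIZE)] for _ in range(20)]

for year in range(30):
    # Agent step: each farmer cultivates if the local cell is fertile enough,
    # otherwise moves to a random neighbouring cell.
    for f in farmers:
        x, y = f
        if fertility[x][y] > 0.4:
            fertility[x][y] -= 0.2          # cultivation depletes the cell
        else:
            f[0] = min(SIZE - 1, max(0, x + random.choice([-1, 0, 1])))
            f[1] = min(SIZE - 1, max(0, y + random.choice([-1, 0, 1])))
    # CA step: each cell regenerates slowly, faster when neighbours are fertile.
    new = [[0.0] * SIZE for _ in range(SIZE)]
    for x in range(SIZE):
        for y in range(SIZE):
            neigh = [fertility[(x + dx) % SIZE][(y + dy) % SIZE]
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0)]
            new[x][y] = min(1.0, fertility[x][y] + 0.02 * sum(n > 0.7 for n in neigh))
    fertility = new
print("mean fertility:", round(sum(map(sum, fertility)) / SIZE ** 2, 3))
```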

The epistemological problem is that models of complex open systems with deep uncertainties, as social systems nearly always are, cannot be used for prediction. Predictive models belong in the traditional scientific paradigm of theoretical closed system descriptions supported by experimental evidence, or at least of well established empirical generalizations such as human geography’s spatial interaction models. Because this fact is not always appreciated, many computational modelers understand progress in the field to mean building simulations that are increasingly detailed and realistic, though increased detail can actually decrease any predictive value such models may have by introducing additional ‘noise’. The meaning of model validation in this new world of computational process models thus remains open, and so does the question of how to derive valid insights that may be useful both for theory development and for policy guidance.

Technically, the main reason why models of complex open systems cannot yield reliable predictions is that many (in some cases infinitely many) different models can provide acceptable fits to the data; in other words, any particular model is but one realization out of a large space of potential models, few or none of which may be correct by whatever definition of the term. This issue is sometimes addressed with Monte Carlo simulations that generate many versions of a particular model by systematically varying the parameters; model outcomes are then considered reliable to the extent that they are reproduced by large numbers of different parameter sets. This methodology may take care of parametric uncertainty but cannot address structural uncertainty, that is, the degree of confidence one may have in the structure of the model. Researchers in both the social and the natural sciences have suggested methods for generating large numbers of different model structures in a manner analogous to generating versions of the same model through Monte Carlo simulations. The idea is that exploring the properties of entire classes (ensembles) of models, even relatively simple ones, may yield more robust insights into complex processes than the study of even the most realistic-looking individual model. There is fertile ground here for the more theoretically inclined computational human geographers to make contributions that will also benefit researchers in many other fields.
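
The Monte Carlo treatment of parametric uncertainty described above can be sketched as follows; the ‘model’ is a deliberately trivial placeholder and the parameter ranges are assumptions, the point being only to show how outcomes are summarized over an ensemble of runs rather than read off a single calibrated run.

```python
# A schematic illustration of parametric uncertainty analysis: run the same
# (toy) model many times with parameters drawn at random, and examine the
# distribution of outcomes rather than a single run.
import random, statistics

random.seed(5)

def toy_growth_model(growth_rate, carrying_capacity, steps=50):
    """A stand-in for a calibrated simulation; returns a final population."""
    pop = 10.0
    for _ in range(steps):
        pop += growth_rate * pop * (1 - pop / carrying_capacity)
    return pop

outcomes = []
for _ in range(1000):
    # Draw parameters from plausible ranges (the ranges are assumptions).
    rate = random.uniform(0.05, 0.25)
    capacity = random.uniform(800, 1200)
    outcomes.append(toy_growth_model(rate, capacity))

print("median outcome:", round(statistics.median(outcomes), 1))
print("5th-95th percentile range:",
      [round(q, 1) for q in statistics.quantiles(outcomes, n=20)[0::18]])
```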

External Critiques

Computational human geography has evolved out of the discipline’s quantitative revolution, and the external critique has mirrored that development. The basic ontological objections to representing the human world in terms of (x, y) coordinates and measurable attributes have had little reason to change since the 1960s. For much of social science, human and social phenomena are fundamentally irreducible; quantitative methods can only scratch the surface of what really matters. Thus, for many human geographers computational human geography is social physics that has evolved from the age of celestial mechanics to that of growing sand piles (an oft-cited example of complex system behavior in physics). If anything, the picture has only gotten worse, considering that the theory of computation is also known as the theory of abstract machines. It is understandable that those who have objections of principle to the use of physicalist and mechanistic representations of human beings may feel even more threatened by today’s clever and capable computational agents than by the point particles of the quantitative revolution. Indeed, in our technology-obsessed age, computation as a form of power and as alluring rhetoric may present new and troubling societal risks.

Beyond the evident inadequacy of computational models as representations of individual human consciousness is their apparent total inability to represent central social theory constructs such as gender, race, class, and society, or more elaborate notions such as the social construction of space or social/spatial justice. There seems to be no language that can effectively translate between these two domains of human geography, the computational and that of social theory. Cross-fence critiques had substantially waned by the early 2000s, the two sides appearing to have resigned themselves to coexisting in mutual disregard. Glimmers of a rapprochement are evident, however, in parts of the field under the banner of mixed research methods (quantitative/computational and qualitative). A particularly noteworthy development is the adaptation of GIS tools to socially engaged applications, as spearheaded, for example, by the public participation GIS (PPGIS) agenda. Persistent major differences in philosophy make it highly unlikely that computational human geography will be accepted by more than a fraction of the discipline in the foreseeable future. On the other hand, increasing numbers of scholars from other social sciences are already adopting approaches and methods from computational human geography in their own research.