Copyright Sociological Research Online, 1997

 

Hanneman, R. and Patrick, S. (1997) 'On the Uses of Computer-Assisted Simulation Modeling in the Social Sciences'
Sociological Research Online, vol. 2, no. 2, <http://www.socresonline.org.uk/2/2/5.html>

To cite articles published in Sociological Research Online, please reference the above information and include paragraph numbers if necessary

Received: 2/6/97      Accepted: 24/6/97      Published: 30/6/97

Introduction

1.1
Computer-assisted simulation modeling has become somewhat more common as a method of inquiry in the social sciences over the last quarter century. Substantial numbers of researchers, concerned with both abstract theorizing and concrete prediction, have adopted variations of the methodology. In some disciplinary areas the use of computer-assisted simulation modeling is a routine and highly accepted research method (e.g. management science, economics), as it is in most areas of the physical and life sciences. In other areas of the social sciences, the use of computer-assisted simulation methods is still rather exotic, or practiced by small groups of specialists.

1.2
The current issue of this journal has framed the discussion of simulation methodologies in the social sciences as a 'debate' over the issue of the utility of the method. The current authors see no need to debate the issue in the abstract; it will ultimately be settled by whether the approach produces pragmatically useful results - as it has in the physical, life, and many applied social science areas. We are pleased to have the opportunity, however, to briefly elucidate the nature of the method, its range, and some current trends; and, to discuss these questions with our colleagues.

1.3
Computer-assisted simulation research centers on formal models. We will begin with brief definitions of models and of the simulation method. We will then explain the general logic of the study of models by means of simulation, and the way in which this method provides useful knowledge about the naturally occurring phenomena that concern social scientists. We then briefly discuss some major variations of modeling approaches that speak to issues of 'macro-micro linkages' and 'structure versus agency', and contain some useful approaches for dealing with these difficult issues. Lastly, we will look at some of the current trends in simulation modeling that may be expected to make important contributions to social science applications.

Models and Simulation Methods

2.1
Simulation methods involve the creation of models, understanding the behaviour of the models by means of experimentation, and evaluation of the extent to which the behaviour of the models provides a plausible account of the behaviour of observed 'natural' systems (Hanneman, 1995). Central to the approach is the 'model'. For our purposes here, we may think of a model as an artificial object (specifically, a computer program) that is hypothesized by the researcher to provide an abstract representation of some aspects of social structures and processes.

2.2
Social scientists (and everyone else) use models all the time in efforts to 'understand'. Linguistic and graphical representations (thick descriptions, definitions, conceptual schemes, theories, causal and network diagrams, etc.) of social structures and processes are models. They are artificial objects that are used by researchers to provide representations of social structures and processes. Still, not all 'models' are the same. In comparing an ethnographer's description of a social setting and the actors and actions that occur within it to the output of an iterated n-actor game simulated on a computer, we might agree that both are results of the application of 'models'. Yet, the computer code seems to be a considerably more 'artificial' object than the ethnographer's description.

2.3
A 'simulation' is the act of subjecting the model to an experimental stimulus and observing its behaviour. Again, social scientists (and everyone else) fairly routinely use 'simulations', in this broad sense. When an observer records how an actor responded to an event, or when an interviewer asks a question and records the answer, or when a psychologist places subjects in a lab and manipulates the setting, an experiment is being conducted to understand an object by means of observing its response to a stimulus. Still, these differing methods of inquiry do not strike us as being identical, except in the most abstract way. Clearly, some of these methods seem more 'artificial' than others (with the observation of 'natural experiments' being least so, surveys and experiments with human subjects being the intermediate case, and runs of a computer program the most 'artificial' of the lot).

2.4
What is distinctive about computer models and simulation methods, then, is neither their use of abstract representations (models), nor their use of manipulation and observation as ways of understanding the objects of study. But, seeking to understand and account for regularities of social behaviour by the simulation of computer models is 'artificial' to a greater degree than many of our methodological approaches. This artificiality, we will next argue, is good and useful.

The Logic of the 'Science of the Artificial': Understanding by Simulation Experiments on Computer Models

3.1
Practitioners of simulation experimentation, and particularly those engaged in building programs that are intended to mimic closely aspects of human behaviour (e.g. 'artificial intelligence'), have worried a good deal about the logic, strengths, and limitations of their methodology. One of the classic statements on this subject, by Herbert A. Simon (1981), coined the useful phrase 'the sciences of the artificial' to describe these approaches. Computer-assisted simulation models are quite, though not uniquely, artificial objects of study. The scientific method for their use, however, is remarkably conventional.

3.2
One begins with observations and generalizations about some pattern of social behaviour, usually stated in 'natural language'. Randall Collins, for example, notes that there appear to be patterns of rise and fall in the amount of coordination and emotional energy generated in face-to-face groups of people (Collins and Hanneman, forthcoming). One method for understanding these dynamics is to build a machine that simulates them.

3.3
The construction of the machine (i.e. the model) requires a number of explicit choices about architecture (e.g. does one model the aggregate state of a group, or of individual actors?). More importantly, it requires that the analysts define the traits, and the relations among traits, of the individuals or group that are supposed to produce the rise and fall of group coordination and emotional arousal. That is, the model is one concrete realization of the prior theory. The model must also be 'parameterized'. That is, choices must be made about the forms and magnitudes of effects, distributions and initial scores on traits, and probabilities of events. In making these choices, the analyst again (usually) uses empirical data. With a particular set of choices about parameters and initial values, the machine has now become a model of a particular scenario chosen from the range of possible scenarios (parameterizations) that could be represented within the limitations of the architecture of the model.
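
To make the nature of these choices concrete, the fragment below (written in a contemporary high-level language) sketches one way such a 'machine' might be set down. The state variables, parameter names, and update rule are purely illustrative assumptions; they are not the specification of the Collins-Hanneman model, only an example of the kinds of decisions described above.

```python
# A minimal, hypothetical sketch of 'building the machine': the analyst
# chooses state variables, parameters, and a rule of motion.  Nothing here
# reproduces the actual Collins-Hanneman model; the names and equations are
# invented to illustrate the kinds of choices the text describes.

def step(state, params):
    """Advance the group's state by one 'tick' of simulated time."""
    energy, coordination = state["energy"], state["coordination"]
    # Hypothetical rule: coordination feeds emotional energy, energy decays,
    # and energy in turn raises coordination toward its ceiling of 1.0.
    new_energy = energy + params["arousal_gain"] * coordination - params["decay"] * energy
    new_coord = coordination + params["coupling"] * energy * (1.0 - coordination)
    return {"energy": max(new_energy, 0.0),
            "coordination": min(max(new_coord, 0.0), 1.0)}

# Parameterization: one concrete scenario chosen from the space of scenarios
# that this architecture can represent.
params = {"arousal_gain": 0.3, "decay": 0.1, "coupling": 0.05}
initial_state = {"energy": 0.2, "coordination": 0.5}
print(step(initial_state, params))
```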

3.4
Once the machine is constructed, and a particular scenario chosen by way of selecting parameter values and initial conditions, the model is 'simulated'. In the case of dynamic models (i.e. those where the algorithms describe how change in states occurs), the simulation is a series of iterative calculations which mimic the passage of time. The values of the states of the system are normally recorded as 'time' passes, and act as the 'data' for further analysis. Generally the analyst has designed a program of research that involves a number (sometimes a very large number) of scenarios that vary initial conditions and parameters, and data are collected on each for comparative analysis. Some research programs utilize explicit experimental design principles to select scenarios; others are more 'exploratory'. We might, to continue our example, design two scenarios of group interaction - one where actors are initially very similar to one another, and one where they are strangers - and collect information on the predicted levels of emotional arousal and rhythmic coordination of interaction in the simulated groups over a period of 'time'.
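
The self-contained sketch below illustrates this step: two hypothetical scenarios - actors who are initially very similar versus strangers - are iterated over simulated 'time', and the resulting trajectories are recorded as data for comparison. The rule of motion and all parameter values are, again, invented for the example.

```python
# An illustrative simulation run: two scenarios are iterated over simulated
# time and the trajectories recorded as 'data' for later analysis.  The
# update rule and parameter values are hypothetical.

def simulate(initial_coordination, steps=100, gain=0.3, decay=0.1, coupling=0.05):
    energy, coordination = 0.2, initial_coordination
    trajectory = []
    for t in range(steps):
        energy = max(energy + gain * coordination - decay * energy, 0.0)
        coordination = min(max(coordination + coupling * energy * (1.0 - coordination), 0.0), 1.0)
        trajectory.append((t, energy, coordination))
    return trajectory

scenarios = {
    "similar actors": 0.8,   # high initial rhythmic coordination
    "strangers": 0.1,        # low initial rhythmic coordination
}
data = {name: simulate(start) for name, start in scenarios.items()}

for name, run in data.items():
    t, energy, coordination = run[-1]
    print(f"{name}: energy={energy:.2f}, coordination={coordination:.2f} at t={t}")
```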

3.5
We now have 'artificial' data generated by an 'artificial' object. What have we learned? To this point, we have been doing research on the theory - not research about the 'real' world. If we have done our work well, we now have a fuller understanding of the limitations and implications of the theory itself. Ideally, we know the conditions under which (i.e. with what initial conditions and parameters) the theory implies certain patterns of results (e.g. what the theory implies about the conditions under which interaction within the group rises to a frenzied peak and collapses, or interaction continues at a stable low level, etc). Analyses may be highly formal and complex, involving the statistical modeling of large quantities of data; or they may be quite informal and intuitive. In any case, the results tell us about our theory - as viewed through the behaviour of a model that implements it.

3.6
The last step in the research process is model 'validation'. Having built a machine that embodies the principles of a theory in some understandable way, we now use it to make specific predictions about scenarios for which we are able to collect data by other means. Initial conditions within the model are now set to be identical to those in naturally occurring cases of the phenomenon under study and the model is simulated. The 'simulated' data are compared to data arising from the observation of the natural system (naturally occurring, quasi-experimental, or experimental). If the fit is poor (as it often is), we must arrive at the conclusion that there has been a failure of: (1) reliable and valid observation of the natural system, and/or (2) the model's implementation of the theory, and/or (3) the theory itself. If the simulation produces results that provide a good predictive fit to the data that arise from naturalistic observation, then we tentatively accept the theory - as implemented in the model - as a pragmatically useful tool for making predictions, until a more accurate or simpler tool can be found.
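
A minimal illustration of this validation step might look as follows. The 'observed' series, the simulated series, and the tolerance for an acceptable fit are all invented for the sake of the example; in practice the comparison would use whatever data and fit criteria the research design specifies.

```python
# An illustrative validation step: compare a simulated series against
# observations of the natural system with a simple goodness-of-fit measure.
# Both series and the tolerance are invented for this example.
import math

observed  = [0.20, 0.35, 0.50, 0.62, 0.70, 0.74]   # hypothetical field observations
simulated = [0.20, 0.35, 0.49, 0.61, 0.70, 0.76]   # model run under matched initial conditions

rmse = math.sqrt(sum((o - s) ** 2 for o, s in zip(observed, simulated)) / len(observed))

if rmse < 0.05:   # tolerance chosen by the analyst, not dictated by the method
    print(f"Fit acceptable (RMSE = {rmse:.3f}): theory tentatively retained.")
else:
    print(f"Fit poor (RMSE = {rmse:.3f}): question the data, the model, or the theory.")
```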

3.7
There is nothing unusual about the logic of research using simulation models. Observation, insight, and prior work give rise to a tentative theory which is stated quite formally and specifically in a model; the model is studied in itself as a way of working out (logically deducing) the implications of the theory across a range of scenarios. The theory is then tested by comparing simulation predictions for particular empirical cases to empirical data. Failure to make accurate predictions results in rejection of the model and/or theory; success in making predictions results in tentative acceptance of the theory and model.

3.8
Simulation models do differ from the models used in other research methodologies. While the differences are matters of degree rather than kind, they are important. Simulation researchers regard these differences as strengths; critics regard the differences as weaknesses. Simulation models are highly artificial in the sense that they are usually very simple and highly formal in expression. Simulation models vary in the degree to which they are stated in abstract quantities (e.g. the level of emotional energy in a group) versus 'directly' observable quantities (e.g. the levels of adrenaline of group members), but frequently they are 'artificial' in the sense of being stated in more abstract quantities. Practitioners of the method claim that these 'artificial' aspects of their work are important advantages of their approach.

3.9
The 'simplicity' of simulation models is often not apparent. Computer code, mathematics, and obscure technical jargon may hide the fact that most simulation models, at least conceptually, are quite straightforward. Simplicity is achieved by deliberate limitation of the number of actors and variables in models, and by use of (usually) simple algorithms that describe the rules of motion among them. Even if the models are, in these senses, 'simple', they are not necessarily 'easy'. Design, implementation, experimentation and validation can be a great deal of work. Simulation modelers tend to value 'simplicity' and 'elegance' in their work, and to dislike and distrust complication and specificity. These are natural preferences, given the amount of work that is involved in building complex machines. More importantly, simplicity in theory and model is necessary because we may be unable to comprehend the full implications of complex models. A model that is so complex that it cannot be comprehended in itself is of no general explanatory use. The simplicity is also intended to follow the principle of seeking the simplest and broadest possible explanations of social phenomena. A simple model that can, with differing parameterizations and initial conditions, produce a very broad range of realizations is preferred over a more complex model of the same range. Elegant and simple theories do not always turn out to be better theories, in the longer run of the development of scientific knowledge. But, until we can demonstrate the practical superiority of more complicated theories over simpler ones, simplicity is preferred.

3.10
In the search for simplicity and elegance, simulation modelers are often led toward developing models stated at very high levels of abstraction. Indeed, many interesting social science simulation models borrow quite directly from the form and content of models in the physical and life sciences - which are usually stated in highly abstract terms (force, mass, density, etc). Again, the use of more abstract and general concepts in theories and models would generally be preferred over more concrete and specific ones, given equal pragmatic utility. Often, it must be admitted, we have not demonstrated pragmatic utility. To the extent that simulation modelers do not concern themselves sufficiently with validation, we are left with an abstract and elegant model that bears no known relationship to empirical realities. This is not good, but it is not specifically a failing of the simulation method.

3.11
Simulation models are 'artificial' in that they are stated in highly formal terms. They are often stated in high-level computer languages, and consist of formal mathematical or logical operators applied to specific quantities. Simulation and mathematical modelers claim that these languages of expression, as opposed to everyday language, photographs, song, painting, or some other medium, are useful because they allow for compact, precise, and intersubjectively reliable communication of ideas among those who know the language. Practitioners feel that the formalisms are not restrictive (i.e. that they are able to translate other mediums of expression into the languages of models), and that the specific structure and syntax of formal languages often result in new insights (Hanneman, 1989). As with any other language of expression, the formalisms of simulation modeling do frame the perceptions of the practitioner. The same, of course, could be said to be true for the ethnographer and his/her field notes. The modeler's biases, again, are toward the abstraction, elegance, and simplicity that most regard as desirable goals for scientific explanation.

Types of Models and Modeling Technologies

4.1
In the paragraphs above we have sought to explain what simulation modelers do, and why they feel that their approach has some distinctive strengths. The general logic, method, and goals of computer-assisted simulation research have much in common across the various disciplinary social sciences, ranging from very abstracted theoretical work in non-linear systems and iterated game theory to planning models for traffic flows and designing optimal queuing systems for businesses. Simulation modelers use a variety of architectures and languages, and it is not possible in the space available here to provide a full survey.

4.2
One major division of approaches within the field, however, should be mentioned. Some modelers, consistent with traditions arising from 'systems' theory, mathematics, and many of the physical and life sciences, build models as sets of equations describing the hypothesized relations among sets of variables. Other modelers, more consistent with 'game' theory, laboratory experimentation, and small-groups research, build models as sets of interacting 'agents'. The distinction between 'agent' and 'variables' approaches is interesting because it speaks to some broader divisions among social scientists: macro versus micro, and structure versus agency.
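
The two styles can be set side by side in a toy sketch. Below, a 'variables' model updates aggregate quantities by equations, while an 'agents' model updates individual actors who interact; both rules and all quantities are inventions made for the purpose of contrast, not models drawn from the literature.

```python
# Two toy architectures contrasted: a 'variables' (systems) model updates
# aggregate quantities by equations; an 'agents' model updates individual
# actors who interact.  Both rules are invented for illustration.
import random

def variables_step(cohesion, conflict, a=0.1, b=0.2):
    # The group is a single entity described by two state variables.
    cohesion += a * (1.0 - cohesion) - b * conflict * cohesion
    conflict += 0.05 * cohesion - 0.1 * conflict
    return cohesion, conflict

def agents_step(opinions):
    # 'Cohesion' here is only an emergent summary of pairwise interactions.
    i, j = random.sample(range(len(opinions)), 2)
    midpoint = (opinions[i] + opinions[j]) / 2.0
    opinions[i] += 0.5 * (midpoint - opinions[i])
    opinions[j] += 0.5 * (midpoint - opinions[j])
    return opinions

cohesion, conflict = 0.5, 0.2
opinions = [random.random() for _ in range(20)]
for _ in range(200):
    cohesion, conflict = variables_step(cohesion, conflict)
    opinions = agents_step(opinions)

print(f"variables model: cohesion={cohesion:.2f}")
print(f"agent model: opinion spread={max(opinions) - min(opinions):.2f}")
```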

4.3
Generally, more 'macro' oriented researchers and theorists have thought about social action in terms of systems of abstracted variables. Most statistical models in the social sciences share this 'variables' oriented view. Such approaches, however, are often accused of being reifications, and of denying human actors' agency. That is, there is the tendency to see the abstracted system of variables as 'real', and the actions of human actors as 'realizations' or 'exemplars' of the more abstracted theory. More 'micro' researchers, in contrast, tend to describe the social world in terms of individual actors, rather than variables. Structure is seen as emergent from the interactions of individual actors. What is 'real' are the actors and their interactions; 'systems' or relations among variables are abstractions, unless they are institutionalized as emergent constraints on agents.

4.4
Reductionism of either the pure 'macro' or pure 'micro' types is an uncommon theoretical position in the social sciences these days. Rather, most theorists seek to develop explanations that operate at both (or multiple) levels. Most simulation models, however, have tended to operate at a single level. 'Systems' modelers (such as the current authors) have tended to model single entities composed of relatively large numbers of variables, and complex non-linear relations among them. We have been somewhat reluctant to create models of sub-systems, let alone multiple interacting actors. Agent or 'game' modelers have, conversely, been reluctant to add dynamic macro properties to their models, instead seeking to explain macro behaviour by more and more complex interactions of larger numbers of actors. Models of either type are likely to be regarded by critics as overly simplistic and reductionist. Much of both 'systems theory' and 'game theory' tends toward reductionism, and these general perspectives are common among social science simulation modelers.

4.5
Contemporary developments in computer capacity and programming languages are enabling considerable change in the architecture of simulation models. These changes hold considerable promise for being able to deal more rigorously with 'multi-level systems' or 'embeddedness'. The conceptual breakthrough is 'object orientation', the paradigm of the most recently emerging programming languages. These languages allow the creation and manipulation of data objects that are multi-level and embedded; operations on these objects (including their 'birth' and 'death') may be performed within and between levels. Contemporary programming languages allow and encourage the analyst to develop models composed of, for example, social roles and knowledge embedded in persons, embedded in intimate groups, embedded in communities, etc. The 'laws of motion' of the system can operate simultaneously within and between levels. Models of these types hold the promise of realistic complexity while maintaining the rigor and much of the analyzability of simpler models. Rather than attempting to reduce the complexity of social dynamics, we may now strive to understand it within the framework of rigorous formal models.
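
A hypothetical sketch of this object-oriented style is given below: persons embedded in groups, and groups embedded in a community, with simple 'laws of motion' operating both within and between levels. The class names, attributes, and rules are illustrative assumptions rather than features of any particular language or published model.

```python
# An illustrative multi-level, embedded model in object-oriented style:
# persons within groups within a community, with rules operating both
# within and between levels.  All names and rules are hypothetical.

class Person:
    def __init__(self, energy):
        self.energy = energy

class Group:
    def __init__(self, members):
        self.members = members
        self.solidarity = 0.0

    def step(self):
        # Within-level rule: solidarity tracks the members' average energy.
        mean_energy = sum(p.energy for p in self.members) / len(self.members)
        self.solidarity += 0.1 * (mean_energy - self.solidarity)
        # Between-level rule: group solidarity feeds back on each member.
        for person in self.members:
            person.energy += 0.05 * (self.solidarity - person.energy)

class Community:
    def __init__(self, groups):
        self.groups = groups

    def step(self):
        for group in self.groups:
            group.step()

community = Community([Group([Person(0.2), Person(0.8)]),
                       Group([Person(0.5), Person(0.6)])])
for _ in range(50):
    community.step()
print([round(g.solidarity, 2) for g in community.groups])
```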

Some Current Trends: Chaos, Complexity, Fuzziness, Self-Organization, And Artificial Life

5.1
Simulation modeling in the social sciences is an adaptation of approaches that arose in the physical and life sciences. Many social science modelers track developments in fields as diverse as population ecology, meteorology, physics, and computer science, and seek to import ideas from these fields. Some of these ideas, which are beginning to find their way into the theories and models of simulation practitioners in the social sciences, may hold considerable promise for dealing with some classes of social phenomena that have been somewhat intractable.

5.2
Chaos, catastrophe, and other variations of non-linear systems theory have been of interest in the physical and life sciences for some time. Models utilizing these ideas are efforts to explain how small changes can sometimes have unexpectedly severe consequences, and how systems that seem remarkably stable may rather suddenly display large quantitative, or even qualitative, shifts. They also emphasize how social action may display general patterns and tendencies that are realizations of general laws, without ever producing two events, groups, or interactions that are identical. Recently, several models displaying chaotic dynamics have appeared in the sociological literature (e.g. Hanneman et al, 1995; Patrick, 1995; Leik and Meeker, 1995). Models using these ideas have the intuitive appeal of providing formal ways of dealing with sudden shifts in social structures and dynamics, and with non-repeatable events that are still deducible from general laws.
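
A standard one-line illustration of such behaviour, borrowed from the non-linear dynamics literature rather than from the sociological models cited above, is the logistic map: two trajectories that begin almost identically soon diverge completely, even though the rule generating them is simple and fully deterministic.

```python
# The logistic map, a textbook example of deterministic chaos (not one of
# the sociological models cited above): two nearly identical starting
# points produce trajectories that soon bear no resemblance to one another.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x1, x2 = 0.300000, 0.300001     # nearly identical initial conditions
for _ in range(40):
    x1, x2 = logistic(x1), logistic(x2)

print(f"after 40 steps: x1={x1:.4f}, x2={x2:.4f}, gap={abs(x1 - x2):.4f}")
```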

5.3
Chaos theory has attracted the attention of macro systems modelers; complexity theory has found an audience among micro agent modelers in the social sciences. Complexity theory, very roughly, seeks to provide accounts of how macro patterns of system behaviour may emerge from the aggregation of large numbers of local interactions. In the social sciences, such ideas have potentially enormous importance for explaining how macro properties such as solidarity and hierarchy may emerge from micro interaction (Mihata and Stine, 1997).
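
A toy sketch in this spirit appears below: actors arranged on a ring repeatedly copy the behaviour of a randomly chosen neighbour, and larger and larger uniform blocks - a crude 'macro' pattern - emerge from purely local interaction. The copying rule and the sizes used are invented for the example.

```python
# An illustrative micro-to-macro sketch: actors on a ring copy a random
# neighbour's behaviour, and uniform blocks emerge from local interaction.
# The rule (a simple 'voter'-style copying rule) and sizes are illustrative.
import random

random.seed(1)
N = 60
acts = [random.choice([0, 1]) for _ in range(N)]      # two arbitrary behaviours

for _ in range(2000):
    i = random.randrange(N)
    neighbour = (i + random.choice([-1, 1])) % N
    acts[i] = acts[neighbour]                          # purely local copying

boundaries = sum(1 for i in range(N) if acts[i] != acts[(i + 1) % N])
print("".join(str(a) for a in acts))
print(f"block boundaries remaining: {boundaries}")
```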

5.4
After reaching something of a standstill, the field of artificial intelligence is again moving forward and providing some interesting raw material for social scientists. Efforts to create machines that perceive and classify, using the architectures of neural networks and fuzzy logic, have raised questions for modelers seeking to understand how social organizational structures (from individuals to large-scale formal organizations) perceive, learn, remember and recall, and assign meaning to their environments (Bainbridge, 1995). Perhaps most exciting, and potentially most troubling, are models of self-organizing systems and 'artificial evolution', which are currently a major focus of concern in the physical and life sciences. Rather than accepting static initial conditions and parameters and iteratively calculating outcomes, artificial life systems may evolve by mutation and selection, and may learn from success or failure in their environments. The logic of such models is appealing, as they mimic a set of dynamics (mutation, selection, learning, and retention) that have proven to be very useful as explanations of many physical phenomena. Many social science theories (e.g. the population ecology of organizations) have the same essential arguments. For all of their promise, 'artificial life' models are also somewhat problematic. Because they change their own structures and learn, it may become increasingly difficult for analysts to understand the behaviour of their own models when genetic and other such algorithms are embedded in models of social structures.
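
The flavour of such models can be conveyed by a minimal mutation-and-selection loop. The bit-string encoding and the toy fitness function below are invented purely for illustration and stand in for whatever substantive traits and environmental pressures a social application would specify.

```python
# A minimal mutation-and-selection loop in the 'artificial life' spirit:
# candidate bit-strings are copied with occasional mutation, and the
# better-scoring copies are retained.  Encoding and fitness are toy choices.
import random

random.seed(2)

def fitness(genome):
    return sum(genome)          # toy criterion: prefer strings with more ones

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]

for generation in range(50):
    # Selection: keep the better-scoring half of the population.
    population.sort(key=fitness, reverse=True)
    survivors = population[:15]
    # Reproduction with mutation: each survivor leaves a slightly altered copy.
    children = [[1 - gene if random.random() < 0.02 else gene for gene in parent]
                for parent in survivors]
    population = survivors + children

best = max(population, key=fitness)
print(f"best fitness after 50 generations: {fitness(best)} / 20")
```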

Conclusion

6.1
We have tried briefly to explain the nature of computer-assisted simulation models, and the logic of inquiry using them in the social sciences. We have also sought to provide, in even briefer form, a sketch of some of the major issues and directions in simulation modeling.

6.2
We have argued that simulation models perform much the same function in inquiry as other conceptual and theoretical approaches. A computer simulation model is a specific implementation of the ideas of more abstract theory, which enables us to deduce the implications of that theory under specific conditions. Like a typology or a statistical model, a computer simulation model is an effort to bridge the gap between more general theory and more specific empirical circumstance.

6.3
Simulation models are, and are intended to be, 'artificial'. We do not intend to build a machine model that is a social community; rather, we seek to build an artificial object (following the blueprint of theory) that looks, feels, and behaves (at least on paper, or on the computer screen) like a social community. If we are able to build a model that behaves (at least in some regards) like a 'real' community, then we can understand and explain the social community with our theory, at least until a better one comes along. Using formal languages for developing models and using computers to perform simulation experiments with the models are not logically essential to the enterprise of inquiry, although they are essential for practical reasons.

6.4
Computer simulation models have been (and still are, to a large degree) fairly crude objects, and defective in many ways for formalizing complex theories and describing complex empirical situations. Considerable progress is being made, however, with new languages and more powerful computers. The distinctions between variables and agents approaches, and between macro and micro models are being overcome to provide languages that are richer in their ability to implement the ideas of theory and make predictions across more complex empirical situations.

6.5
By our advocacy of the strengths and utility of simulation modeling, we do not mean to devalue the importance of other methodological tools. Simulation modeling is one method for connecting theory to data, but not the only one. As a language for models, it retains many of the strengths of mathematics and formal logic while being able to deal with more complex empirical realizations. Simulation modeling does not replace theory or empirical inquiry. It acts as one bridge between these two enterprises.

References

BAINBRIDGE, William Sims (1995) 'Neural Network Models of Religious Belief', Sociological Perspectives, vol. 38, no. 4, pp. 483 - 496.

COLLINS, Randall, and Robert HANNEMAN (Forthcoming) 'Modeling Interaction Ritual Theory of Solidarity' in Patrick Doreian and Thomas J. Fararo (Editors) The Problem of Solidarity: Theories and Models. London: Gordon and Breach.

HANNEMAN, Robert (1989) Computer-Assisted Theory-Building: Modeling Dynamic Social Systems. London: Sage Publications.

HANNEMAN, Robert (1995) 'Simulation Modeling and Theoretical Analysis in Sociology', Sociological Perspectives, vol. 38, no. 4, pp. 457 - 462.

HANNEMAN, Robert, Randall COLLINS, and Gabriele MORDT (1995) 'Discovering Theory Dynamics by Computer Simulation: Experiments on State Legitimacy and Imperialist Capitalism' in Peter V. Marsden (Editor) Sociological Methodology. Cambridge, MA: Blackwell.

LEIK, Robert K. and Barbara F. MEEKER (1995) 'Computer Simulation for Exploring Theories: Models of Interpersonal Cooperation and Competition,' Sociological Perspectives, vol. 38, no. 4, pp. 463 - 482.

MIHATA, Kevin and Howard STINE (1997) 'Complexity in Social Theory: The Leading Edge of an Epistemological Revolution?', paper presented at the meetings of the Pacific Sociological Association, San Diego, California.

PATRICK, Steven (1995) 'The Dynamic Simulation of Control and Compliance Processes in Material Organizations', Sociological Perspectives, vol. 38, no. 4, pp. 497 - 518.

SIMON, Herbert A. (1981) The Sciences of the Artificial (2nd edition). Cambridge, MA: MIT Press.
