Artificial Societies, Virtual Worlds and the Shared Problems and Possibilities of Emergence
Computer-mediated modeling in economics and political science; simulation in the natural sciences; computer science, particularly work on cellular automata and on autonomous-agent and multiagent programming
Evolutionary economics and game theory, particularly Robert Axelrod’s research on cooperation and altruism.
• Major publications or outlets
Joshua Epstein and Robert Axtell, Growing Artificial Societies
Nigel Gilbert and Jim Doran, eds., Simulating Societies
The Journal of Artificial Societies and Social Simulation
Santa Fe Institute, Univ. of Michigan Center for the Study of Complex Systems, Brookings Institution
• Points of departure
Traditional “top-down” modeling and simulationist instruments in the social sciences are tautological and static, and tell us much less about real-world social institutions, practices and structures than is commonly presupposed in mainstream social science.
Agent-based computing, which is canonically “emergent” in its design, allows for “bottom-up” simulations that have the potential to resemble social reality much more closely. Such simulations have the potential to be truly experimental instruments with much more rigorous falsifiability, and to deemphasize the selective or manipulative intrusions of a researcher into the scenario or situation being studied. Epstein and Axtell: artificial-societies approaches may someday permit the creation of a “total” social science in which human agents and human societies are modeled at a nearly one-to-one correspondence with the actual real-world complexity of human actors and human institutions.
• Major research findings to date
Epstein and Axtell’s “Sugarscape” and similar environments by other researchers: relatively simple CA or other “emergent” simulations which deal with one or two major social dynamics at a very general level, such as trading behaviors, food acquisition, or residential segregation. Numerous connections to artificial-life research programs have also been highly generative and suggestive.
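The residential-segregation case above is canonically illustrated by Schelling’s model, the classic example of complex macro-patterns emerging from simple agent rules. A minimal sketch follows; the grid size, vacancy rate and tolerance threshold are illustrative choices of mine, not parameters from any published study:

```python
import random

def same_type_share(grid, pos, size):
    """Fraction of an agent's occupied Moore neighbors that share its type."""
    x, y = pos
    neighbors = [grid[((x + dx) % size, (y + dy) % size)]
                 for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                 if (dx, dy) != (0, 0)]
    occupied = [n for n in neighbors if n is not None]
    if not occupied:
        return 1.0
    return sum(1 for n in occupied if n == grid[pos]) / len(occupied)

def sweep(grid, size, threshold, rng):
    """Move every unhappy agent to a random empty cell; return number moved."""
    empties = [p for p, v in grid.items() if v is None]
    moved = 0
    for pos in list(grid):
        if grid[pos] is None:
            continue
        if same_type_share(grid, pos, size) < threshold and empties:
            dest = empties.pop(rng.randrange(len(empties)))
            grid[dest], grid[pos] = grid[pos], None
            empties.append(pos)
            moved += 1
    return moved

def mean_share(grid, size):
    shares = [same_type_share(grid, p, size) for p in grid if grid[p] is not None]
    return sum(shares) / len(shares)

rng = random.Random(0)
SIZE = 20
cells = [(x, y) for x in range(SIZE) for y in range(SIZE)]
rng.shuffle(cells)
# 90% of cells occupied by two agent types in equal numbers; 10% vacant.
grid = {pos: ("red" if i % 2 else "blue") if i < int(0.9 * SIZE * SIZE) else None
        for i, pos in enumerate(cells)}

before = mean_share(grid, SIZE)
for _ in range(50):
    if sweep(grid, SIZE, threshold=0.4, rng=rng) == 0:
        break
after = mean_share(grid, SIZE)
print(f"mean same-type neighbor share: {before:.2f} -> {after:.2f}")
```

Even though each agent tolerates being a local minority (content with 40 percent same-type neighbors), repeated local moves typically push the population’s average same-type neighbor share well above its random starting level: a very general social dynamic emerging from a very simple ruleset.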
Applied models and simulations dealing with real-world phenomena that seem well-suited to an emergence-themed approach, such as the spread of the HIV pandemic or civilizational collapse (in this instance, of the Anasazi civilization of the American Southwest).
Mid-1970s computer science led to the design of multi-user dungeons, or MUDs: persistent text-based computer-mediated environments accessed synchronously by a number of users. The MUD model was later iterated and diversified by both computer scientists and hobbyists in the 1980s, with some MUDs designed specifically for researching social dynamics and social psychology. Commercially run graphical virtual worlds launching in the late 1990s with millions of users spurred scholarly interest in the social, political and psychological dimensions of virtual worlds, both in terms of established questions within mainstream social science disciplines and in terms of applied insights about the design of future virtual worlds.
• Major publications or outlets
Richard Bartle, Designing Virtual Worlds
Edward Castronova, “Economics of Virtual Worlds”
Dan Hunter and Greg Lastowka, “The Laws of the Virtual Worlds”
Nick Yee, The Daedalus Project (website clearing-house for variety of research findings)
Julian Dibbell, “The Unreal Estate Boom”
Raph Koster, A Theory of Fun
Web sites: Grand Text Auto, Ludology, The Ludologist, Game Studies, Terra Nova
• Points of departure
Virtual worlds as fractured mirror of particular real-world forms of sociality, economics or psychology
Virtual worlds as organic, unplanned, uncontrolled but informative models or simulations of general social and psychological dynamics
Major areas of interest are social psychology and identity, economics (both the interior economics of virtual worlds and their economic interface with the real world), and the evolution of social, legal and political institutions within virtual societies.
• Major research findings to date
Psychological identification with virtual characters (“avatars”) can be deep, transformative and powerfully expressive of identity and desire, but is also strongly conditioned by the visual structure of the game interface and limited to particular kinds of social psychological domains, primarily those involving gender and sexuality.
Most existing virtual worlds are intensely economistic in their dynamics, and have complex internal economies which can be described and studied with considerable scholarly rigor. Virtual economies also generate staggering real-world wealth for some players: one current estimate holds that the annual worldwide trade in virtual items on auction sites like eBay may be as much as $880 million.
Virtual worlds are governed or socially structured both by the code or managerial interventions of their designers and by informal, repeated practices of their users—practices that vary significantly in their intentionality and self-awareness. Some of these practices cohere around formal social and economic institutions—the use of cartels or guilds to manipulate supply and demand of virtual items is common.
The Overlapping Possibilities and Problems of Emergence
Emergence and related phenomena are seen as the central underpinnings of the functionality or usefulness of the entire approach. Emergence is taken both as a sign of the resemblance between artificial society simulations and the real world, and as a protection against tautological manipulation of the simulation. If a given simulation can produce complex behaviors or patterns from simple agent-based starting conditions, many artificial society researchers take that as a reasonable confirmation that real-world complexities of a similar kind have followed a similar process of evolutionary development.
a) The lack of contingent outcomes originating from later events in a system's evolution, which allows researchers to search the possibility space of initial conditions in order to produce desired complex systems or results.
b) The issue of scales and levels: there are a few innovative ideas, but for the most part no one in this field has hit on a meaningful strategy for simultaneously simulating emergent processes at multiple scales or levels of a complex system.
c) The epistemological particularity of methodological individualism, or an agent-based approach. This problem is most sharply illustrated by the difference between agents which are designed to have beliefs, desires and intentions (BDI), and therefore to act “irrationally” within the simulation, and rational utility-maximizing agents. There are some interesting strategies for softening the edge of this distinction: say, for example, differentially supplying information about the simulation environment to heterogeneous populations of Nash-equilibrium rational agents in order to approximate unequal distributions of a “bounded rationality”.
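The differential-information strategy mentioned above can be sketched very simply: rational maximizers that each see only a random subset of the environment behave, in aggregate, like a population with unequally bounded rationality. The function name and parameters below are illustrative, not drawn from any particular simulation framework:

```python
import random

def bounded_choice(options, visible_k, rng):
    """A utility-maximizer that sees only a random subset of the options:
    perfectly rational within its information, bounded overall."""
    visible = rng.sample(options, min(visible_k, len(options)))
    return max(visible)

rng = random.Random(1)
# Payoffs of 50 possible actions in the simulated environment.
options = [rng.uniform(0, 100) for _ in range(50)]

# Heterogeneous population: the information endowment k varies across agents.
avg_payoff = {}
for k in (2, 10, 50):
    draws = [bounded_choice(options, k, rng) for _ in range(200)]
    avg_payoff[k] = sum(draws) / len(draws)

for k in sorted(avg_payoff):
    print(f"k={k:2d}: average realized payoff {avg_payoff[k]:.1f}")
```

Fully informed agents (k equal to the number of options) always find the optimum; poorly informed agents remain “rational” over what they can see but systematically underperform, so a single decision rule yields an unequal distribution of effective rationality.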
d) The difficulty of measuring or quantifying end states of simulations for purposes of rigorous comparison. Human observers see what look like differences at later stages of such simulations, but giving rigorous descriptions of such differences may be very difficult or even impossible.
e) All of this adds up to the practical and ontological problem of mimesis. It is hard to do directed simulations if you maintain that you necessarily don’t know what kinds of complex structures particular starting conditions will create.
If you could create an artificial society which contained all the complexity of real-world sociality, what would you learn from it that you cannot learn from simply observing the real world? You can’t subdivide the results of a hugely complex social simulation very easily (see problem d.) and thus cannot really have a “god’s eye view” of such a simulation. The one thing that maybe you could learn would come from iteration: if a fully mimetic artificial society always turned out the same, might that tell you something about social reality? But see problem d. (how do we know it’s the same) and problem a. (lack of contingency may be a way in which emergence in artificial society simulations fundamentally breaks its mimesis to the complexity of reality).
Emergence is proposed both as an explanation of social and economic behaviors within virtual worlds and as a problem-solving design strategy for making future virtual worlds more satisfying aesthetic experiences.
The latter is a particularly important area of work for both scholars and designers. It is understood that the more complex these virtual worlds grow, the less possible it is for even an extremely well-funded development team to directly author or create all aspects of the virtual environment, and even in relatively simple virtual worlds, the interactions of players with that environment must be automated. Events where a human “imagineer” directly manipulates the gameworld to provide an unscripted experience for players are extremely popular but pose essentially insurmountable practical problems. Strategies which turn on emergence are seen as an important answer to these issues, as a way to provide a responsive virtual world which changes dynamically in response to the actions of users without the direct or controlling intervention of human authors. When I proposed a more extensively authored strategy for storytelling in virtual worlds--basically, a complex interactive design that resembles hypertext novels--many other virtual-worlds scholars objected strongly on the grounds that only emergent strategies of world design can provide aesthetically satisfying answers to the problems I described.
But emergence is also seen as the central explanation of the evolution of player behavior and activity within virtual worlds. The key scholarly question here is: are the complex social and economic structures within virtual worlds a result of prior orientations and motivations that players bring with them or a truly emergent result of the interactions of players with the underlying rules of the environment?
Many of the designers of virtual worlds are trying to engineer the Turing Test in reverse, to reduce complex human agents into fairly simple rules-driven software agents, and they are in turn continually frustrated by the capacity of human players to turn the rules-constrained capacities of their avatars to unexpected ends: some of these ends fit the underlying social logics of the virtual world, while others seem to break it altogether. They are also continually confronted by players who chafe at the reduction of their complexity to rules-driven simplicity and demand a virtual world whose interior complexity is more meaningfully expressive of, and responsive to, the human complexity of the players.
If players find an unanticipated property of the ruleset and use it to more efficiently or rapidly satisfy some deeper structural drive within the gameworld, is that an example of the emergent properties of human beings breaking through?
Example: Warwolf hunting in City of Heroes or perching in Asheron’s Call
Players are extremely quick to identify heuristics of gameplay that produce superior risk-reward efficiencies, in particular those which involve unintended or accidental properties of the virtual environment. In Asheron’s Call, for example, within a month of the game going live, players found several spots on the virtual landscape from which one could attack enemies with ranged weaponry such as a bow and arrow while the enemies typically found in that location could not respond in kind; this practice became known as “perching”. A more recent but similar case in the game City of Heroes involved players with the power to hover six feet or so off the ground attacking enemies called warwolves, which happened to lack the ability to attack at range.
Both of these discoveries about the physics of the virtual world produced fairly profound structural consequences within the society of each virtual world. In Asheron’s Call, for example, many players junked their previously established avatars in order to create archers. Competition for a limited number of perching spots created scarcity effects which had not previously existed. Players who were able to secure a perching spot increasingly used “macros”, automated routines that allowed their avatars to execute the same repeated sequence of attacks without the player actually having to supervise or observe the game, and so took up permanent residence at the perches. Characters using macros and permanently occupying perches became powerful at extremely disproportionate rates, and thus disproportionately capable of extracting resources and wealth from the virtual world at an accelerating rate of differentiation from the bulk of players. Something of the same thing happened in City of Heroes, though that game has much less of an internal economy: players selected the power to hover in increasing numbers, and areas of the gameworld with warwolves became increasingly crowded, producing both scarcity effects and burdens on the underlying technology.
Designers of both games eventually intervened, eliminating “perching” spots and giving warwolves the ability to throw large rocks at hovering enemies. In City of Heroes, the consequences were largely limited to a population of players who became more powerful at slightly accelerated rates and the proliferation of the power to hover through the gameworld. In Asheron’s Call, the consequences were more structurally profound: a growth in the number of archers, for one, but also a permanent economic advantage for cartels or groups of players who discovered and then monopolized the perches, which in turn attracted more players to those cartels, which in turn made them more powerful still.
Simple initial conditions thus produced complex and persistent social structures which even developers could not undo by changing the initial conditions. In fact, in an odd way, by removing perching spots, the developers actually solidified the complex social structures that their environment had helped to create. This is pretty much a general law of virtual worlds: within the first month of their existence, emergent structures of gameplay form which powerfully reproduce themselves many years later, even after the conditions which produced those practices have changed and the entire population of the game has turned over.
Example: Power-law distribution of wealth in Star Wars: Galaxies
Raph Koster, the game’s designer, proposes that this happened because human beings really are rational utility-maximizers and because the intrinsic accumulative drive of human beings naturally produces a power-law distribution of wealth when that drive enters a new system. E.g., the simple initial condition of the game is players as agents who possess a fairly basic internal ruleset (accumulate); early differentiations in initial placement, amount of time played and efficiency of play produce a complex, structured economic distribution of wealth over time, which eventually becomes a self-reproducing structure. Neither players nor designers intend to have a power-law distribution of wealth, and in fact it poses problems for the social state of the virtual world later on. The players at the top of the distribution curve are bored, as their wealth is so extreme that there is really nothing they can do with it within the gameworld, while the bulk of players at the lower end of the curve may apathetically drop out of economic activity altogether, as they cannot meaningfully compete against the players at the favorable end of the distributional curve.
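The mechanism Koster proposes (identical accumulative agents plus small early advantages that compound) can be caricatured with a preferential-attachment toy model. This is a hypothetical sketch of mine for illustration, not Koster’s own formalization, and the rich-get-richer weighting is my stand-in for compounding advantages of placement, time played and efficiency, not a rule from Star Wars: Galaxies:

```python
import random

rng = random.Random(7)
N_AGENTS, ROUNDS = 500, 20000

# All agents start identical with one unit of wealth. Each round, one new unit
# of wealth enters the world and is captured by an agent with probability
# proportional to that agent's current holdings (rich-get-richer).
wealth = [1.0] * N_AGENTS
for _ in range(ROUNDS):
    winner = rng.choices(range(N_AGENTS), weights=wealth, k=1)[0]
    wealth[winner] += 1.0

wealth.sort(reverse=True)
top_decile_share = sum(wealth[:N_AGENTS // 10]) / sum(wealth)
print(f"share of total wealth held by top 10% of agents: {top_decile_share:.2f}")
```

Even though every agent runs the same “accumulate” ruleset, the final distribution is heavily skewed toward a small fraction of agents: early random luck compounds into a persistent, self-reproducing hierarchy of wealth.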
I propose: because the game’s underlying rules vest persistence in the character and not the gameworld, and limit the character’s persistently meaningful interactions with the gameworld to accumulative activities, the designers have essentially constrained the local version of “human nature” to inevitably produce a power-law distribution of economic wealth. E.g., there’s nothing you can do which actually changes the dynamic state of the gameworld in a permanent or persistent way—it always returns to a more or less constant state. Moreover, almost the only thing you can do to change your own character or express the character’s goals is to accumulate, either “experience” (which progressively empowers the character to create or extract more wealth from the environment) or objects. Some objects are merely expressive or aesthetic, others are extractive tools, but all of them have exchange-value within the gameworld, and so even purely expressive objects are a form of wealth.
So here Koster is saying that the emergent structure which appears in the gameworld is a mimetic confirmation of a larger social or natural reality--closely paralleling one aspiration of the artificial society literature. I am saying that the emergent structure which appears in the gameworld is substantially a result of the initial conditions and internal ruleset of the agents as they are encoded within the virtual world itself, not of the players themselves. In my argument, if the ruleset allowed human agents to engage in persistently marked or meaningful activities which were not directed at individual accumulation--say, for example, if the aggregated efforts of many players permanently altered the physical laws or terrain of the virtual world--this difference would produce entirely different emergent complexities. Koster’s claim is that players would attempt to carry out individually accumulative gameplay regardless, and produce power-law distributions of wealth as a result.
Comparisons: Artificial societies and virtual worlds
What artificial societies researchers want to achieve with software agents, virtual world designers have already achieved with human beings constrained to act like software agents. The lack of clarity that the situation in virtual worlds produces about the ultimate causation of emergent complexities--do they come from the prior complexity of the humanity of players, or from the unintended and undesigned interactions of the rulesets to which human players are limited?--should serve as a warning to artificial society simulators that the total or rigorous social science they are seeking may not be achievable.
The basic property of emergence that poses both possibilities and problems for both sets of scholars is that it both links and obscures the causal relationship between simple initial conditions and complex structural results. Both groups want to be able to make fairly definitive statements about this relationship, but emergence as a phenomenon appears to function as a kind of “inscribing erasure”. It makes clear that there is a dynamic and linear causal relation between initial conditions and complex results--something that many other time-dynamical approaches in social science do not assume or demonstrate, and that non-dynamic modeling simply avoids. But emergence also forbids one-to-one correspondences between particular initial conditions and particular complex structures.
Artificial societies researchers need to be able to make that correspondence in order to make the kinds of findings that they ultimately envision their work as delivering. Virtual worlds researchers need to be able to meaningfully distinguish between the emergent causality of game code or rules and the causality (which might also be described as emergent) of social practice in virtual worlds which derives from the predicates that players bring to each new world. In both cases, the very things that make emergence both interesting and empirically important in these two contexts may also be what prevents these groups of researchers from definitively knowing what they want to know.
That being said, it seems to me that the virtual worlds research might help some of the artificial societies researchers understand the flaws in their drive towards simulation as a controlled experiment, while the work on artificial societies might help some virtual worlds researchers and designers understand how narrow the range of models and dynamics actually is within their field. In many ways, the interests of the two groups are closely linked.