PHYSICAL REVIEW E 90, 042134 (2014)

Evolutionary dynamics of the traveler’s dilemma and minimum-effort coordination games on complex networks

Swami Iyer
Department of Physics, University of Massachusetts, Boston, Massachusetts 02125-3393, USA

Timothy Killingback*
Department of Mathematics, University of Massachusetts, Boston, Massachusetts 02125-3393, USA

*[email protected]

(Received 3 March 2014; published 22 October 2014)

The traveler’s dilemma game and the minimum-effort coordination game are social dilemmas that have received significant attention because the predictions of classical game theory are inconsistent with the results found when the games are studied experimentally. Moreover, both the traveler’s dilemma and the minimum-effort coordination games have potentially important applications in evolutionary biology. Interestingly, standard deterministic evolutionary game theory, as represented by the replicator dynamics in a well-mixed population, is also inadequate to account for the behavior observed in these games. Here we study the evolutionary dynamics of both these games in populations with interaction patterns described by a variety of complex network topologies. We investigate the evolutionary dynamics of these games through agent-based simulations on both model and empirical networks. In particular, we study the effects of network clustering and assortativity on the evolutionary dynamics of both games. In general, we show that the evolutionary behavior of the traveler’s dilemma and minimum-effort coordination games on complex networks is in good agreement with that observed experimentally. Thus, formulating the traveler’s dilemma and the minimum-effort coordination games on complex networks neatly resolves the paradoxical aspects of these games.

DOI: 10.1103/PhysRevE.90.042134

PACS number(s): 02.50.Le, 87.23.Ge, 87.23.Kg, 64.60.aq

I. INTRODUCTION

Social dilemmas, which epitomize the conflict between individual benefit and the common good, underlie many of the most important and intractable problems in the biological and social sciences, including the evolution of cooperation [1] and the use of shared finite resources [2]. In a social dilemma, individually self-interested behavior yields a situation in which all individuals are less well off than they could have been [3]. From a formal point of view, a social dilemma is a game in which there exists at least one Hicks inefficient (i.e., socially inefficient) Nash equilibrium: It is Hicks inefficient as there is at least one other outcome in which all individuals would be better off, and since it is a Nash equilibrium no individual has any incentive to change their behavior [3]. Examples of two-person social dilemmas include the prisoner’s dilemma [1,4], the snowdrift (or chicken or hawk-dove) game [5,6], and the stag-hunt (or assurance) game [7]. Multiperson social dilemmas include the public goods game [8] and the tragedy of the commons [2]. The original formalizations of these social dilemmas typically involved discrete-strategy games; however, more recently continuous-strategy versions of many of these social dilemmas have been formulated and studied (see Refs. [9–12]). Of particular fascination is a class of social dilemmas for which the predictions of game theory appear to be inconsistent with experimentally observed behavior [13]. Two notable examples of such games are the traveler’s dilemma (TD) game [14–16] and the minimum-effort coordination (MEC) game [16,17]. These games challenge the belief that the Nash equilibrium concept accurately describes the behavior of


humans in these social dilemmas. In the TD game there exists a unique pure strategy Nash equilibrium that is undesirable for everyone. In contrast, in the MEC game every common pure strategy is a Nash equilibrium, and thus the Nash concept lacks any prescriptive power. The standard way of introducing the TD game is through an allegory of the following type. Two travelers, on their return journey from a foreign country, discover that their luggage, which contains identical souvenirs, has been lost by the airline. The official in the claims department puts them in separate rooms, hands each of them a claims form, and tells them that they can claim any integer amount between M + 1 and 2M, where M is assumed to be a positive integer. Moreover, he informs them that if they both claim the same amount, they will be paid that amount (as they will be assumed to have been truthful), and if they claim different amounts, each will be reimbursed the lower value but with a penalty R (where 1 ≤ R ≤ M is assumed to be an integer) deducted from the higher claimant (who is assumed to have lied) and given to the lower claimant (as a reward for their presumed honesty). Formally, the TD game is a two-person game with the discrete strategy set S = {M + 1, . . . , 2M}, where M is a positive integer. The payoff to an r strategist against an s strategist is defined by

    P(r,s) = r + R,   if r < s,
             r,       if r = s,                                   (1)
             s − R,   if r > s,

where the reward or punishment parameter R is an integer with R ∈ [1,M]. Thus, the payoff P(r,s) may be written as

    P(r,s) = min(r,s) − R sgn(r − s).                             (2)
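To make the payoff structure concrete, Eqs. (1) and (2) can be checked mechanically. The following is a minimal Python sketch (our illustration, not code from the paper; the function name td_payoff and the parameter values are hypothetical), here with M = 100 and R = 5:

```python
def td_payoff(r, s, R):
    """Payoff P(r,s) to an r strategist against an s strategist in the
    TD game, Eq. (1)."""
    if r < s:
        return r + R   # lower claimant: own claim plus the reward R
    if r == s:
        return r       # equal claims: each is paid the common claim
    return s - R       # higher claimant: lower claim minus the penalty R

# Sanity check of the closed form (2): P(r,s) = min(r,s) - R*sgn(r - s),
# over the full strategy set {M+1, ..., 2M} with M = 100 and R = 5.
sgn = lambda x: (x > 0) - (x < 0)
assert all(td_payoff(r, s, 5) == min(r, s) - 5 * sgn(r - s)
           for r in range(101, 201) for s in range(101, 201))
```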


In this game a Nash equilibrium is a pair of strategies such that, if each strategy is known to the other player, then neither has an incentive to change their strategy. Using backward induction, it is easy to see that (M + 1, M + 1) is the unique Nash equilibrium for the TD game [14–16]. We note that this Nash equilibrium is independent of the reward or punishment parameter R. Thus, the Nash concept implies for the TD game the paradoxical outcome in which both individuals play the lowest possible strategies, although both would do better by playing the highest strategy pair (2M, 2M). Perhaps not surprisingly, the Nash equilibrium prediction is not how individuals play this game in experiments [16,18,19]. It has been found in these experiments that when R is high most subjects chose the Nash equilibrium strategy, but when R is low most subjects chose the highest possible claim. Since the unique Nash equilibrium is independent of the parameter R, classical game theory is unable to explain the most striking feature of these experimental results: the effect of the reward or punishment parameter R on average claim levels. The MEC game is rather similar to the TD game in that the payoffs are again determined by the minimum of two actions [16,17,19]. The MEC game is a two-person game with discrete strategy set S = {1, 2, . . . , M}, where M is an integer greater than 1, and the payoff to an r strategist against an s strategist is defined by

    P(r,s) = min(r,s) − κr,                                       (3)

where κ ∈ (0,1) is a cost parameter. The MEC game exhibits the opposite problem from the TD game. Instead of having a single deficient Nash equilibrium, the MEC game exhibits multiple Nash equilibria—it is simple to show that any common strategy is a Nash equilibrium [16,17,19]. These Nash equilibria are all strict and thus trembling hand perfect, and therefore classical game theory provides no criterion for selecting among them. As with the TD game, when the MEC game is actually played with human subjects the observed behavior deviates from the results predicted by game theory [16,17]. In these experiments it has been found that when the effort cost κ is low behavior is concentrated close to the highest effort level, whereas when the effort cost is high the preponderance of effort levels is at the lowest possible value. These results clearly show that the effort levels employed by subjects are inversely related to the effort costs, despite the fact that any common effort level is a Nash equilibrium. It is worth noting that the paradoxical features of the TD and MEC games that emerge from classical game theory are not ameliorated by instead using deterministic evolutionary game theory. Since the unique Nash equilibrium (M + 1, M + 1) in the TD game is strict, it is a globally stable equilibrium point for the replicator dynamics [20]. Hence, the replicator dynamics for the TD game will converge to the minimum strategy M + 1. Similarly, in the MEC game every common strategy is a strict Nash equilibrium and hence a stable equilibrium point for the replicator equations. Thus, the replicator dynamics does not select out any of the Nash equilibria. The paradoxical nature of both games, therefore, is equally apparent when studied using either classical game theory or deterministic evolutionary game theory.
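The multiplicity of equilibria in the MEC game is easy to verify directly. The sketch below (again our illustration, not the authors’ code; the function names are hypothetical) checks that every common effort level e is a strict Nash equilibrium of Eq. (3), since any unilateral deviation strictly lowers the deviator’s payoff:

```python
def mec_payoff(r, s, kappa):
    """Payoff to an r strategist against an s strategist in the MEC game,
    Eq. (3): min(r, s) - kappa * r, with effort cost 0 < kappa < 1."""
    return min(r, s) - kappa * r

def is_strict_nash(e, M, kappa):
    """True if the common strategy pair (e, e) is a strict Nash equilibrium:
    every unilateral deviation r != e earns strictly less than P(e, e)."""
    eq_payoff = mec_payoff(e, e, kappa)
    return all(mec_payoff(r, e, kappa) < eq_payoff
               for r in range(1, M + 1) if r != e)

# Every common effort level is a strict Nash equilibrium (M = 100, kappa = 0.5).
assert all(is_strict_nash(e, M=100, kappa=0.5) for e in range(1, 101))
```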

The TD game and the MEC game are important both theoretically and in practice. The TD game is theoretically important because it so clearly reveals an apparently paradoxical aspect of game theory: the inability of the Nash equilibrium to predict the behavior of individuals interacting in this type of situation. In addition, it has been observed in Ref. [21] that the TD models competitive egg ejection in the greater ani, a species of communally nesting birds [22,23]. In this species, if two females share a nest then each female chooses a time to change from ejecting eggs from the nest to laying eggs. If both select an early time (which corresponds to a large claim), then both obtain a large payoff since they can both successfully lay many eggs. If, however, one chooses to wait, then she can eject the other bird’s already laid eggs and obtain an even greater payoff while at the same time inflicting a loss on the other bird [21]. This situation, therefore, has the structure of the TD game. The MEC game is theoretically important because it starkly illustrates the lack of prescriptive or predictive power inherent in the Nash equilibrium concept. The MEC game also has considerable practical significance since many interesting and important real-world situations can be modeled by such games [24]. The MEC game often arises naturally in economic situations in a way that is illustrated by the following parable. Consider two companies A and B, each of which manufactures a critical component of a jointly produced product. Suppose that company A makes widgets and company B makes grommets. The final product, which contains both a widget and a grommet, is sold jointly, with the revenues being split equally between the two companies. Each company can choose the amount of effort it expends in producing its component, with higher effort levels resulting in components of higher quality. We will assume here that the performance of the final product, and thus the revenues obtained from sales of the product, is limited by whichever of the two components has the lower quality. Thus, the profits obtained by a given company from sales of the product may be represented as the minimum of the efforts expended by the two companies to produce widgets or grommets, respectively, minus the cost associated with the company’s own effort. Hence, such a situation can be modeled by a MEC game. It is also interesting to note that the MEC game has a potentially important application to evolutionary biology. This arises from considering a sexually reproducing pair of organisms. The male and female of the pair can each be viewed as making an investment in producing gametes of a given quality at a certain cost to themselves. However, the quality of the resulting zygote (and hence of their offspring) can plausibly be assumed to be limited by the gamete of lower quality. Hence, the fitness of each individual depends on the minimum of each individual’s investment minus their own investment cost and thus has the form of the MEC game. A variety of different theoretical approaches have been studied as possible explanations of the behavior found empirically in the TD and MEC games. Stochastic learning models [16,19,25] have been proposed to explain the anomalous behavior observed in the TD and MEC games. Stochastic evolutionary dynamics in finite populations [21] have also been investigated as a means of resolving the paradoxical


features of the TD game. In Ref. [26], the authors propose an alternative resolution of the difficulties associated with the TD and MEC games by formulating them as smoothed continuous-strategy games. The role of spatial structure in accounting for the discrepancy between the Nash equilibrium prediction and the observed behavior in the TD game has been considered in Ref. [27]. Other theoretical approaches to explaining the behavior of the TD game have been studied in Refs. [28–30]. In this paper we extend the approach of Ref. [27] and study the evolutionary dynamics of both the TD and the MEC games on complex networks. We introduce several new elements in our study, namely: (1) We introduce a unitary measure of the total degree of cooperation in the TD and MEC games defined on arbitrary networks; (2) using this measure we compare the total levels of cooperation which evolve in the TD and MEC games on a variety of complex network topologies; (3) we investigate the effect on the evolution of cooperation in the TD and MEC games of the network properties of clustering and assortativity; and (4) we include the effect of mutations in the evolutionary dynamics on the evolution of cooperation in the TD and MEC games. In general, we show that formulating the TD and MEC games on complex networks leads to a straightforward and interesting resolution of the problematic features of the games detailed above.

II. AGENT-BASED MODEL

In this section, we define a stochastic agent-based model which allows the evolutionary dynamics of the TD and MEC games to be studied for complex interaction patterns among the members of a population. The evolutionary dynamics of simple social dilemmas, such as the prisoner’s dilemma and the snowdrift game, have been well studied for populations with a variety of complex interaction patterns [31–40]. Let us consider a population consisting of n individuals, labeled i = 1, . . . , n. In order to allow the possibility of complex population interactions, we identify the population with the set of vertices in a network G. The structure of G determines which individuals in the population interact. To be precise, two networks are required to specify the evolutionary dynamics—an interaction network, which specifies whether two individuals in the population can interact by playing the game, and an updating network, which specifies that an individual in the population can update its strategy by comparing its state only to the states of those individuals adjacent to it. Here, for simplicity, we will assume that the interaction and updating networks are the same and denote them both by G. The set of neighbors of i ∈ G (i.e., the set of individuals adjacent to i in G) will be denoted by N(i). The agent-based model is defined as a stochastic process on the network G (similar approaches to defining the evolutionary dynamics of games on networks have been considered for discrete games in Refs. [39,41] and for continuous games in Ref. [26]). We will fix either the TD or the MEC game as the game under consideration. We will begin with a population in which each individual starts out with an initial strategy picked uniformly at random from the strategy set S. We will also assume that initially each individual in the population has a payoff equal to zero. Each iteration of the

evolutionary dynamics consists of a round of asynchronous interactions followed by a round of asynchronous updates. Each of these rounds involves sampling the population n times with replacement. First, in an interaction step, we pick at random an individual i ∈ G from the population and a random individual j ∈ N(i) that is a neighbor of i. We then determine the payoff that player i receives from interacting with player j. That is, if r and s denote the strategies of i and j, respectively, then the payoff Pi received by the focal individual i is given by either Eq. (2) or (3), depending on the game under consideration. This procedure is repeated n times to complete the interaction stage of the dynamical process. Since individuals are picked from the population at random, it is possible that an individual is not chosen during a given interaction stage. In this case, the individual retains the payoff that it had in the previous interaction stage. Similarly, an individual may be picked more than once during a given interaction stage. In this case, the individual retains the payoff that it obtained from its last interaction during that interaction stage. Second, in an update step, we again pick at random an individual i ∈ G from the population and a random individual j ∈ N(i) that is adjacent to i. If Pi and Pj denote the payoffs (as determined in the interaction stage described above) of i and j, respectively, then the probability pi←j that the focal individual i will inherit j’s strategy is given by

    pi←j = 0,                           if Pi ≥ Pj,
           (Pj − Pi)/(Pmax − Pmin),     otherwise,                (4)

where Pmax = max_{k∈G} Pk and Pmin = min_{k∈G} Pk. This update rule is often referred to as the replicator rule. This update procedure is also repeated n times to complete the update stage of the dynamics. Again it is possible that an individual may not be chosen during a given update stage, in which case it retains the strategy it had in the previous update stage. If an individual is picked more than once during a given update stage, then the individual retains the strategy that it obtained from its last update during that update stage. Mutations are incorporated in the update procedure as follows: When, according to the update rule (4), i’s strategy would be replaced by j’s, then with probability μ, i’s strategy is instead replaced by a strategy picked uniformly at random from S. Carrying out n interaction steps followed by n update steps constitutes a single generation of the evolutionary dynamics.

We note that the results of our agent-based simulations (described in the next section) are robust to changes in the update rule. For example, in addition to employing the replicator update rule (4), we have also simulated the agent-based model using the Fermi update rule, in which the probability pi←j that the focal individual i ∈ G inherits individual j’s strategy [with j ∈ N(i)] is given by

    pi←j = 1 / (1 + e^{−β(Pj − Pi)}),                             (5)

where the parameter β > 0 is the “selection strength” of the update rule. We find that the evolutionary dynamics of the TD and MEC games is essentially identical irrespective of


which of these update rules we employ. The results presented in the next section on the evolutionary dynamics of the TD and MEC games arise from simulations using the replicator update rule (4).
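As a concrete illustration of the dynamics just described, the following Python sketch implements one generation: n interaction steps followed by n update steps with the replicator rule (4) and mutation. It is a schematic reading of the model, not the authors’ code; the data structures (an adjacency-list mapping and parallel strategy and payoff lists) are our assumptions:

```python
import random

def one_generation(neighbors, strategies, payoffs, payoff_fn, mu, S):
    """One generation: n asynchronous interaction steps followed by n
    asynchronous update steps using the replicator rule (4) with
    mutation rate mu.  neighbors[i] is the adjacency list of vertex i,
    and S is the discrete strategy set of the game."""
    n = len(strategies)
    # Interaction stage: a sampled focal individual i plays one randomly
    # chosen neighbor j and records the payoff it receives from that game.
    for _ in range(n):
        i = random.randrange(n)
        j = random.choice(neighbors[i])
        payoffs[i] = payoff_fn(strategies[i], strategies[j])
    # Update stage: focal individual i imitates a random neighbor j with
    # the probability given by Eq. (4); an accepted imitation is replaced,
    # with probability mu, by a uniformly random strategy (mutation).
    p_max, p_min = max(payoffs), min(payoffs)
    for _ in range(n):
        i = random.randrange(n)
        j = random.choice(neighbors[i])
        if payoffs[j] > payoffs[i] and p_max > p_min:
            if random.random() < (payoffs[j] - payoffs[i]) / (p_max - p_min):
                if random.random() < mu:
                    strategies[i] = random.choice(S)   # mutation
                else:
                    strategies[i] = strategies[j]      # imitation
```

Iterating this function T = 20 000 times, with strategies initialized uniformly at random from S and payoffs initialized to zero, reproduces the protocol described above.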

III. SIMULATION RESULTS

In this section we present the results of agent-based simulations for the TD and the MEC games. For both games the agent-based model described in the previous section was simulated [using the replicator update rule (4)] for M = 100 on the following model networks (see, for example, Refs. [42–45]): a complete network (which models a randomly interacting population); random regular networks with degrees k = 4 and k = 8; scale-free networks (constructed using the Barabási-Albert preferential attachment scheme [43]) with mean degrees k = 4 and k = 8; networks with exponential degree distribution (constructed using the growing random network model [46]) with mean degrees k = 4 and k = 8; scale-free networks with mean degrees k = 4 and k = 8 and different clustering coefficients C and assortativity coefficients r; and two-dimensional lattice networks with k = 4 and k = 8 neighbors and periodic boundary conditions. The network size was n = 10 000 for all these model networks. In addition, both the TD and the MEC games were simulated on the following empirical network: the network of coauthorships between scientists who posted preprints on the High-Energy Theory E-Print Archive between January 1, 1995 and December 31, 1999 [47], having n = 5835 vertices, mean degree k = 4.74, clustering coefficient C = 0.51, and assortativity r = 0.19. These simulations were also repeated on a random network having the same degree sequence as the coauthorship network but with zero clustering and assortativity coefficients (constructed using the configuration model [45]). The parameter values used for the simulations were as follows: number of generations T = 20 000 and mutation rate μ = 0.05. Each data point shown in the plots was obtained by averaging over ten separate runs of the model.

A. Traveler’s dilemma game

Let xt(R) denote the average claim level in the population at generation t in the TD game with reward or punishment parameter R. The long-term average x̄∞(R) of xt(R) is defined by

    x̄∞(R) = lim_{T→∞} (1/T) Σ_{t=1}^{T} xt(R).                    (6)

Figure 1 shows the variation in x̄∞(R) with the reward or punishment parameter R on the following networks: (a) a complete network, (b) a random regular network with degree k = 4, (c) degree k = 8, (d) an exponential network with mean degree k = 4, (e) mean degree k = 8, (f) a scale-free network with mean degree k = 4, (g) mean degree k = 8, (h) a two-dimensional lattice network with k = 4 neighbors, and (i) k = 8 neighbors.

It is very useful to have a unitary measure of the total degree of cooperation in the TD game on a given network. Since the payoff to a pair of r strategists in the TD game, P(r,r) = r, increases with r, we may consider higher claim levels in the TD game to represent more cooperative strategies. We will refer to the unitary measure of cooperation that we introduce here as the cooperation index Δ or Δ index. The Δ index for the TD game is defined by

    Δ = (1/M²) Σ_{R=1}^{M} [x̄∞(R) − M].                           (7)

The cooperation index Δ ∈ [0,1]: Δ = 1 represents complete cooperation, whereas Δ = 0 represents total lack of cooperation. The larger the value of Δ, the more cooperative the system. Thus, the Δ index serves as a quantitative measure for comparing the totality of the long-term average investment levels across the different networks shown in Fig. 1. The inset in each plot in Fig. 1 indicates the corresponding Δ index. The Δ indices for the TD game on these networks are also summarized in Table I. Figure 1 reveals a number of interesting results. The results on a complete network [Fig. 1(a)], which represents a randomly interacting population, are in good agreement with the standard theoretical predictions and show very low levels of cooperation overall (Δ = 0.019). However, it should be noted that cooperation is not completely absent in this case. We believe that the low levels of cooperation found in this case are a result of stochastic evolutionary dynamics, as shown in Ref. [21] (see also Ref. [48] for a general discussion of stochastic evolutionary dynamics). The results shown in Figs. 1(b)–1(i) indicate that for networks other than the complete network significant levels of cooperation evolve. Two general conclusions can be drawn from these results. First, in all cases the level of cooperation on a network (as measured by the Δ index) decreases as the average degree of the network increases. This is reasonable since the evolutionary dynamics on networks of increasing average degree will more and more closely approximate the evolutionary dynamics on the complete network. Second, comparing the results for different networks of the same average degree shows that there is a clear trend in which cooperation is higher on more homogeneous networks and lower on more heterogeneous networks. That is, cooperation is highest on lattice networks and random regular networks, where all vertices have the same degree, it is next highest on networks with exponential degree distributions, and it is lowest on scale-free networks (whose degree distributions have the greatest variance). The explanation of the association of higher levels of cooperation with greater network homogeneity may lie with the fundamental mechanism by which cooperation evolves in a structured population: namely, that clusters of cooperators can form that endow cooperators with an increased fitness that compensates for the higher fitness that defectors achieve when interacting with cooperators [9,49]. We conjecture that in the TD game clusters of cooperators are able to form more easily in homogeneous networks and less easily in heterogeneous ones. The results that we find for the evolution of cooperation on networks of different degrees of homogeneity are then a natural consequence of this conjecture. We note, however, that it is not clear why different degrees of homogeneity should affect the formation of cooperative clusters, and the elucidation of this issue seems to be an interesting subject for future research.
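In practice x̄∞(R) is estimated from a finite run (here the last 20% of T = 20 000 generations, as in Fig. 1), and the Δ index is then a direct sum over R. A minimal sketch, assuming the simulation output is stored as a per-R time series of population-average claims (the data layout and function name are our assumptions):

```python
def delta_index_td(claim_series, M):
    """Cooperation index for the TD game, Eq. (7).  claim_series[R] is
    the time series of population-average claims x_t(R) for each
    R = 1, ..., M; the long-term average x̄∞(R) is estimated from the
    last 20% of each run."""
    total = 0.0
    for R in range(1, M + 1):
        series = claim_series[R]
        tail = series[int(0.8 * len(series)):]   # last 20% of generations
        total += sum(tail) / len(tail) - M       # x̄∞(R) − M lies in [1, M]
    return total / M**2                          # Δ lies in (0, 1]
```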



FIG. 1. Long-term average claims x̄∞(R) in the TD game (calculated over the last 20% of 20 000 generations) versus the reward or punishment parameter R, on (a) a complete network, (b) a random regular network with degree k = 4, (c) degree k = 8, (d) an exponential network with mean degree k = 4, (e) mean degree k = 8, (f) a scale-free network with mean degree k = 4, (g) mean degree k = 8, (h) a two-dimensional lattice network with k = 4 neighbors, and (i) k = 8 neighbors. Parameter values: n = 10 000 and μ = 0.05. Each data point shown in the plots was obtained by averaging over ten separate runs of the model.

A common property of many real-world networks is that they have a nonzero clustering coefficient. The clustering coefficient of a network G measures the average probability that two neighbors of a vertex are themselves adjacent. The local clustering coefficient Ci of a vertex i ∈ G is defined to be [42,45]

    Ci = (number of pairs of neighbors of i that are adjacent) / (number of pairs of neighbors of i).

The global clustering coefficient C ∈ [0,1] for the whole network is then defined as the mean of the local clustering coefficients Ci [42,45],

    C = (1/n) Σ_{i=1}^{n} Ci.

A network with C = 1 has maximal clustering, whereas one with C = 0 has no clustering at all.

Another common property of many real-world networks is that they possess some amount of assortativity or disassortativity [50]. Assortative networks have the tendency that high degree vertices are connected to other high degree vertices and low degree vertices are connected to other low degree ones. In contrast, in disassortative networks, high degree vertices tend to be connected to low degree vertices and vice versa. Social networks are often assortative, whereas biological and technological networks are usually disassortative [50].

TABLE I. Δ-index values for the TD game on the model networks studied.

Network           k = 4    k = 8
Scale-free        0.151    0.097
Exponential       0.166    0.104
Random regular    0.186    0.104
Lattice           0.201    0.184
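These structural properties are straightforward to measure and to tune in model networks. As an illustration (our sketch; the paper does not specify its software, and the parameter values below are purely illustrative), the networkx library provides the relevant routines, including the Holme-Kim construction [51] for clustered scale-free networks:

```python
import networkx as nx

# Barabási-Albert scale-free network [43] with n = 10 000 vertices and
# mean degree k ≈ 2m = 4.
G = nx.barabasi_albert_graph(n=10_000, m=2)

# Global clustering coefficient C: the mean of the local coefficients Ci.
C = nx.average_clustering(G)

# Degree assortativity coefficient r ∈ [−1, 1] [50].
r = nx.degree_assortativity_coefficient(G)

# Holme-Kim model [51]: scale-free networks with tunable clustering; p is
# the probability of a triad-formation step after each preferential
# attachment, so larger p yields larger C.
H = nx.powerlaw_cluster_graph(n=10_000, m=2, p=0.5)
```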


FIG. 2. Values of the cooperation index Δ in the TD game on scale-free networks with (a) clustering coefficient C = 0.0, 0.1, 0.2, 0.3, 0.4, 0.5 and on scale-free networks with (b) assortativity coefficient r = −0.1, 0.0, 0.1, 0.2. Parameter values: n = 10 000 and μ = 0.05. Each data point in the figure represents the mean value of Δ obtained from 20 different runs of the model, and the error bars (which are approximately the same size as the point markers) show the variance in Δ.

The assortativity (or disassortativity) of a network G can be quantified by the coefficient of assortativity r ∈ [−1,1], defined by [50]

    r = [Σ_{i,j=1}^{n} (Aij − ki kj /2m) ki kj] / [Σ_{i,j=1}^{n} (ki δij − ki kj /2m) ki kj],

where A is the adjacency matrix of G, ki is the degree of vertex i ∈ G, δij is the Kronecker δ, and m is the number of edges in the network. Networks with r > 0 are assortative, whereas those with r < 0 are disassortative. Networks with r = 0 are neither assortative nor disassortative. Figure 2 shows how the cooperation index Δ varies on scale-free networks with mean degrees k = 4 and k = 8 and (a) varying clustering coefficients C (generated using the Holme-Kim model [51]) and (b) assortativity coefficients r (generated by applying the rewiring algorithm of Ref. [52] to a Barabási-Albert scale-free network). Each data point in the figure represents the mean value of Δ obtained from 20 different runs of the model, and the error bars (which are approximately the same size as the point markers) show the variance in Δ. Two potentially important general conclusions concerning the TD game on complex networks are apparent from the results shown in Fig. 2. First, there is a clear trend in which the level of cooperation on a network increases as the clustering coefficient increases. Second, there is a similar clear trend for cooperation on a network to increase as the assortativity coefficient increases. We believe that both these phenomena are consequences of an increase in the ease with which clusters of cooperators can form in networks with either higher clustering or assortativity. Networks with positive clustering exhibit densely connected local sets of vertices, which it might plausibly be assumed would be conducive to the formation of cooperative clusters. Similarly, networks with positive assortativity have a core of highly connected vertices, which may possibly increase the ease with which cooperative clusters can form. Obtaining a detailed understanding of the effect of clustering and assortativity on the evolution of cooperation in the TD game seems to be an interestingly nontrivial subject for future research.

If the TD game is formulated on a network that possesses both positive clustering and positive assortativity, as is the case for many social networks, then this would appear to significantly augment the level of cooperation that will evolve. Such a situation occurs for the following empirical network. Figures 3(a) and 3(b) show the variation in x̄∞(R) with the reward or punishment parameter R on the coauthorship network (C = 0.51 and r = 0.19) and, for comparison, on a random network with the same degree sequence as the coauthorship network but with no clustering or assortativity. Figure 3(c) shows the number of individuals making a certain claim when the TD game is played on the coauthorship network with reward or punishment parameters R = 1 and R = 99. We observe that the difference in the level of cooperation shown in Fig. 3(a) (Δ = 0.175) versus Fig. 3(b) (Δ = 0.128) nicely illustrates the enhancing effect that increased clustering and assortativity have on the evolution of cooperation in the TD game on complex networks. We also note that the results shown in Fig. 3(c) are in good agreement with those obtained experimentally [16,18,19].

FIG. 3. Long-term average claims x̄∞(R) in the TD game (calculated over the last 20% of 20 000 generations) versus the reward or punishment parameter R on (a) the coauthorship network and on (b) a random network with the same degree sequence as the coauthorship network but with no clustering or assortativity. (c) Number of individuals making a certain claim during the last generation in the TD game on the coauthorship network. Parameter values: n = 5835 and μ = 0.05. Each data point shown in the plots was obtained by averaging over ten separate runs of the model.

B. Minimum-effort coordination game

Let xt(κ) denote the average effort level in the population at generation t in the MEC game with effort cost parameter κ ∈ (0,1). The long-term average x̄∞(κ) of xt(κ) is again defined by

    x̄∞(κ) = lim_{T→∞} (1/T) Σ_{t=1}^{T} xt(κ).                    (8)

Figure 4 shows the variation in x̄∞(κ) with the effort cost parameter κ on the following networks: (a) a complete network, (b) a random regular network with degree k = 4, (c) degree k = 8, (d) an exponential network with mean degree k = 4, (e) mean degree k = 8, (f) a scale-free network with mean degree k = 4, (g) mean degree k = 8, (h) a two-dimensional lattice network with k = 4 neighbors, and (i) k = 8 neighbors.

We now define the Δ index for the MEC game by

    Δ = (1/M) ∫₀¹ x̄∞(κ) dκ.                                       (9)
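Since κ is continuous, the integral in Eq. (9) must be approximated from simulations on a finite grid of cost values. A minimal sketch using the trapezoidal rule (our illustration; the grid of κ values actually sampled in the paper is an assumption):

```python
import numpy as np

def delta_index_mec(kappas, x_bar, M):
    """Cooperation index for the MEC game, Eq. (9): (1/M) times the
    integral of x̄∞(κ) over κ ∈ (0, 1), approximated by the trapezoidal
    rule on the sampled grid `kappas` with estimates `x_bar`."""
    return np.trapz(x_bar, kappas) / M

# Example: effort pinned at the maximum M for every κ gives Δ close to 1.
kappas = np.linspace(0.01, 0.99, 50)
print(delta_index_mec(kappas, np.full_like(kappas, 100.0), M=100))  # ≈ 0.98
```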


FIG. 4. Long-term average effort levels x̄∞(κ) in the MEC game (calculated over the last 20% of 20 000 generations) versus the effort cost parameter κ on (a) a complete network, (b) a random regular network with degree k = 4, (c) degree k = 8, (d) an exponential network with mean degree k = 4, (e) mean degree k = 8, (f) a scale-free network with mean degree k = 4, (g) mean degree k = 8, (h) a two-dimensional lattice network with k = 4 neighbors, and (i) k = 8 neighbors. Parameter values: n = 10 000 and μ = 0.05. Each data point shown in the plots was obtained by averaging over ten separate runs of the model.


TABLE II. Δ-index values for the MEC game on the model networks studied.

Network           k = 4    k = 8
Scale-free        0.534    0.532
Exponential       0.535    0.533
Random regular    0.538    0.533
Lattice           0.588    0.599

Since the payoff to a pair of r strategists in the MEC game, P(r,r) = (1 − κ)r, increases with r, we may consider higher effort levels in the MEC game to represent more cooperative strategies. Again the Δ index Δ ∈ [0,1] serves as a quantitative measure for comparing the total levels of cooperation across the different networks shown in Fig. 4. The inset in each plot in Fig. 4 indicates the corresponding Δ index. In addition, the Δ indices for the MEC game on these networks are summarized in Table II. The most immediately striking aspect of the results for the MEC game shown in Fig. 4 is that, even in the case of a complete network [Fig. 4(a)], the evolutionary dynamics revealed here is quite different from that predicted by classical game theory or by deterministic evolutionary game theory as embodied in the replicator dynamics. We find here, on a complete network (representing a randomly interacting population), exactly the pattern of high effort levels being expended at low effort costs and low effort levels being expended at high effort costs that is found in experimental studies of the MEC game [16,17]. We conjecture that the difference between the results we have found here for well-mixed populations and those predicted by the deterministic replicator dynamics is due to stochastic evolutionary dynamics (cf. Ref. [21]). The results for the evolutionary dynamics of the MEC game on networks other than the complete network shown in Figs. 4(b)–4(i) are also of considerable interest. We now find, in complete contrast to the TD game, that the overall level of cooperation in the MEC game on networks is barely affected by the network topology. Indeed, the overall level of cooperation that evolves in the MEC game (as measured by the Δ indices) is almost the same on all the complex network topologies that we have studied as it is on the complete network. It should be noted that there is still a slight tendency for higher levels of cooperation to evolve on more homogeneous network topologies and lower levels of cooperation to evolve on more heterogeneous network topologies, but the trend hardly seems to be significant. Although we have found the evolutionary dynamics of the MEC game on all the networks we have studied (including the complete network) to be in good agreement with the results obtained in experimental studies of the game [16,17], and the total levels of cooperation to be little affected by the underlying network topology, it is nevertheless important to observe that Fig. 4 reveals an interesting effect of the network structure on the evolutionary dynamics of the MEC game—the sharpness of the transition between high and low effort levels as the effort cost increases. We see from Fig. 4 that, first, the transition from high effort levels to low effort levels as the effort cost varies is sharper on networks of lower average degree than on those of higher average degree; and second, comparing the results for different networks of the same average degree shows that the transition is sharpest on more homogeneous networks and less sharp on more heterogeneous networks. We have also studied the evolutionary dynamics of the MEC game on networks with positive clustering and assortativity. Figure 5(a) shows how the cooperation index Δ varies on scale-free networks with mean degrees k = 4 and k = 8 and different clustering coefficients C [42]. Figure 5(b) shows how Δ varies on scale-free networks with mean degrees k = 4 and k = 8 and different assortativity coefficients r [45]. Each data point in the figure represents the mean value of Δ obtained from 20 different runs of the model, and the error bars (which are approximately the same size as the point markers) show the variance in Δ. The contrast with the corresponding results found for the TD game is clear—we now find no significant effect of either increased clustering or assortativity


FIG. 5. Variation in the cooperation index Δ in the MEC game on scale-free networks with (a) clustering coefficient C = 0.0, 0.1, 0.2, 0.3, 0.4, 0.5 and (b) scale-free networks with assortativity coefficient r = −0.1, 0.0, 0.1, 0.2. Parameter values: n = 10 000 and μ = 0.05. Each data point in the figure represents the mean value of Δ obtained from 20 different runs of the model, and the error bars (which are approximately the same size as the point markers) show the variance in Δ.


FIG. 6. Long-term average effort levels x¯∞ (κ) in the MEC game (calculated over the last 20% of 20 000 generations) versus the effort cost parameter κ on (a) the coauthorship network and (b) a random network with the same degree sequence as the coauthorship network but without clustering or assortativity. (c) Number of individuals making a certain effort during the last generation in the MEC game on the coauthorship network. Parameter values: n = 5835 and μ = 0.05. Each data point shown in the plots was obtained by averaging over ten separate runs of the model.

on the total level of cooperation that evolves in the MEC game on scale-free networks. It is also interesting to study the evolutionary dynamics of the MEC game on an empirical network. Figures 6(a) and 6(b) show the long-term average effort levels x̄∞(κ) in the MEC game versus the effort cost parameter κ on the coauthorship network and on a random network with the same degree sequence as the coauthorship network but without any clustering or assortativity. Figure 6(c) shows the number of individuals making a certain effort when the MEC game is played on the coauthorship network with cost parameters κ = 0.1 and κ = 0.9. The results for the evolutionary dynamics of the MEC game on these networks support the results we have found for the MEC game on model networks. We see that the difference in the level of cooperation shown in Fig. 6(a) (Δ = 0.548) versus Fig. 6(b) (Δ = 0.532) is small, which is consistent with our earlier finding that increased clustering and assortativity have only a marginal effect on the evolution of cooperation in the MEC game on complex networks. We also note that the results shown in Fig. 6(c) are in good agreement with the experimentally observed behavior [16,17].

IV. DISCUSSION

In this paper we have investigated a simple and natural extension of the classical TD and MEC games in which the games are defined on an arbitrary complex network. We have studied the evolutionary dynamics of both games on a variety of network topologies using agent-based simulations. The results of the agent-based simulations are in good general agreement with the behavior observed in empirical studies of the TD and MEC games. For the TD game on both model and empirical networks, we find from our agent-based simulations that claims vary with the reward or punishment parameter R in a fashion that is in excellent agreement with the experimentally observed behavior: Low values of R result in high claims, and high values of R result in low claims. Moreover, by considering the case in which the network in question is the complete network, we recover the standard game theory result that in

a randomly interacting population claims in the TD game remain low for all values of R. Different interaction patterns among the individuals playing the TD game, as represented by networks of different topologies, have interesting effects on the evolutionary dynamics of the game. One of these effects is that higher levels of cooperation evolve on relatively homogeneous networks and lower levels evolve on more heterogeneous networks. Another interesting effect of the network topology is that the level of cooperation that evolves in the TD game increases with increasing network clustering or assortativity. This is potentially important since many empirical social networks exhibit significant degrees of both clustering and assortativity. We have shown here that high levels of cooperation evolve in the TD game on one such empirical network, and we expect this phenomenon to prevail in some generality. We find interesting and surprisingly different results for the MEC game on networks. On all network topologies we have studied (including complete networks), agent-based simulations yield results in good agreement with the behavior observed in experiments: High effort levels are found for low effort cost κ, and low effort levels occur for high effort cost. We now find, however, that when the MEC game is formulated on networks of differing topology, the topological type has no significant effect on the level of cooperation that evolves in the game. Similarly, we find that the degree of network clustering or assortativity has no significant effect on the level of cooperation that evolves in the MEC game. For the MEC game defined on a network, the effect of the network topology on the evolutionary dynamics of the game appears through the sharpness of the transition from high effort levels to low effort levels as the effort cost increases: It is sharper on rather homogeneous networks and less sharp on more heterogeneous networks. In conclusion, therefore, we have shown that the evolutionary dynamics of the TD and MEC games on complex networks provides a natural theoretical explanation of the behavior observed experimentally in these games. In addition, the network topology results in interesting and unexpected consequences for the evolutionary dynamics of these games.


[1] R. Axelrod, The Evolution of Cooperation (Basic Books, New York, 2006).
[2] G. Hardin, Science 162, 1243 (1968).
[3] P. Kollock, Annu. Rev. Sociol. 24, 183 (1998).
[4] R. Axelrod and W. D. Hamilton, Science 211, 1390 (1981).
[5] J. Maynard Smith, Evolution and the Theory of Games (Cambridge University Press, Cambridge, UK, 1982).
[6] R. Sugden, The Economics of Rights, Cooperation and Welfare (Blackwell, Oxford, 1986).
[7] B. Skyrms, The Stag Hunt and the Evolution of Social Structure (Cambridge University Press, Cambridge, UK, 2003).
[8] E. Fehr and S. Gächter, Nature (London) 415, 137 (2002).
[9] T. Killingback, M. Doebeli, and N. Knowlton, Proc. R. Soc. London, Ser. B 266, 1723 (1999).
[10] M. Doebeli, C. Hauert, and T. Killingback, Science 306, 859 (2004).
[11] T. Killingback, M. Doebeli, and C. Hauert, Biol. Theory 5, 3 (2010).
[12] T. Killingback, J. Bieri, and T. Flatt, Proc. R. Soc. London, Ser. B 273, 1477 (2006).
[13] C. A. Holt and A. E. Roth, Proc. Natl. Acad. Sci. USA 101, 3999 (2004).
[14] K. Basu, Am. Econ. Rev. 84, 391 (1994).
[15] K. Basu, Sci. Am. 296, 90 (2007).
[16] J. Goeree and C. Holt, Am. Econ. Rev. 91, 1402 (2001).
[17] J. Van Huyck, R. Battalio, and R. Beil, Am. Econ. Rev. 80, 234 (1990).
[18] C. Capra, J. Goeree, R. Gomez, and C. Holt, Am. Econ. Rev. 89, 678 (1999).
[19] J. K. Goeree and C. A. Holt, Proc. Natl. Acad. Sci. USA 96, 10564 (1999).
[20] J. Hofbauer and K. Sigmund, Evolutionary Games and Population Dynamics (Cambridge University Press, Cambridge, UK, 1998).
[21] M. L. Manapat, D. G. Rand, C. Pawlowitsch, and M. A. Nowak, J. Theor. Biol. 303, 119 (2012).
[22] C. Riehl, Proc. R. Soc. London, Ser. B 278, 1728 (2011).
[23] C. Riehl and L. Jara, Wilson J. Ornithol. 121, 679 (2009).
[24] J. Bryant, Q. J. Econ. 98, 525 (1983).
[25] S. Anderson, J. Goeree, and C. Holt, Games Econ. Behav. 34, 177 (2001).
[26] S. Iyer, J. Reyes, and T. Killingback, PLoS ONE 9, e93988 (2014).
[27] R. Li, J. Yu, and J. Lin, PLoS ONE 8, e58597 (2013).
[28] J. Halpern and R. Pass, Games Econ. Behav. 74, 184 (2012).
[29] V. Capraro, PLoS ONE 8, e72427 (2013).
[30] V. Capraro, M. Venanzi, M. Polukarov, and N. Jennings, in Proceedings of the 6th International Symposium on Algorithmic Game Theory, edited by B. Vöcking, Lecture Notes in Computer Science Vol. 8146 (Springer, Berlin, 2013), p. 146.
[31] M. A. Nowak and R. M. May, Nature (London) 359, 826 (1992).
[32] T. Killingback and M. Doebeli, Proc. R. Soc. London, Ser. B 263, 1135 (1996).
[33] M. Nakamaru, H. Matsuda, and Y. Iwasa, J. Theor. Biol. 184, 65 (1997).
[34] T. Killingback and M. Doebeli, J. Theor. Biol. 191, 335 (1998).
[35] M. Van Baalen and D. A. Rand, J. Theor. Biol. 193, 631 (1998).
[36] M. Ifti, T. Killingback, and M. Doebeli, J. Theor. Biol. 231, 97 (2004).
[37] C. Hauert and M. Doebeli, Nature (London) 428, 643 (2004).
[38] F. C. Santos and J. M. Pacheco, Phys. Rev. Lett. 95, 098104 (2005).
[39] H. Ohtsuki, C. Hauert, E. Lieberman, and M. A. Nowak, Nature (London) 441, 502 (2006).
[40] G. Szabó and G. Fáth, Phys. Rep. 446, 97 (2007).
[41] H. Ohtsuki and M. A. Nowak, J. Theor. Biol. 243, 86 (2006).
[42] D. J. Watts and S. H. Strogatz, Nature (London) 393, 440 (1998).
[43] A.-L. Barabási and R. Albert, Science 286, 509 (1999).
[44] R. Albert and A.-L. Barabási, Rev. Mod. Phys. 74, 47 (2002).
[45] M. Newman, Networks: An Introduction (Oxford University Press, Oxford, 2010).
[46] D. S. Callaway, J. E. Hopcroft, J. M. Kleinberg, M. E. J. Newman, and S. H. Strogatz, Phys. Rev. E 64, 041902 (2001).
[47] M. E. J. Newman, Proc. Natl. Acad. Sci. USA 98, 404 (2001).
[48] M. A. Nowak, Evolutionary Dynamics: Exploring the Equations of Life (Belknap Press, Cambridge, MA, 2006).
[49] C. Hauert, Proc. R. Soc. London, Ser. B 268, 761 (2001).
[50] M. E. J. Newman, Phys. Rev. Lett. 89, 208701 (2002).
[51] P. Holme and B. J. Kim, Phys. Rev. E 65, 026107 (2002).
[52] R. Xulvi-Brunet and I. M. Sokolov, Phys. Rev. E 70, 066102 (2004).
