Fish play Minority Game as humans do

We report the results of an unprecedented real Minority Game (MG) played by university staff members, who clicked one of two identical buttons (A and B) on a computer screen while clocking in or out of work. We recorded the number of people who clicked button A for 1288 games, beginning on April 21, 2008 and ending on October 31, 2010, and calculated the variance of the number of people who clicked A as a function of time. The evolution of the variance shows that the global gain of the selfish agents increases when a small portion of agents make persistent choices in the games. We also carried out another experiment in which we forced 101 fish to enter one of two symmetric chambers (A and B). We repeated the fish experiment 500 times and found that the variance of the number of fish that entered chamber A evolved in a way similar to the human MG, suggesting that fish have memory and can employ more strategies when facing the same situation again and again.

The Minority Game (MG) is a simple evolutionary game designed to study, among other things, how the actions of selfish players can be coordinated, as if by an invisible hand, toward global benefit. In the game, N players independently choose one of two sides (A or B), and those on the minority side win.
In a theoretical analysis, one assumes that agents have a fixed number of strategies and a fixed length of memory for the winning history. Under these assumptions, many interesting properties have been discovered [13]. One property that has attracted much attention is that the global gain, defined as the portion of agents who have won the game, is maximal when the agents employ an optimal amount of information [2]. The global gain is related in a simple way to the variance of the number of players choosing a particular side (e.g., side A): the smaller the variance, the larger the global gain. Therefore the variance per player, σ²(A)/N, decreases to a minimum as the amount of information available to the agents increases, and then rises back to a value close to 1/4, indicating that when there is too much information all player choices are effectively random (Figure 1).
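A minimal simulation sketch of the standard MG illustrates this qualitative behavior (the parameter values below are for illustration only, not those of the cited references): with too little information, herding produces large fluctuations, while abundant information drives σ²(A)/N toward the random-choice value of 1/4.

```python
import random

def minority_game(n_agents=101, memory=5, n_strategies=2, rounds=2000, seed=0):
    """Simulate a standard MG and return the variance per agent of the
    number of players choosing side A (side 0)."""
    rng = random.Random(seed)
    n_hist = 2 ** memory  # number of distinct winning-history strings
    # Each strategy maps every possible history to a side (0 = A, 1 = B).
    agents = []
    for _ in range(n_agents):
        strategies = [[rng.randrange(2) for _ in range(n_hist)]
                      for _ in range(n_strategies)]
        scores = [0] * n_strategies
        agents.append((strategies, scores))
    history = rng.randrange(n_hist)
    counts = []
    for _ in range(rounds):
        choices = []
        for strategies, scores in agents:
            best = max(range(n_strategies), key=lambda s: scores[s])
            choices.append(strategies[best][history])
        n_a = choices.count(0)
        counts.append(n_a)
        winner = 0 if n_a < n_agents / 2 else 1  # minority side wins
        # Reward every strategy that would have predicted the winning side.
        for strategies, scores in agents:
            for s in range(n_strategies):
                scores[s] += 1 if strategies[s][history] == winner else -1
        history = ((history << 1) | winner) % n_hist  # keep last `memory` bits
    mean = sum(counts) / len(counts)
    var = sum((c - mean) ** 2 for c in counts) / len(counts)
    return var / n_agents
```

For 101 agents, a short memory (high herding) yields a variance per agent well above the value obtained with a long memory, which sits near the random-choice level of 1/4, mirroring the two ends of the curve in Figure 1.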
Obviously, human players of a real MG do not have a fixed memory and most likely do not use binary strategy tables as in the theoretical MG model, so understanding which essential features of the theoretical model survive in a real game could have practical applications. Two reports exist on real MGs played by humans [15, 14], but the numbers of players in these two experiments (15 and 5, respectively) are so small that their conclusions are not statistically significant. In our experiment, played by the staff of National Chung Hsing University (see Methods), a total of 1288 games had been recorded by October 31, 2010. In the first phase, consisting of the first 458 games, the winning buttons of the previous four games were shown on the screen for reference. In the second phase, consisting of the final 880 games, the winning buttons of the previous four games were still shown, but these results had been modified, without the knowledge of the players, in favor of button B by adding a value 0.09N to n(A) [16].
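The phase-2 manipulation can be expressed compactly (a sketch; the function name and the treatment of the 0.09N bias are illustrative assumptions):

```python
def displayed_winner(n_a, n_total, bias=0.09):
    """Winning button shown to the players: button A's count is inflated
    by bias * N before comparing with N/2, so button B is reported as the
    minority (winning) side more often than it really was."""
    n_a_shown = n_a + bias * n_total
    return 'A' if n_a_shown < n_total / 2 else 'B'
```

With `bias=0` the function reduces to the honest minority rule used in phase 1.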

We calculated the variance per player σ²(A)/N and studied its variation as a function of time. Note that the number of agents N_i at time t_i is not constant in our games (Figure 2), since the number of university staff members working on any given day was subject to fluctuation. Thus each n(A) has to be multiplied by a factor √(N̄/N_i), where N̄ is the mean number of agents in the time period considered (see Methods for details). To find the time evolution of σ²(A)/N, we divided the 1288 games into 18 time periods (each marked by a horizontal thick bar in Figure 2) so that each period contains no fewer than 50 games and N_i does not vary too dramatically within most periods. In phase 1, σ²(A)/N drops to a minimum and then rises back to near 1/4. This evolution, compared with the theoretical variance as a function of information in the standard MG (Figure 1), shows that the agents employed more and more information as they gained experience with the game. According to the simulation results shown in Figure 1, σ²(A)/N will saturate at 1/4 when even more information is available to the agents. For our games, since σ²(A)/N had passed the minimum and reached 1/4, we did not expect it to vary significantly if more games were played in the same way. We therefore reactivated the game in phase 2, starting from game #459, by adding 9% of N_i as virtual players who always chose button A. The manipulated results were provided as a reference to the agents, who were not aware of the addition of the virtual players. Apparently, the strategies learned in phase 1 did not work in phase 2, and σ²(A)/N began to increase (Figure 2). It took another 350 games before the agents adapted to the effect of the virtual players, at which point σ²(A)/N decreased again, this time to a lower minimum than in phase 1. A smaller variance means, on average, more winners in each game and thus a larger global gain. The increase in global gain upon adding persistent virtual players is consistent with the prediction of the theoretical MG model [16]. After reaching this minimum, σ²(A)/N rose to near 1/4 and we concluded our human experiment.
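The inverse relation between variance and global gain invoked above can be illustrated with a toy model (a sketch assuming Gaussian fluctuations of n(A) about N/2; this is an illustration, not the authors' analysis): the mean number of winners per game grows as the fluctuations of n(A) shrink.

```python
import random

def mean_winners(n_players, sigma, trials=20000, seed=2):
    """Average number of winners per game when n(A) fluctuates around
    N/2 with standard deviation sigma (Gaussian toy model)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        n_a = round(rng.gauss(n_players / 2, sigma))
        n_a = max(0, min(n_players, n_a))
        total += min(n_a, n_players - n_a)  # winners = size of minority side
    return total / trials

# Smaller fluctuations -> more winners on average -> larger global gain.
low = mean_winners(101, sigma=3)
high = mean_winners(101, sigma=15)
```

Since the minority can hold at most N/2 players, the mean number of winners is bounded by N/2 minus a term proportional to the typical fluctuation, which is why a lower minimum of σ²(A)/N in phase 2 signals a larger global gain.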
We also carried out a separate fish experiment, whose setup is shown in Figure 3. The mosquitofish (Gambusia affinis) is a common fish, about 3 cm long, that is ubiquitous in Taiwan's ponds and rivers. Using a slow-moving mesh, we forced N = 101 mosquitofish in a tank to enter one of two symmetric chambers, A and B, lit from overhead by a lamp to reveal their entrances. The whole system was covered by a large piece of black cloth to isolate the fish from environmental cues. Once all the fish had entered a chamber, the number of fish in each chamber was quickly counted and recorded. We then reduced the water volume in the losing chamber so that the 50-plus fish inside were forced into a crowded cluster for a few seconds. Presumably, fish do not like being forced into direct contact with one another and would learn [10, 12] to enter the other chamber the next time they had to choose. We repeated this fish MG 500 times (a few fish died during the repeated experiments and were replaced by new fish). The variance per fish σ²(A)/N as a function of time is given in the inset of Figure 2. Qualitatively, the evolution of σ²(A)/N is similar to that in the human experiment, and it reaches a minimum even faster than in the human case. However, σ²(A)/N is surprisingly large in the fish experiment, ranging from 0.55 to 1.25. Upon closer investigation, we found that mosquitofish have a tendency to form groups when escaping [17]. Assuming that a group averages 3 fish when entering the chambers, the variance per group is one third of the per-fish value presented in Figure 2: it begins at about 0.4, drops to a minimum of 0.19, and then saturates near 1/4, a pattern very similar to the human case.
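The grouping argument can be checked with a toy calculation (a sketch; independent groups choosing chambers at random, and the group size of 3, are the assumptions stated above): if fish enter in groups of g, the variance per fish is g times the variance per group.

```python
import random

def variance_per_capita(n_units, unit_size, trials=20000, seed=3):
    """Groups of `unit_size` fish each choose chamber A with probability
    1/2.  Returns the variance of the number of FISH in chamber A divided
    by the total number of fish."""
    rng = random.Random(seed)
    total_fish = n_units * unit_size
    counts = [unit_size * sum(rng.random() < 0.5 for _ in range(n_units))
              for _ in range(trials)]
    mean = sum(counts) / trials
    var = sum((c - mean) ** 2 for c in counts) / trials
    return var / total_fish

# Independent individuals give ~1/4 per fish; groups of 3 give ~3/4 per
# fish, i.e. ~1/4 per group again.
v1 = variance_per_capita(99, 1)
v3 = variance_per_capita(33, 3)
```

This is why dividing the measured per-fish values (0.55 to 1.25) by an average group size of 3 recovers a human-like curve.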

Methods
In the human game, we divided the 1288 games into 18 samples such that the number of games L in each sample is no less than 50, while minimizing the variance of N_i within each sample. To calculate the variance of the number of agents choosing button A, we modified the value of n(A): pretending that all games in a given sample were played by N̄ agents, where N̄ is the mean number of agents in the sample, we multiplied n(A) by the factor √(N̄/N_i). Since the standard deviation of a sample is proportional to the square root of the sample size, this rescaling makes the variance per agent comparable between different samples. We checked for possible errors due to the inconstancy of N_i using random samples of length L: taking N̄ = 400 and N_i = 400 + r_i, where r_i is a random integer between -100 and 100, we found the resulting error in the variance per agent to be within about 0.05.

Beginning in April 2008, with the help of the Office of Personnel and the Computer and Information Network Center of National Chung Hsing University (NCHU), we set up a time clock system for university staff to play the MG while clocking in and out of work. All university staff (200 ≤ N ≤ 600) were required to click one of two buttons (A or B) shown on their office workstations to clock in before 08:30 and to clock out after 16:45. Each button click was recorded automatically, with the counts denoted n(A) and n(B), to determine the winning button. The general concept of the MG was explained to all staff, and they were instructed by memo how to use the system. Staff members were incentivized to play the MG seriously by awarding gift certificates, redeemable at local convenience stores, to the top three winners every month. For the first 20 games, we purposely set button A on the left-hand side of the screen and button B on the right-hand side. Button B won all of these games, suggesting that more people tended to click the button on the left-hand side without paying attention to the game. However, the number of winners then oscillated and approached N/2, indicating that more and more people were playing the game seriously. Starting from April 21, 2008, we therefore switched the positions of buttons A and B randomly at every game.

Nature Precedings : hdl:10101/npre.2010.5440.1 : Posted 27 Dec 2010
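The rescaling can be sketched as follows (a toy example with simulated random choices; rescaling the deviation of n(A) from N_i/2, rather than n(A) itself, is our reading of the square-root argument above, since only the fluctuating part scales with √N_i):

```python
import math
import random

rng = random.Random(1)
# Hypothetical per-game data for one sample: (N_i, n(A)_i), with the number
# of agents N_i fluctuating from game to game and choices made at random.
games = []
for _ in range(60):
    n_i = rng.randint(300, 500)
    n_a = sum(rng.random() < 0.5 for _ in range(n_i))  # random clicks on A
    games.append((n_i, n_a))

n_bar = sum(n for n, _ in games) / len(games)  # mean number of agents

# Rescale the fluctuation of each n(A) about N_i/2 by sqrt(n_bar / N_i),
# as if every game had been played by n_bar agents.
scaled = [n_bar / 2 + (n_a - n / 2) * math.sqrt(n_bar / n) for n, n_a in games]

mean = sum(scaled) / len(scaled)
var_per_player = sum((x - mean) ** 2 for x in scaled) / len(scaled) / n_bar
```

For purely random choices the rescaled variance per player should land near the theoretical value of 1/4 despite the fluctuating N_i.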
