Playing with Money∗

Douglas Davis, Virginia Commonwealth University
Oleg Korenok, Virginia Commonwealth University
Peter Norman, University of North Carolina
Bruno Sultanum, FRB Richmond
Randall Wright, Zhejiang University, University of Wisconsin-Madison & FRB Minneapolis

August 15, 2019
Working Paper No. 19-02R

Abstract

Experimental work in monetary economics is usually based on theory that incorporates an infinite horizon. Yet hard constraints on laboratory sessions lead to finite times when the game must (with probability 1) end, and then simple backward induction implies monetary equilibria cannot exist. Hence, these experiments cannot evaluate subjects' ability to settle on the use of money as a medium of exchange that ameliorates trading frictions, as an equilibrium outcome. To address this, we present some finite-horizon games where monetary exchange is an equilibrium outcome, and report some experimental results using these games.

∗ Wright acknowledges support from the Ray Zemon Chair in Liquid Assets at the Wisconsin School of Business. Davis gratefully acknowledges financial support from the National Science Foundation (SES 1426980). We thank Daniela Puzzello and Chris Waller for comments, as well as participants at the Spring 2018 Midwest Macroeconomics Meeting in Madison, Wisconsin; the 2018 Summer Workshop on Money, Banking, Payments and Finance held at the Federal Reserve Bank of St. Louis; the 2018 Search and Matching workshop in Madison; the 2018 Theory and Experiments in Monetary Economics conference at George Mason; the 2019 Asian Meeting of the Econometric Society; the 2019 Annual Meeting of the Society for Economic Dynamics; and the 5th African Macro Conference in Marrakesh 2019. Any opinions or views expressed in this article are not necessarily those of the Federal Reserve Bank of Richmond or the Federal Reserve System. DOI: https://doi.org/10.21144/wp19-02

Playing for absurdly large stakes, gamblers can express their disdain for money as a medium of exchange. Jackson Lears (1995), "Playing with Money," The Wilson Quarterly, 19(4), 7-23.

1 Introduction

There has been much progress over the last thirty years in monetary economics: there are now several internally consistent models in which intertemporal exchange is facilitated by the use of assets as media of exchange. To be clear, by a medium of exchange we mean an object that is accepted in trade not to be used directly, for consumption or production purposes, but to be traded again for something else. For our purposes, money is an asset that is used as a medium of exchange. Although it took a long time to formalize these ideas, it is clear from the historical literature – or common sense – why it might be desirable to use money: it can be easier to trade x for y and then y for z, rather than trade x for z directly. Intuitively, this is because the latter involves a pure-barter transaction, which requires a double coincidence of wants, while the former involves two transactions using y as a medium of exchange, which requires only two single coincidences.1 Microfounded monetary theory studies environments with explicit frictions in the trading process.
Unlike Walrasian models, where consumers simply trade by moving along their budget lines, agents in these models trade with each other. This trading process plays out in real time in the presence of specialization that can give rise to problems associated with double and single coincidences, as well as spatial or temporal separation, limited commitment, and imperfect information. These ingredients hinder direct barter and simple credit arrangements, and as a result, a role for money can emerge endogenously to ameliorate the trading frictions.

Monetary theory as just described is well suited to the methods of experimental economics. One reason is that the agents in these models trade with each other, which is exactly what usually happens in the laboratory. Moreover, one can ask whether monetary exchange emerges as an outcome. Also, whether it does or does not emerge, one can try to use the results to understand why. Additionally, it is straightforward to consider different treatments corresponding to salient features of the theory, in the sense that the important parameters can be changed easily, and it is understood how the different parameters lead to different theoretical outcomes.

There is a substantial literature studying monetary economics in the lab. Early experiments, like Brown (1996), Duffy and Ochs (1999, 2002) or Duffy (2001), use designs closely following the search-based model of commodity money in Kiyotaki and Wright (1989). In other work, Rietz (2017) considers the use of secondary monies (like cryptocurrencies) in the model of fiat money in Kiyotaki and Wright (1993), while Jiang and Zhang (2017) and Ding and Puzzello (2017) study international monies building on Matsuyama et al. (1993). In yet other work, Duffy and Puzzello (2014a,b) study versions of the (in some ways more general) model of fiat money in Lagos and Wright (2005), while Camera and Casari (2014) study an abstract version related to Aliprantis et al. (2007a,b). See Marimon et al. (1993), Marimon and Sunder (1993, 1995), McCabe (1998), Bernasconi and Kirchkamp (2000), Anbarci et al. (2015) and Berentsen et al. (2017), among others, for alternative approaches.

1 The reader may consult the recent surveys by Lagos et al. (2017) and Rocheteau and Nosal (2017), as well as an earlier discussion in Wallace (1980) and other contributions to that volume.

All of these papers study models that in theory depend on an infinite horizon, while in actual experiments, of course, the horizon is finite, because for institutional and legal reasons there is always a hard stop after a few hours. This is problematic because monetary equilibria do not exist in finite versions of these theoretical environments. This is obvious because, if the horizon is finite, there is no incentive to accept fiat money in exchange for anything of value in the ultimate period; then in the penultimate period, every agent understands that money is of no value next period, so it is not accepted then; and so on. By backward induction, money is not accepted in any period. Moreover, and importantly, this result is true even in experiments using random stopping rules, or ones featuring repeated supergames or iterations of randomly terminated games, as in the papers referenced above, because there is always a hard stop.2

To be precise, monetary equilibria do not exist in finite versions of these environments if time is discrete.
As pointed out by Faust (1989), one can model time as continuous and defeat the backward induction argument, because even if there is an ultimate stopping time there is no penultimate time. However, while in principle money can be valued in continuous-time models with a hard stop, in practice this cannot be implemented in the lab, because even with a small ε > 0 time required for subjects to make and process decisions, there is effectively a finite number of periods, and hence backward induction comes back into play. Also, the above discussion assumes agents are rational, which is the usual assumption in the literature. Branch and McGough (2016) consider a setup like Lagos and Wright (2005) with bounded rationality, and it might be interesting to see if a finite-horizon version of such an environment could support monetary equilibria, but here the focus is exclusively on rational agents.

Given this, it is unclear what the experiments discussed are evaluating or testing. It cannot be the hypothesis that agents may settle on outcomes where money is used as a medium of exchange as an equilibrium, because monetary exchange is not an equilibrium in their finite environments. Yet this seems to be what the authors have in mind. Maybe it is the hypothesis that players do not understand backward induction? Or that money is valued due to participants' attention to the static features of stage games? While any of these could be interesting, including the hypothesis that subjects may settle on the use of money when it is not an equilibrium, it seems more interesting to see if they do so when it is an equilibrium. That is, after all, what theory predicts.3

2 There is in fact considerable evidence suggesting that laboratory subjects do not approach games terminated with probabilistic stopping rules in a manner consistent with the discount factor that the rules are intended to mimic – e.g., see Dal Bó and Fréchette (2017) or Fréchette and Yuksel (2017).

Our goal is to reconsider experimental monetary economics in environments where monetary exchange can be an equilibrium even with a finite horizon and to bring those environments to the lab. We formalize this argument in some particular models, but the same logic applies to any standard discrete-time model in the literature. In fact, it is known from Kovenock and De Vries (2002) how to specify a model with a finite horizon where monetary exchange is an equilibrium, and is welfare enhancing, which is not surprising in light of repeated game theory. In that literature the following is understood: if there is a unique stage-game equilibrium, it is critical whether the repeated game has a finite or infinite horizon; but if there are multiple equilibria in the stage game, as shown by Benoit and Krishna (1985), there is a finite-horizon folk theorem similar to the one for the infinite horizon.

Building on these ideas, we first consider a baseline sequential game in which (as in the previous experimental literature) the unique equilibrium outcome is autarky. This game is a finite-horizon sequential trading model with a hard deadline, and the backward induction reasoning that establishes that money is useless is arguably very transparent. Then, we develop two alternative approaches for dealing with endgame effects that make it possible to interpret experimental results in terms of equilibrium outcomes, and then report the results of experiments.
The payoff structure in our first alternative model is much like a conventional monetary model, but it circumvents the backward induction argument in a way somewhat similar to Allen et al. (1993) or Allen and Gorton (1993), who construct equilibria with speculative bubbles by introducing incomplete information. Players are positioned in sequence. In the first period, the second person can produce for the first; in the second period, the third can produce for the second; and so on. The first person in the queue starts the game holding money, and is the only one who knows his sequential position. Hence, the player who might accept money in exchange for production faces a gamble: with some probability, he is the last person in the queue and the money is useless; with the complementary probability there is someone after him in the queue who might accept the money. If the gains from trade are large enough, there exist equilibria where fiat money is used despite the finite horizon.

3 Duffy and Puzzello (2014a) study a version of Lagos and Wright (2005) with a finite number of agents. When this number is relatively small, it is possible to use contagion strategies to support nonmonetary equilibria featuring gift giving that Pareto dominate monetary equilibria as long as agents are sufficiently patient. Their experimental results show, however, that money tends to be used when it is available and that better outcomes are achieved with money than with gift giving. They interpret this as suggesting that money is an efficiency-enhancing coordination device as well as a way to sustain intertemporal trade. But again the only equilibrium in their design is autarky – no trade in any period – and hence their results do not show how money enhances the likelihood of achieving better equilibria.

The second alternative model draws more directly on the logic in Benoit and Krishna (1985) and relates to designs that have been implemented in the experimental literature on repeated games (e.g., Anderson and Wengström 2012; Cooper and Kuhn 2014; and Fréchette and Yuksel 2017). Again, players are positioned in sequence and, in the first period, the second person can produce for the first, and so on. Unlike in the previous game, each player knows her position in the sequence, but in the last period a game with multiple equilibria is played. Our particular implementation uses a hawk-dove game, which has two pure strategy equilibria, one favorable to player 1 and the other favorable to player 2. In the full sequential game, with a number of periods of standard monetary exchange followed by a final period in which the hawk-dove game is played, there now emerge equilibria where the producer in the penultimate period is willing to accept intrinsically useless cash because it determines which equilibrium is played in the final period: if he brings money, the equilibrium that favors him is played; otherwise, the equilibrium that favors the other agent is played. This then makes it consistent with equilibrium for every player in every previous period to accept money.

We consider six different treatments in our experiments. The baseline sequential model, the model with asymmetric information about the queue position, and the model with the hawk-dove game in the final period are all run both with and without fiat money. In all non-monetary treatments the unique equilibrium prediction is that no trade should occur, whereas monetary equilibria with beneficial trade exist in two of the monetary treatments.
Importantly, however, money should not matter in the baseline sequential model with a known hard deadline.

Our experimental results, however, are somewhat mixed. Consistent with the previous literature, we find that every monetary treatment generates more trade and higher utility than the corresponding non-monetary treatment. However, this is true also in the baseline sequential model, where theory predicts that the introduction of money should have no effect on the equilibrium allocation. In itself, this may not seem too surprising, as it is well known that experimental subjects often "fail" to backward induct, as well as behave in ways that suggest altruistic motives that are not allowed in the theoretical model. Instead, what we view as puzzling is that the introduction of money leads to more trade and higher efficiency in the baseline sequential treatment than in the two models in which monetary equilibria exist. Moreover, in the baseline sequential treatment without money, play quickly converges to the autarkic equilibrium outcome, whereas there is significant trade in the treatment with money. This seems inconsistent with attributing trading to either a failure to backward induct or to altruism, as there seems to be no reason to care more or less for other players in the presence of money.

The treatment in which money had the smallest effect was the one with the hawk-dove game in the final period. In retrospect, this is not too surprising, as the monetary equilibrium relies on rather sophisticated coordination on equilibria in the last-period subgames. Specifically, the monetary equilibrium is supported by strategies in which the player who has a trading opportunity in the penultimate period is rewarded with his preferred equilibrium if he hands over the money, and punished with the other equilibrium if he fails to hand over the money. To expect that such coordination is achieved spontaneously in a short experimental session may simply be asking too much of the experimental subjects. Additionally, in the real world, the use of money has been evolving over generations, so asking whether subjects can overcome the coordination issues may be asking the wrong question.4

4 In future work we intend to exploit this by having the experimenter/mediator suggest strategies, to see whether environments with monetary equilibria are less immune to deviations than environments in which no monetary equilibrium exists.

The rest of the paper is organized as follows. In Sections 2 through 4 we lay out the three theoretical models: in Section 2 we present the Baseline Sequential (BS) environment, which we use to highlight the main forces behind standard monetary models and the issues with the previous experimental approach; in Section 3 we present the model with uncertain position (UP); and in Section 4 we present the Role Identification (RI) model. Section 5 details the experimental design and summarizes the results. In Section 6 we discuss some possible ideas on how to explain the more puzzling features of the results. Finally, Section 7 concludes and discusses some ideas on how to change the experimental designs in light of our somewhat puzzling experimental findings.

2 Theory

We now present some rudimentary monetary economics, starting with environments that have an infinite horizon, then considering truncating the horizon.

2.1 Basic Model

Consider an environment with a discrete and, for now, infinite set of dates starting from date 1: {1, 2, 3, ...}. The set of agents is also discrete and infinite, but starts at 0 instead of 1: {0, 1, 2, ...}.
This is because of the way in which agents meet: at each date t agent t meets agent t − 1, and the former is able to produce either 1 or 0 units of a good that the latter consumes. There is one exception: agent t = 0 has only one meeting, the one at t = 1, where he can consume; he can never produce, because t is infinite going in only one direction. Making the good indivisible allows us to focus on whether a good is produced, rather than how much is produced. It is also convenient to make the good nonstorable – e.g., think of it as a service.

Production has cost c and consumption has payoff u, both in terms of utility. Future payoffs are discounted using β ∈ (0, 1], but that plays little role except for one case mentioned below. The payoff to autarky, which means no trade, is 0. We assume that agent t is better off by producing in t and consuming in t + 1 than by staying in autarky. That is, c ≤ βu.

This is a special case of a random matching model of the type used in much modern monetary economics (see Corbae et al. 2003 for a general formulation in which matching is described by a partition of the set of agents, at each date, with agents able to trade iff they are in the same subset). Among the ways in which it is special, matching is degenerate here, in the sense that every agent t has exactly two meetings, one at t where he is a potential consumer and one at t + 1 where he is a potential producer. Obviously this resembles Samuelson's (1958) OLG (overlapping-generations) model of money, although there are also differences.5 Still, our points about money made in a matching context apply equally well to OLG theory.

The Pareto efficient outcome is for agent t to produce for agent t − 1 at every t because we assume βu ≥ c.6 Can this outcome be consistent with incentives? That depends. If every agent t > 0 could somehow commit to produce at date t, they would, because βu − c ≥ 0 beats the autarky payoff. But we do not think it is plausible to assume that agents can commit to actions that could be far in the future, so we assume that agents cannot commit. Still, as long as information is complete, cooperative behavior is incentive compatible in the sense that it is the outcome of a subgame-perfect equilibrium. We can construct such an equilibrium using trigger strategies in the following way: instruct every agent t to produce at t iff every agent t′ < t has produced.7

Now assume information is incomplete, in that agent t can only observe what happens in his own meetings at t and t + 1. With no commitment and this information structure, the equilibrium we constructed falls apart. Agents cannot condition their decisions on production in prior meetings and, as a result, the unique outcome consistent with incentives (i.e., with a subgame-perfect equilibrium) is autarky.

That is where money comes in. Let us introduce a token, say a coin, and for our purposes it can be a single indivisible coin. It is intrinsically useless in consumption and production, and storable with neither cost nor benefit, but it can potentially still be valued because of its liquidity, as fiat currency.
In particular, let us endow agent 0 with the coin and propose the following arrangement: at every t > 0, agent t produces for agent t − 1 if, and only if, the latter gives the coin to the former. Fiat money thus can be valued – agents are willing to work for it – and that allows us to support the Pareto efficient outcome that otherwise could not be achieved due to commitment and information frictions. (Of course, as in any reasonable model with fiat money, there is also an equilibrium where it is not valued.) This result requires that the underlying specification contain a double-coincidence problem – e.g., if we additionally let agent t − 1 produce something at t that agent t consumes, they can simply barter.

5 For one, that literature interprets agent t as being born at t and dying after t + 1, which is in no sense needed here; indeed, we could reinterpret t as a location rather than a date, to highlight spatial rather than temporal separation, and the notion of a generation need not come up. For another, that literature almost always studies Walrasian markets with competitive pricing instead of thinking about agents trading bilaterally.
6 Note that, if βu < c ≤ u, to always produce still maximizes total surplus, but this outcome is not a Pareto improvement over autarky since all agents other than 0 are worse off.
7 Some people call this cooperative behavior gift giving, but that is not something about which everyone agrees. The reason is that agent t produces for t − 1 not out of the goodness of his heart, but because that is the way he gets to consume at t + 1 (if he fails to produce at t, the economy triggers to autarky at t + 1). So in a sense he is buying consumption at one date with production at another, which can be thought of as a credit arrangement, even though it is multilateral and not bilateral credit (agent t produces for agent t − 1 but gets consumption from agent t + 1). It could be countered that it is different from credit in standard usage, where usually that means consuming before producing.

More formally, consider the following dynamic stage game in every meeting. First, agent t − 1 makes an offer that is a commitment (given there is commitment within, just not across, periods) to either transfer the money or not as a function of whether agent t produces for t − 1. Then, t either produces or not, and the coin is transferred in case the offer and production decisions stipulate that it should be.8 As soon as the coin is not transferred, the continuation game is the same as the nonmonetary economy with past transactions unobservable, where the unique equilibrium is autarky. Hence, if an agent enters a meeting without the coin, his continuation utility is 0. But if he can exchange the coin for a good, he gets u > 0 and the producer gets βu − c > 0, so this is a subgame perfect equilibrium.

2.2 Finite Truncations

Although the way we present the above results is somewhat novel, substantively they are all standard (see the above-mentioned surveys). What else is standard is this. Suppose we truncate the meeting process at T, so the sets of dates and agents are now {1, 2, ..., T} and {0, 1, ..., T}. Then, simple backward induction implies that money is not valued. At date T, agent T will not produce to get money because he will never have a meeting where he can use it. But then at date T − 1, agent T − 1 will not produce to get money because, although he has a meeting at T, he cannot use the money there. And so on, so the unique outcome is again autarky.
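To make the backward induction concrete, here is a small numerical sketch (ours, not part of the paper's materials) that iterates from the hard stop at T back to date 1. The parameter values c = 1, u = 3, and β = 1 simply mirror the experimental parameters used later in Section 5; the coin is rejected at every date.

```python
# Illustrative sketch (ours): backward induction in the truncated economy of
# Section 2.2. The date-t producer accepts the coin only if it can be passed on
# next period for something worth at least c today.

def accepts_money(T, c=1.0, u=3.0, beta=1.0):
    """For each date t = 1, ..., T, does the date-t producer accept the coin?"""
    accept = [False] * (T + 2)            # accept[T + 1] = False: no meeting after T
    for t in range(T, 0, -1):             # work backward from the hard stop
        coin_buys_next = u if accept[t + 1] else 0.0   # what the coin buys at t + 1
        accept[t] = coin_buys_next > 0 and beta * coin_buys_next >= c
    return {t: accept[t] for t in range(1, T + 1)}

print(accepts_money(T=10))                # every date maps to False: autarky
```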
Moreover, this does not even use subgame perfection: it applies also when using Nash equilibrium or iterated elimination of strictly dominated strategies as the solution concept.9 Suppose that, in addition to the deterministic end of time at T, we let the game end at any t < T with probability 1 − πt and continue to the next round with probability πt. This does nothing to overturn the backward induction. Ergo, counter to what one may take away from some experimental papers, it is irrelevant for the nonexistence of valued money to have random termination at any or all dates t < T once we have a hard stop at T. In fact, all it does is tighten the incentive condition for accepting money at t from βu > c to πt βu > c.10

8 A simpler game would have t first produce and then t − 1 either hand over the money or not. This also works but is more fragile, because t − 1 is indifferent between handing over the money or not.
9 Not only does the truncation kill the use of money, and hence production, when monetary exchange is desirable in the infinite economy due to the lack of commitment and information; it also kills all production when there is full commitment and information, for similar reasons (backward induction).

We conclude that it is theoretically impossible to implement a lab experiment in an environment where monetary equilibrium exists using the above setup or any of its standard generalizations or variations. Certainly, switching from deterministic to random meetings and having agents potentially trade many times does not help. For example, consider the simple model in Kiyotaki and Wright (1993). Agents each period meet someone with probability α, rather than just once in a lifetime, and each meeting is a random draw from the population. Also, there are no double coincidence meetings (that is easily relaxed), and there is a probability σ of a single coincidence meeting, where in any such meeting each agent has an equal chance to be the consumer. Let the measure of agents be 1 and suppose there are M ∈ (0, 1) indivisible coins, which agents cannot store more than one at a time (so M is also the probability an agent has a coin). In the infinite horizon version of this model, the incentive condition for monetary steady-state equilibrium is c < β(V1 − V0), where Vm is the continuation value for an agent with m ∈ {0, 1} coins, instead of the simpler c < βu (note that we can write c < βu in exactly the same way by defining Vm similarly in the simpler deterministic model; also note that this is a case where β < 1 actually matters). In equilibrium these values satisfy

V0 = (1/2)ασM(−c + βV1) + [1 − (1/2)ασM]βV0
V1 = (1/2)ασ(1 − M)(u + βV0) + [1 − (1/2)ασ(1 − M)]βV1

Solving for V0 and V1 and inserting them into c < β(V1 − V0) gives us a condition on parameters for the existence of stationary monetary equilibrium. There are also non-stationary monetary equilibria. Constructing such equilibria is beyond the scope of this paper but could possibly be relevant for interpreting experimental results; see Kehoe et al. (1993), Renero (1998), and Wright (1994) for analyses of non-stationary equilibria in the context of monetary models similar to the one in this paper. These monetary equilibria are more complicated than the one in our basic model, but they are the same in spirit and give similar substantive results.
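As an illustration (ours, not from the paper), the two value-function equations above form a 2x2 linear system that can be solved directly, after which the condition c < β(V1 − V0) can be checked. The parameter values below are purely hypothetical and chosen only to show the mechanics.

```python
# Illustrative sketch (ours): solve the Kiyotaki-Wright (1993) value functions
# V0, V1 and check the steady-state condition c < beta * (V1 - V0).
# All parameter values here are hypothetical, not taken from the paper.
import numpy as np

alpha, sigma, M = 0.5, 0.5, 0.5
beta, u, c = 0.9, 3.0, 1.0

p0 = 0.5 * alpha * sigma * M         # prob. a moneyless agent can sell for the coin
p1 = 0.5 * alpha * sigma * (1 - M)   # prob. a money holder can buy with the coin

# V0 = p0*(-c + beta*V1) + (1 - p0)*beta*V0
# V1 = p1*( u + beta*V0) + (1 - p1)*beta*V1   rearranged into A @ [V0, V1] = b
A = np.array([[1 - (1 - p0) * beta, -p0 * beta],
              [-p1 * beta,          1 - (1 - p1) * beta]])
b = np.array([-p0 * c, p1 * u])
V0, V1 = np.linalg.solve(A, b)

print(V0, V1, c < beta * (V1 - V0))  # True here: a stationary monetary equilibrium exists
```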
In particular, monetary equilibria never exist if there are no meetings after some finite date T.11

10 As an aside, note that in actual experiments the hard stopping rule is often not articulated explicitly as a terminal period, but that does not affect the argument. Suppose that the time it takes to play a round is random and that the experiment stops for sure when time t∗ is reached. Then, there must be some t∗1 < t∗ such that the players who meet at t ∈ [t∗1, t∗] know for sure that there will be no next period. Hence, no one will produce for money at t ∈ [t∗1, t∗]. Continuing the argument by induction, we again conclude autarky is the only equilibrium.
11 The efficient equilibrium described in Araujo (2004) also does not exist in this case, for the same reason. Araujo (2004) demonstrates that, for any population size, there exists some discount rate such that efficient equilibria without money exist using contagion strategies similar to Kandori (1994). But the argument relies on punishments in the future, and particularly the far future if the population is very large, which is not feasible if T is finite.

We do not get around this by making other basic changes in the environment to make it look more like other well-known models in the literature (e.g., allowing goods to be divisible and having agents bargain over the amount one gets for a coin, as in Shi 1995 or Trejos and Wright 1995). To construct experimental designs with environments that support valued fiat money given a hard stop at T requires a different approach. Two alternatives are provided below. We provide a third alternative in the Appendix, which explores a public good game.12

12 The example is different from the standard models of money found in the literature. One agent wants to free ride on the public good investment of a second agent. Because the investment by the first agent cannot be verified by the second agent before he makes his own investment, the second agent anticipates that the first agent will free ride, and he does not invest. In this case, money can be a substitute for record-keeping, as stressed by Kocherlakota (1998) and Wallace (1980). When the first agent can acquire money as proof of his investment in the public good, he can induce the second agent to also invest in the public good.

3 Alternative Model 1: Uncertain Position

It is understood that uncertainty about the termination of play can help sustain cooperative outcomes in repeated games. One way to implement this is to have a random termination rule with no hard stop, but we already argued that this cannot be implemented in the lab. Another way is to keep the termination rule fixed but make agents uncertain of how far they are from the termination. This point was made by Kovenock and De Vries (2002). We build on their work to demonstrate that it is possible to circumvent the end-game problem in a monetary model with just two periods and three agents, where agents are uncertain about their queue position. We choose the specification with just two periods and three agents because it is the simplest way to make the point, but the logic generalizes immediately to environments with more periods and agents.

As in our baseline environment with T = 2 periods, the set of agents is {0, 1, 2}. In period t = 1, agents 0 and 1 meet, and agent 1 can produce an indivisible good at cost c that is consumed by agent 0 for utility u. Then in period t = 2 there is an analogous meeting between agents 1 and 2, where 2 is the producer and 1 is the consumer. Again agents discount between meetings using β ∈ (0, 1), but that plays little role. Also, there is again a single coin that is endowed to agent 0.

The difference from the baseline case is simply this: there is asymmetric information about the sequence of meetings. We let agent 0 know his position in the queue, since in the monetary economy we endow him with money, but the other two agents do not know their queue positions. That is, the other agents do not know if they are agent 1, who can produce to get money in period 1 and consume in period 2, or agent 2, who can produce in period 2 and has no meeting where he can consume.
Both agents assign equal probability to each event.

The only equilibrium in the game without money is autarky. To see this, suppose that the agent who is not agent 0 produces with positive probability in his first meeting as a producer. The agent knows he can be in two possible states of the world. One is that he is agent 2 and will not have an opportunity to consume; in this case, he would clearly be better off had he not produced. The second possibility is that he is actually agent 1 and will meet agent 2 as a consumer. In this case, for production to be consistent with rational play it must increase the probability of consuming in the meeting with 2. But in a nonmonetary economy, the probability that 2 produces for 1 cannot depend on what 1 did in a prior meeting. Therefore, the agent is worse off by producing, independently of whether he is agent 1 or 2 (indeed, the argument is stronger, as it establishes that autarky is the unique rationalizable outcome in the sense of Bernheim 1984).

Now let us add money as before. Again suppose the agent with money commits to an offer specifying whether the money will be transferred as a function of whether production occurs. If the offer is rejected, there is no trade; if accepted, there is. While as usual there is an equilibrium where money is not valued, if gains from trade are sufficiently large there is also an equilibrium where it is valued, with the following strategies:
• A consumer with money offers it in exchange for production.
• A consumer with no money asks for production, offering nothing in exchange (note that any feasible proposal works here).
• A producer produces if offered money but not otherwise.

Clearly, there is no profitable deviation for a consumer with money, and given the strategy of producers there is nothing else a consumer without money can do to increase his chances of consuming. As usual, the tricky part is to verify that the producers actually have incentives to produce. Again there are two possibilities. With probability 1/2 there is another period, in which case the producer will be a consumer with money and, given the strategy profile above, he will offer it in exchange for production and his offer will be accepted; in this case, his payoff is βu − c. With probability 1/2, there is no other period and the producer's payoff is just −c. Hence, a producer has a strict incentive to accept the proposal to produce in exchange for a unit of money if, and only if, (1/2)(βu − c) − (1/2)c ≥ 0 ⇔ c ≤ βu/2. We conclude that the monetary economy has an equilibrium that replicates the first-best outcome.

The reader may notice that the conclusion depends on parameters. If βu/2 < c < βu, the construction does not work and autarky is the unique equilibrium outcome also with money – even though there is a Pareto superior outcome with production.
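As a quick check (ours, not part of the paper), the producer's condition can be evaluated at the parameter values c = 1, u = 3, and β = 1 that the experimental design in Section 5 adopts:

```python
# Quick check (ours) of the producer's incentive condition in the uncertain-position
# model: accept money iff 0.5*(beta*u - c) - 0.5*c >= 0, i.e. c <= beta*u/2.
# The values c = 1, u = 3, beta = 1 mirror the experimental parameters in Section 5.
c, u, beta = 1.0, 3.0, 1.0

expected_gain = 0.5 * (beta * u - c) - 0.5 * c   # = 0.5
print(expected_gain, c <= beta * u / 2)          # 0.5 True: the monetary equilibrium exists
```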
This is fully consistent with the existing literature on monetary economics, where the point is that money can overcome frictions that result from the lack of double coincidence for the right parameter configurations. That is, as stressed by Kocherlakota (1998), money serves as a substitute for record keeping in the equilibrium, but money is less precise than record keeping, so sometimes it cannot achieve full efficiency. This imperfection is intuitively why money is only helpful if the gains from trade are large enough.

4 Alternative Model 2: Role Identification

Uncertainty about the termination of the game generates monetary equilibria because it prevents agents from applying backward induction. Indeed, if there is only one possible equilibrium outcome in the last period, agents know there are no gains from producing in the period before the last, since what happens in the last period is given, and so on. But what if there is more than one possible equilibrium outcome in the last period? Then an agent may find it profitable to produce in the period before the last if his production decision influences what the outcome in the last period will be. This idea is also similar to an example in Kovenock and De Vries (2002), and the underlying logic also relates to the construction of finite-horizon folk theorems in Benoit and Krishna (1985).

As in our baseline environment with T = 2 periods, the set of agents is {0, 1, 2}. In period t = 1, as before, agents 0 and 1 meet, and agent 1 can produce an indivisible good at cost c that is consumed by agent 0 for utility u. The difference is in period t = 2: agents 1 and 2 meet and play a Hawk-Dove game. We set the payoffs of this game as

             Hawk                 Dove
Hawk     v/2 − e, v/2 − e       v, 0
Dove     0, v                   v/2, v/2

The first-period producer, agent 1, is the row player, and the period-2 consumer, agent 2, is the column player. We assume that v/2 − e < 0, so the game has two pure strategy Nash equilibria: (Hawk, Dove) and (Dove, Hawk). There is also a mixed strategy equilibrium that will not be used in the construction below. Again agents discount between meetings using β ∈ (0, 1), but that plays little role. What is important is that agents only observe actions in the meetings in which they participate.

Any equilibrium in the game without money has autarky (i.e., no production) in period 1. To see this, suppose that agent 1 produces with positive probability in period 1. For production to be consistent with rational play it must increase his payoff in the Hawk-Dove game in period 2. But in a nonmonetary economy, the probability that agent 2 plays Hawk or Dove cannot depend on what 1 did in his prior meeting. Therefore, agent 1 is worse off by producing.

Now let us add money as before. Again suppose the agent with money commits to an offer specifying whether the money will be transferred as a function of whether production occurs. If the offer is rejected, there is no trade; if accepted, there is. As usual, all the equilibria of the economy without money are also equilibria with money in which money is not valued, but if gains from trade are sufficiently large there is also an equilibrium where it is valued, with the following strategies:
• In period 1, agent 0 offers the money in exchange for the consumer good.
• In period 1, agent 1 accepts any offer in which he gets the money (i.e., producing for the money or getting it without producing), and rejects any offer in which he does not get the money.
• In period 2, agent 1 transfers the money to agent 2 if he has it.
• In period 2, agents 1 and 2 play (Hawk, Dove) if agent 1 transferred the money to agent 2 (so that agent 1 is the Hawk), and play (Dove, Hawk) if agent 1 did not (so that agent 1 is the Dove).

Clearly, second-period play is consistent with subgame perfection, as players play a Nash equilibrium of the hawk-dove game after any history of play, and, since it improves the equilibrium selection, a player holding money has a strict incentive to give it to the other player. Obviously, the first-period consumer has a strict incentive to follow the postulated strategy, as it results in positive utility from giving up an object with no value (for that player). The only question is whether the producer in the first period is willing to produce, which will be the case as long as βv > c. So, again, for money to be valued the gains from trade must be large enough.

It may also be noted that the monetary equilibrium with production in the first period is renegotiation proof. Because play shifts between equilibria that are not Pareto ranked, depending on whether there is a monetary transfer, the two players in the second period do not have a common interest in undoing the coordination that supports good behavior in the first period. This is an advantage in comparison with using a coordination game at the end, which would be more fragile, as it would fail the test of renegotiation proofness.

Importantly, the two examples developed above are not exhaustive. In the Appendix, for example, we consider a game with a payoff structure where the equilibrium outcome in a simultaneous move game is dominated by the outcome of the sequential (Stackelberg) game obtained by letting one player observe the action of the others before taking an action. Thus, if the second player could condition on the action of the first, it would be possible to support a mutually desirable outcome. In the Appendix, we show that if there is a third player with money, a desirable equilibrium emerges that replicates the Stackelberg outcome. However, the two examples above, together with the discussion of the baseline sequential game, suffice to show the importance of infinite horizons in the standard monetary models, and how there exist finite-horizon alternatives to the standard models in which monetary equilibria exist.

5 An Experimental Evaluation

The simple monetary models presented above allow a direct assessment of subjects' propensity to adopt the use of money in finite-horizon contexts.

5.1 Design

Our experiment involves four-player variants of the Baseline Sequential (BS), Uncertain Position (UP), and Role Identification (RI) games developed above. In each case, we set the production cost at c = $1 and the utility of consumption at u = $3, and since discounting β is of little importance both in the theory and within the few hours of the lab experiment, we assume it equals one.13 For each game the experimental subjects play a series of rounds. At the start of a round, participants are randomly assigned an order in sequence s = 1, 2, 3, or 4 to make decisions in matches t = 1, 2, and 3. For simplicity, we refer to the decision makers in a round as players 1, 2, 3, and 4.

In the BS game, at the start of each round participants are told their sequence position. Then players 2, 3, and 4 decide in matches 1, 2, and 3 whether or not to produce for their respective counterparts, players 1, 2, and 3.
If players 2, 3, and 4 all choose to produce, players 1 to 4 earn, respectively, $3, $2, $2, and −$1, generating maximum expected earnings of $1.50 per player. If no player chooses to produce, all players earn $0.

In the UP game, rounds proceed similarly except that players 2, 3, and 4 do not know their order in the sequence when they make production decisions. We implement this experimentally by having these players make production decisions simultaneously, after which the game is played and payoffs are determined. Maximum and minimum earnings in the UP game parallel those in the BS game.

Finally, in the RI game, participants know their order in the sequence at the start of each round (as in the BS game), and in periods 1 and 2, young players 2 and 3 make production decisions for their respective old counterparts, players 1 and 2. In period 3, however, player 3 meets young player 4 to play the Hawk-Dove game with v = 3 and e = 2, which creates the normal form game structure shown as Table 1.

Table 1: The Hawk-Dove Game with v = 3 and e = 2

          Hawk          Dove
Hawk    −0.5, −0.5     3, 0
Dove    0, 3           1.5, 1.5

If players 2 and 3 both produce, and then players 3 and 4 coordinate on the (Hawk, Dove) equilibrium in period 3, expected earnings for players 1 through 4 are $3, $2, $2, and $0, respectively, generating maximum expected earnings of $1.75 per player. Unlike in the other two games, minimum expected earnings in the nonmonetary economy are non-zero. In the event that both players 2 and 3 choose not to produce, players 3 and 4 still play the Hawk-Dove game in period 3. In this case, players can use the labels 3 and 4 to coordinate on pure strategy equilibria, either the one that favors player 3 or the one that favors player 4. Both cases yield average expected earnings of $1.50 to players 3 and 4; thus the expected earnings over all players in the RI game are $0.75. Another possibility is the symmetric mixed strategy equilibrium. This yields expected earnings of $0.38 to each of players 3 and 4; thus the expected earnings over all players in the RI game are $0.19.14

13 Our prior perception was that a four-player version of these games would give selection of the monetary equilibrium a better chance of emerging in the UP and RI games. Conditional on not being in position s = 1, with four rather than three periods, expected earnings in the monetary equilibrium increase from $0.50 to $1.00 in the UP game and from $1.00 to $1.33 in the RI game.
14 Earnings rounded to the nearest cent. In the symmetric mixed strategy equilibrium, each player plays Hawk with probability 3/4 and Dove with probability 1/4. Expected earnings are (−0.5 · 3/4 + 3 · 1/4)/2 = $0.1875.

In each game, we evaluate treatments both with and without money. Each monetary treatment differs from its non-monetary counterpart in the single respect that player 1 is given a token at the beginning of each round. This token has no value in and of itself, but player 1 can pass it to player 2 in exchange for the production good. Provided players 2 and then 3 obtain the token, they may similarly pass it to the next player in the sequence in meetings 2 and 3, respectively.

Table 2: Equilibrium Outcomes

                           Tokens    Units       Expected Earnings
Equilibrium                Passed    Produced    Overall                 Players 2, 3 and 4

Baseline Sequential (BS)
  Nonmonetary              –         0           $0.00                   $0.00
  Monetary                 0         0           $0.00                   $0.00

Uncertain Position (UP)
  Nonmonetary              –         0           $0.00                   $0.00
  Monetary                 3         3           $1.50                   $1.00

Role Identification (RI)
  Nonmonetary              –         0           ${0.19, 0.75, 0.75}     ${0.25, 1.00, 1.00}
  Monetary                 3         2           $1.75                   $1.33

Table 2 summarizes theoretical predictions for nonmonetary and monetary equilibria in our parameterization of each game.
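Before unpacking Table 2, note that the equilibrium claims behind Table 1 and footnote 14 can be verified with a short script (ours, not part of the paper's materials): it confirms that (Hawk, Dove) and (Dove, Hawk) are the pure strategy Nash equilibria at v = 3 and e = 2, and it reproduces the mixed-equilibrium Hawk probability of 3/4 and the $0.375 expected payoff per player.

```python
# Illustrative check (ours): pure and mixed equilibria of the Hawk-Dove stage game
# at the experimental parameters v = 3, e = 2 (Table 1).
import itertools

v, e = 3.0, 2.0
payoff = {  # (row action, column action) -> (row payoff, column payoff)
    ("Hawk", "Hawk"): (v / 2 - e, v / 2 - e),
    ("Hawk", "Dove"): (v, 0.0),
    ("Dove", "Hawk"): (0.0, v),
    ("Dove", "Dove"): (v / 2, v / 2),
}
actions = ["Hawk", "Dove"]

def is_nash(row, col):
    row_ok = all(payoff[(row, col)][0] >= payoff[(r, col)][0] for r in actions)
    col_ok = all(payoff[(row, col)][1] >= payoff[(row, c)][1] for c in actions)
    return row_ok and col_ok

print([prof for prof in itertools.product(actions, actions) if is_nash(*prof)])
# [('Hawk', 'Dove'), ('Dove', 'Hawk')]

# Mixed equilibrium: Hawk probability p makes the opponent indifferent,
# p*(v/2 - e) + (1 - p)*v = (1 - p)*(v/2)  =>  p = (v/2)/e
p = (v / 2) / e
print(p, (1 - p) * (v / 2))   # 0.75 and an expected payoff of 0.375 per player
```

The 0.375 per-player payoff is the figure behind the $0.38 and $0.19 entries quoted above and in footnote 14.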
Note that the non-monetary equilibria are both the unique equilibria of the non-monetary treatments and equilibria of the monetary treatments. Columns two and three specify how many times the token is expected to be passed and how many units are expected to be produced. Columns four and five specify the expected earnings overall in the economy and for players 2, 3 and 4 (the players with a production decision).

Expected earnings are computed in the following way. In both the monetary and nonmonetary treatments of the BS game, and in the nonmonetary treatment of the UP game, there is a unique equilibrium with no production. Therefore, overall expected earnings and the earnings of players 2, 3 and 4 are all zero.

The monetary treatment of the UP game has a monetary equilibrium that implements the Pareto efficient outcome. In this case, players 1, 2 and 3 consume, generating utility of 9, and players 2, 3 and 4 produce, generating an aggregate cost of 3. The overall expected earnings are (9 − 3)/4 = $1.50. Similarly, the utility from consumption of players 2, 3 and 4 is $6, since only players 2 and 3 consume. The aggregate cost is still $3, since all three players produce. Therefore, the expected earnings of players 2, 3 and 4 are (6 − 3)/3 = $1.00.

The nonmonetary treatment of the RI game has three equilibria: a mixed equilibrium associated with total earnings of $0.75, and two pure strategy equilibria associated with total earnings of $3.00. The overall expected earnings in the mixed strategy equilibrium are 0.75/4 = $0.1875. Similarly, the expected earnings of players 2, 3 and 4 are 0.75/3 = $0.25 in the mixed strategy equilibrium. There are also two pure strategy equilibria in this game; in both, total expected earnings are $3. Therefore, the overall expected earnings in a pure strategy equilibrium are 3/4 = $0.75 and the expected earnings of players 2, 3 and 4 are 3/3 = $1.00.

The monetary treatment of the RI game has a monetary equilibrium that implements the Pareto efficient outcome. In this case, players 1 and 2 consume, generating utility of 6; players 2 and 3 produce, generating an aggregate cost of 2; and players 3 and 4 play (Hawk, Dove), generating utility of 3. The overall expected earnings are (4 + 3)/4 = $1.75. Similarly, the net utility from consumption and production of players 2, 3 and 4 is $1, since only player 2 consumes while players 2 and 3 produce, and we still have the payoff of 3 from playing (Hawk, Dove) in the final period. Therefore, the expected earnings of players 2, 3 and 4 are (1 + 3)/3 = $1.33.

Starting with the top rows of the table, observe that money does not affect equilibrium predictions in the BS game. In the unique rationalizable outcome, no player accepts a token, and no player ever incurs the cost to produce a unit. As a result, earnings are $0.00 in the unique equilibrium regardless of whether money is introduced. In contrast, a welfare-improving monetary equilibrium exists in the money treatments of both the UP and RI games. In the monetary treatment of the UP model, the monetary equilibrium increases overall expected earnings over the nonmonetary equilibrium from $0.00 to $1.00. Player 1 consumes for sure, does not have to produce in this equilibrium, and therefore has the most to gain, but equilibrium earnings for players other than player 1 increase from $0.00 to $0.75.
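The monetary-equilibrium entries of Table 2 can be reproduced with a few lines of arithmetic (our sketch, not the paper's materials), using c = 1, u = 3, and v = 3 from the design:

```python
# Small sketch (ours) reproducing the expected-earnings entries of Table 2 for the
# two monetary equilibria, using the experimental parameters c = 1, u = 3, v = 3.
c, u, v = 1.0, 3.0, 3.0

# UP, monetary equilibrium: players 1-3 consume, players 2-4 produce.
up_overall = (3 * u - 3 * c) / 4        # (9 - 3)/4 = 1.50
up_prod    = (2 * u - 3 * c) / 3        # (6 - 3)/3 = 1.00, players 2-4 only

# RI, monetary equilibrium: players 1-2 consume, players 2-3 produce,
# players 3 and 4 play (Hawk, Dove), worth v to the Hawk and 0 to the Dove.
ri_overall = (2 * u - 2 * c + v) / 4    # (4 + 3)/4 = 1.75
ri_prod    = (u - 2 * c + v) / 3        # (1 + 3)/3 = 1.33, players 2-4 only

print(up_overall, up_prod, ri_overall, ri_prod)   # 1.5 1.0 1.75 1.333...
```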
In the monetary treatment of the RI game, overall expected earnings are $1.75 per player, slightly higher than in the monetary equilibrium for the UP model. This increase is attributed to the facts that (i) in the monetary equilibrium for the RI game, player 4 does not incur a $1 production cost, and (ii) in the corresponding nonmonetary treatment, expected earnings exceed 0.

5.2 Experimental Procedures

The experiment consisted of nonmonetary and monetary treatments for each of the three games summarized in Table 2, generating a total of six distinct treatments. For each treatment, we conducted a series of three sessions. In each session, a cohort of 16 volunteers was randomly seated at visually isolated computer terminals. Prior to the actual experiment, the participants listened to a recording with instructions while following along on printed copies of their own. The instructions were followed by a quiz of understanding, after which the session began.15 It appears that the participants understood the instructions well.

Sessions consisted of a sequence of 11 three-match rounds. At the outset of the session, participants are randomly divided into groups of four, which remain fixed for the first 10 rounds. At the beginning of each round, the participants in each group are randomly re-ordered in a sequence of players 1 to 4. To separate learning from possible reputation effects that may emerge as a consequence of being re-matched with the same participants (albeit in different sequence orders each period), we pause the session after period 10. At this point, participants are given additional instructions for how the final round differs from rounds 1-10:

1. First, participants will be re-matched into new groups. Importantly, under the rematching protocol, each participant will be grouped with three other participants with whom they have not previously interacted and none of whom have previously interacted with each other.16 The rationale for this is to make participants perceive the last round as a one-shot interaction.
2. Second, both production costs and the return from consumption are tripled, to c = $3 and u = $9, in round 11.

Following round 11, the session ends. Lab dollars are converted to U.S. currency on a 1-to-1 basis; participants are privately paid the sum of their earnings for all periods plus a $6 appearance fee and dismissed.

15 Instructions are available in an online Appendix at www.sultanum.com/research.html.
16 The absence of possible contagion that the round-11 re-matching protocol invoked was easy to explain convincingly. In rounds 1-10, subjects were randomly and anonymously assigned a group letter (A-D), as well as a number (1-4). For rounds 1-10, the subjects in each group letter were re-sequenced at the start of each round (thus, for example, a sequencing for members of group C might be C2, C4, C3, and C1). For round 11, groups were re-formed to consist of one member from each group letter A to D, such as A3, B2, C4, and D1.

Table 3: Matrix of Treatments

Game                        No Money                    Money
Baseline Sequential (BS)    BS-N1, BS-N2, BS-N3         BS-M1, BS-M2, BS-M3
Uncertain Position (UP)     UP-N1, UP-N2, UP-N3         UP-M1, UP-M2, UP-M3
Role Identification (RI)    RI-N1, RI-N2, RI-N3         RI-M1, RI-M2, RI-M3

A summary of the matrix of treatments appears as Table 3. The experiment was conducted between March 19, 2018, and April 4, 2018, at Virginia Commonwealth University, and involved a total of 288 student volunteers.
Participants, mostly upper-level undergraduate business, engineering, and mathematics students, were recruited with the ORSEE recruitment software. The experiment was programmed in z-Tree. Each session lasted roughly 45 minutes. Earnings (including a $6 appearance fee) ranged from $2.50 to $37.00 and averaged $19.00.

5.3 Results

We organize this subsection into two parts. The first part overviews results in terms of the two primary outcomes of interest: (i) how does the addition of money affect production levels, and (ii) when money is available, how do players use it? Evaluation of these outcomes allows us to draw a series of four general findings. As will be seen, in some critical respects, behavior differs substantially from theoretical predictions.

The sequence of production rates, expressed as a percentage of the maximum possible production levels in each treatment and shown as Figure 1, provides an overview of the effects of money on production levels.

Figure 1: Production rates

For the nonmonetary treatments, shown in the left panel of the figure, production decisions are remarkably similar to the pattern of decaying contributions routinely observed in experiments on voluntary contributions toward public goods (see, e.g., ch. 6 in Davis and Holt 1993). In the initial round, roughly 55% of participants choose to produce. With repetition, production rates slowly diminish to an average of about 17% in round 10. Despite the sharp environmental differences between the public goods contribution game and the games considered here, similar factors may well drive much of the observed decay in each case. Subjects initially approach each situation as a group investment game, where they produce out of an expectation of increased earnings through active participation. With repetition, production (contributions) falls as participants increasingly appreciate the strategic aspects of their decisions, a consequence that confirms our design decision to conduct a final round with a re-grouping featuring a no-contagion re-matching protocol, along with increased incentives.

In the monetary treatments, shown in the right panel of Figure 1, initial production rates again average roughly 55%. Unlike their non-monetary counterparts, however, production rates decay considerably more slowly with repetition, falling to about 40% on average in period 10, suggesting that in all games the addition of money appears to slow, and in some treatments even stabilize, the decline in production decisions.

Observe also in Figure 1 that round 11 production rates do not simply mimic those observed in the previous round 10 for either the monetary or non-monetary treatments. In the non-monetary treatments, mean production rates for the UP and RI games each roughly double, from about 17% to about 33%, while in the BS treatment they continue their steady downward path, from 17% to about 11%. In the monetary treatments, the round 11 production rates are even more varied, with the mean production rate rising from 41% in period 10 to 55% in the BS setting, but falling from 47% to 30% in the UP treatment and falling still further, from 42% to 12.5%, in the RI treatment. Restart effects are not uncommon in small-group decision-making experiments. Here, however, the varied direction of these effects across treatments suggests that differing dynamics may drive decision-making in each case.
To more formally evaluate production decisions, we regress production decisions on a series of indicator variables Dj, j ∈ {BS, UP, RI, M}, that delineate the three games as well as the presence or absence of money. Specifically, we estimate

yijt = β0 + βBS,M DBS DM + βUP,M DUP DM + βRI,M DRI DM + βUP DUP + βRI DRI + εj + uijt    (1)

where yijt is the production decision (0 or 1) of subject i (1 to 4) in group j (1 to 12) in period t (1-11). We cluster data by groups and use a robust (White sandwich) estimator to control for possible unspecified autocorrelation or heteroskedasticity. As a rough control for learning, we disaggregate estimates for rounds 1 to 10 into session halves. Finally, given the different group composition and payoff structure for the final round, we also estimate round 11 results separately.

Table 4: Mean Production Rates

Game    Money       No Money    Money − No Money

Rounds 6-10
BS      47.8%†††    27.8%       20.0%∗∗∗
UP      41.7%†††    35.6%       6.1%
RI      42.5%†††    22.5%       20.0%∗∗

Round 11
BS      55.6%†††    11.1%       44.4%∗∗∗
UP      30.6%†††    30.6%       0.0%
RI      12.5%†††    33.3%       −20.8%∗

Key: † Reject the null hypothesis that mean production rates equal 100%. ∗ Reject the null hypothesis of no difference in mean production rates across the Money and No Money treatments of a game. In each case, one, two, or three marks indicate rejection of the null at p < 0.10, 0.05, and 0.01, respectively.

Table 4 summarizes regression results for rounds 6-10, as well as for round 11.17 Looking first at mean production rates for the money treatments of each game, observe that these rates exceed 50% only in round 11 of the BS treatment with money, and even in this case, at 55.6%, the rate remains significantly below the 100% level consistent with the efficient outcome. This is a first general finding.

Finding 1. Given money, groups fail to coalesce on the efficient monetary outcome in every treatment.

Notice also the differences in production rates across the Money and No Money treatments of each game, shown in the rightmost column of Table 4. For the BS game, the addition of money significantly increases production both in rounds 6-10, where the difference is 20 percentage points, and in period 11, where the difference increases to 44.4 percentage points. The same is not true of the UP game, where money increases mean production by an insignificant 6.1 percentage points in rounds 6-10, and fails to increase production rates at all in round 11. Perhaps most curious are results for the RI game, where in rounds 6-10 money increases production by 20 percentage points on average (significantly different from 0 at p < 0.05), but in round 11 production rates fall by 20.8 percentage points (significantly different from 0 at p < 0.10). We will consider presently possible reasons for the productivity-reducing effect of money in the final round of the RI game. Nevertheless, we observe here that money consistently improves production rates only in the BS game. This is a second general finding.

Finding 2. Mean production rates in the monetary treatment consistently exceed those in the non-monetary counterpart only in the BS game.

17 Summary results for rounds 1-5, as well as primary regression results for all estimates, appear as Tables C4.1 and C4.1A in Appendix C. We report linear probability estimates for ease of interpretation. Probit estimates yield substantially identical results. Logit regressions, reported as Tables C4.2 and C4.2A in Appendix C, generate essentially identical results.
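For readers who want to replicate this kind of estimation, the following is a minimal sketch (ours, not the authors' code) of how a linear probability model like equation (1) could be fit with group-clustered standard errors. The data file and column names (produce, game, money, group) are hypothetical placeholders, and the group random effect is simply absorbed into the clustered errors in this simplified version.

```python
# Minimal sketch (ours): a linear probability model in the spirit of equation (1)
# with cluster-robust standard errors by group. File and column names are
# hypothetical placeholders, not the authors' actual data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("production_decisions.csv")   # one row per subject-round decision

# Indicators for game and money, plus their interactions (BS is the omitted game).
df["D_BS"] = (df["game"] == "BS").astype(int)
df["D_UP"] = (df["game"] == "UP").astype(int)
df["D_RI"] = (df["game"] == "RI").astype(int)
df["D_M"] = df["money"].astype(int)

model = smf.ols(
    "produce ~ D_BS:D_M + D_UP:D_M + D_RI:D_M + D_UP + D_RI",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["group"]})   # cluster by group
print(model.summary())
```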
To understand the differing effects of money across games, we consider strategic behavior in the Money treatments of each game. Recall that a monetary equilibrium strategy consists of two components. First, when a player has a token, he passes it along by offering the token in exchange for a unit of his consumption good. Second, when a token is offered to an agent, the agent accepts it in exchange for producing a unit. Figures 2 and 3 provide summary information regarding these components.

Figure 2 illustrates the time path of token pass rates for the money treatments of each game. Notice in the figure that participants overwhelmingly passed available tokens when making a consumption request. In all treatments, participants passed available tokens at least 80% of the time, even in initial rounds. Further, in terminal period 11, the token pass rate is very close to 100% (100% in the BS treatment, 90.5% in the UP treatment, and 85.7% in the RI treatment). This is a third general finding.

Figure 2: Token Pass Rate

Finding 3. In the money treatments of all games, players uniformly offer tokens in exchange for production.

Consider next players' responses to token offers. The three panels of Figure 3 illustrate production response rates for the money treatments of each game. Notice first that in all panels production rates both with and without token offers exhibit considerable variability, suggesting a randomness in responses that impedes the drawing of strong conclusions. Nevertheless, we also observe that in each game money clearly matters in rounds 1-10, in the sense that the difference between production rates in response to and without token offers is persistent and in many instances quite large. In the BS and UP treatments, the difference between production responses to and without token offers in round 11 remains at roughly the same level as in the immediately preceding two to three rounds. In the RI game, however, the production response to token offers collapses in round 11, driving down the overall production rate for the RI treatment with money, as was previously observed in Figure 1.

Figure 3: Production rates

To quantify how production rates respond to whether a token is offered or not, we regress production decisions in the money treatments on a series of indicator variables $D_j$, $j \in \{BS, UP, RI, OF\}$, where $OF$ denotes the event that a token is offered. Specifically, we consider the regression equation

$$y_{ijt} = \beta_0 + \beta_{BS,OF} D_{BS} D_{OF} + \beta_{UP,OF} D_{UP} D_{OF} + \beta_{RI,OF} D_{RI} D_{OF} + \beta_{UP} D_{UP} + \beta_{RI} D_{RI} + \varepsilon_j + u_{ijt} \qquad (2)$$

where $y_{ijt}$ is the production decision (0 or 1) of subject $i$ in group $j$ in period $t$. As in equation (1), we cluster data by groups and use a robust (White sandwich) estimator to control for possible unspecified autocorrelation or heteroskedasticity. Summary results for rounds 6-10, as well as for round 11, appear in Table 5.¹⁸

Table 5: Production Rates, with and without Token Offers

Game      Offer      No Offer   Offer - No Offer
Rounds 6-10
  BS-M    58.8%      26.2%      32.6∗∗∗
  UP-M    60.0%      18.8%      41.3∗∗∗
  RI-M    54.2%      16.2%      38.0∗∗∗
Round 11
  BS-M    65.5%†††   14.2%      51.2∗∗∗
  UP-M    52.6%††     5.9%      46.7∗∗∗
  RI-M    16.7%       8.3%       8.4

Key: † Reject the null hypothesis that mean production rates given an offer do not vary across the row treatment and RI-M. ∗ Reject the null hypothesis that production rates with an offer do not differ from production rates without an offer. As in Table 4, one, two, or three markers indicate rejection of the null at p < 0.10, 0.05, and 0.01, respectively.

¹⁸ As in Table 4, we present results only for rounds 6-10 for purposes of succinctness. Summary results for rounds 1-5, as well as primary regression results for all estimates, appear as Tables C5.1 and C5.1A, respectively, in Appendix C. For ease of interpretation, we report linear probability estimates. Logit estimates, reported as Tables C5.2 and C5.2A in Appendix C, yield essentially identical results.
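For concreteness, conditional rates like those plotted in Figures 2-3 and reported in Table 5 can be tabulated directly from the raw decision data along the following lines. The file and column names (has_token, offered_token, token_offered, produce) are hypothetical, and this is only a sketch of the bookkeeping, not the paper's code.

```python
# Sketch of tabulating token pass rates (Figure 2) and production rates with and
# without a token offer (Figure 3 / Table 5). Column names are hypothetical.
import pandas as pd

df = pd.read_csv("decisions.csv")
money = df[df["money"] == 1]                       # money treatments only

# Token pass rate by game and round: share of token holders who offered the token
pass_rate = (money[money["has_token"] == 1]
             .groupby(["game", "round"])["offered_token"].mean())

# Production rates with and without a token offer, rounds 6-10 (Table 5, top panel)
mid_rounds = money[money["round"].between(6, 10)]
offer_table = (mid_rounds.groupby(["game", "token_offered"])["produce"].mean()
                         .unstack("token_offered")
                         .rename(columns={0: "No Offer", 1: "Offer"}))
offer_table["Offer - No Offer"] = offer_table["Offer"] - offer_table["No Offer"]

print(pass_rate.round(3))
print(offer_table.round(3))
```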
In the BS treatment with money, over rounds 6-10, players produced in response to a token offer in 58.8% of instances and produced without a token offer in 26.2% of instances, for an average difference of 32.6 percentage points. In round 11, the production response to a token offer increased to 65.5%, and the production rate absent an offer fell to 14.2%, for a net 51.2 percentage-point increase in production rates associated with the use of a token (difference significant at p < .01). In the UP treatment, over rounds 6-10, players produced in response to a token offer in 60% of instances and produced without a token offer in 18.8% of instances, a 41.2 percentage-point increase in production rates associated with a token offer (difference significant at p < .01). In round 11, the production response to an offer fell slightly, to 52.6%, but the production rate absent an offer fell still further, to 5.9%, resulting in a 46.7 percentage-point production rate increase from token offers (difference significant at p < .01), an increase very similar to that in the BS treatment. Finally, in the RI treatment with money, over rounds 6-10, players produced in response to a token offer in only 54.2% of instances and produced without a token offer in 16.2% of instances, yielding a 38 percentage-point increase in production rates due to token offers (difference significant at p < .01). In round 11, however, the production response to token offers collapsed. The production rate following a token offer fell to 16.7%, while the production rate absent a token offer fell to 8.3%, for a statistically insignificant 8.4 percentage-point difference in production rates in response to and absent a token offer. We summarize these observations as a fourth general finding.

Finding 4. In all treatments, production requests with token offers increase production rates over production requests without token offers by 30 to 40 percentage points in rounds 6-10. In the BS and UP treatments, this difference increases to roughly 50 percentage points in terminal round 11. In the RI treatment, however, production responses to token offers collapse to a level that differs insignificantly from production rates without a token offer.

6 Discussion

From the findings described in the previous section, we see that:
1. the introduction of money matters in all our setups, but
2. play fails to approach the efficient monetary equilibrium in all the games, and
3. in stark contrast to theory, money most consistently matters in the BS game, a game with no monetary equilibrium.

Viewed in light of the equilibrium outcomes, these findings raise a number of questions. First, why doesn't production collapse toward the unique equilibrium in the BS treatment with money? Second, given that players tend to use tokens in rounds 6-10 in the monetary UP and RI treatments, why don't outcomes gravitate toward the efficient monetary equilibrium? Third, in the UP game, given that players respond strongly to token offers in the monetary treatment, why doesn't money matter more? Finally, why did players stop accepting tokens in the terminal round of the monetary RI treatment?
These results were surprising to us, so we offer some thoughts on the puzzles they raise. Isolating the factors that drive them, however, requires additional experiments, since our experiment was not designed for that purpose. Here we simply describe the puzzles.

Puzzle 1

One finding is that money is used, and useful, when it should not be. Consider first the persistently stable rates of production in the monetary BS treatment. In itself, this could simply reflect a failure to understand equilibrium analysis, and there is ample evidence in the experimental literature that subjects often "fail" to backward induct. However, the game being played is arguably very simple, and it seems hard to believe that player 3 would not be able to look forward and conclude that player 4 has no incentive to produce in exchange for a token in the final period when meeting player 2. Indeed, the almost linearly decreasing production rates throughout rounds 1-11 of the associated non-monetary treatment look much as if subjects are learning to appreciate the strategic considerations and thereby converging to the unique equilibrium with no production.

The introduction of money, while it has no effect on equilibrium outcomes because of the finite horizon, changes the environment. With money, it is possible to make inferences about past behavior (money is memory, after all), and the patterns in Figure 3 and Table 4 suggest that being able to make such inferences is important. When money is offered in the BS treatment, the likelihood of production is 58.8%, about twice the production rate when money is not offered. This appears consistent with indirect reciprocity.¹⁹ Indirect reciprocity is the decision by one agent to make a costly effort for a second agent based on whether or not that agent has previously assisted a third agent. In the BS treatment, even agents who follow indirect reciprocity still need money to infer whether the consumer has previously produced for a third agent.

Puzzle 2

Another question is why play does not converge toward the efficient monetary equilibrium when one exists. Obviously, theory doesn't have much to say about which equilibrium players should coordinate on. Hence, one could simply argue that at least some players are trying to coordinate on the autarky equilibrium, which explains the failure to coordinate on the efficient equilibrium. We can't completely rule this out, but it doesn't look like this can fully explain the patterns in the data. In particular, if players were guided by equilibrium strategies, one would think that the most common outcome would be for them either to uniformly accept tokens for production or to always reject them. This is not what happens. Equilibria involving mixed acceptance do exist, but they seem rather fragile, so we don't believe this is what is going on. Furthermore, if strategic considerations were the driver of behavior, then, in the RI treatment, we should see more coordination on the Hawk-Dove outcome when player 3 transfers a coin in the final meeting. We can see no evidence of this in the data. As the summary of outcomes for the Hawk-Dove game shown in Table 6 illustrates, there is very little difference in the incidence of the Hawk-Dove outcome following a token offer relative to a final meeting with no token offer or to the treatment with no money.
None of the differences are statistically significant, as can be seen in Tables C7.1 to C7.4 in Appendix C, so we conclude that the token transfer had no effect on the outcome of the Hawk-Dove game in the last period.

Table 6: Hawk-Dove Game Outcomes

Hence, again we are left with a pattern that seems hard to explain convincingly from strategic considerations. Indirect reciprocity seems more consistent with our experimental findings. Given the similarity with the BS treatment, one would think that whatever motivates behavior in that game should be a factor also in the UP and RI models.²⁰

Puzzle 3

The reason we developed games with monetary equilibria over finite horizons is precisely so we could test them in the lab. However, money seems less useful exactly in the treatments with monetary equilibria. Specifically, in this section we ask (i) why isn't money more effective in the UP game, and (ii) why does the use of money collapse in round 11 of the RI game? We think these issues are related and will therefore discuss them together. In each case, the experiments are telling us that money is more helpful in the BS game than in the UP and RI games, where money in theory could be useful. While hard-wired behavior such as indirect reciprocity could explain the use of money in the BS game, one would think that it would be helpful rather than harmful to have money circulate in a setup with a monetary equilibrium. After all, it seems that some players would respond to the strategic considerations and refuse to produce in exchange for a token in the BS setup, whereas this unravelling would not occur in the UP and RI setups.

Consider first the UP game. Production rates with money are 6.1 percentage points lower in the monetary UP treatment than in the monetary BS treatment, and 7.8 percentage points higher in the non-monetary UP treatment than in the non-monetary BS treatment. This results in a highly significant 20 percentage-point production rate increase when money is introduced in the BS model, compared to an insignificant 6 percentage-point increase in the UP model for rounds 6-10, with the round 11 difference across treatments growing to 44 percentage points in the BS game and falling to 0 in the UP game. One possibility is that the monetary BS treatment allows participants to condition their production decisions on their position in the sequence, whereas in the UP treatment participants decide whether to respond to a token offer independently of their position in the sequence.²¹ Thus, even with similar average token offer response propensities, production rates may be higher in the monetary BS treatment because participants may be more likely to respond to a token offer in the early meetings of a round than UP treatment players, who submit token response decisions independently of their position in the sequence.

¹⁹ The notion of indirect reciprocity was first proposed by Alexander (1987). Experiments, both in the laboratory and in the field, suggest that indirect reciprocity can prominently affect group outcomes. See Seinen and Schram (2006) and van Apeldoorn and Schram (2016).

²⁰ The production rate sequences disaggregated by meeting number for the monetary treatments of the UP and RI models, shown as Figures C.1 and C.2 in Appendix C, follow patterns roughly similar to those in the monetary BS treatment, albeit with some increased variability across rounds.

²¹ Of course, from a strategic perspective, this is the very feature of the UP-M game that allows a monetary equilibrium to exist.
Indeed, token offers in the first meeting of the BS treatment were 5.8 percentage points more likely to be accepted than in the UP treatment (73.2% versus 67.5%). This is consistent with participants in the monetary BS treatment conditioning their response on their place in the sequence. Moreover, in the first two meetings of the BS treatment with money, players 1 and 2 failed to offer available tokens in only 2.2% of instances (five of 225). In contrast, in the first two meetings of the monetary UP treatment, players 1 and 2 failed to offer available tokens in 10.8% of possible instances (23 of 213).

Consider now the collapse in the production response to token offers in round 11 of the RI treatment. This could actually be consistent with theory. The monetary equilibrium requires coordination on equilibria that depend on whether or not player 3 has a token to offer. This seems pretty hard to achieve in a one-shot interaction, whereas having 10 rounds to learn to coordinate seems more hopeful. However, as we already discussed, the raw data on how the Hawk-Dove game is played do not indicate that play is contingent on token offers. As a result, following the reshuffling of players and the tripling of payoffs in round 11, it is possible that players concluded that there was little benefit for player 3 to collect a token, making player 2 also less inclined to produce. If this is the case, it is still puzzling that this is the only treatment where strategic considerations drove production to zero.

7 Conclusion

Considerable progress has been made in monetary theory, from building rudimentary models designed to illustrate important principles in a succinct way, to studying more elaborate ones appropriate for empirical and policy work. There has also been a sizable body of research using experimental methods to try to evaluate the theory. However, the theory being tested is almost always an infinite-horizon model. The actual experimental design has, by necessity, some finite truncation of the original model. While the literature has employed probabilistic stopping rules to mitigate the problem, that does not solve the issue: the models that have been tested do not have any monetary equilibria.

One might argue that it is interesting to see if monetary exchange emerges in the lab when theory says monetary equilibrium does not exist. However, that seems like a disingenuous motivation, and it is certainly not the way these papers are typically written. It seems clear that the authors were more interested in checking whether monetary exchange emerges in the lab when theory says that a monetary equilibrium exists. Therefore, the fact that such an equilibrium does not exist in the games being implemented is problematic.

Our contribution is to point out that there are simple finite-horizon models in which monetary exchange can emerge as an equilibrium outcome. Therefore, we are able to test in the lab what happens when fiat money is introduced in treatments where, according to theory, money can be valued as an equilibrium outcome, and also to compare with treatments where money theoretically should have no effect. The results of our experiments are puzzling, with money being more useful in the settings where theory says there is no monetary equilibrium. An avenue for future research is to identify what is driving individuals to use money in lab experiments, if not standard notions of monetary equilibrium.
And, of course, such research should include experiments in settings that actually have a monetary equilibrium.

References

[1] Alexander, R. (1987). The Biology of Moral Systems (Foundations of Human Behavior).

[2] Aliprantis, C. D., Camera, G., and Puzzello, D. (2007a). Contagion equilibria in a monetary model. Econometrica, 75, 277-82.

[3] Aliprantis, C. D., Camera, G., and Puzzello, D. (2007b). Anonymous markets and monetary trading. Journal of Monetary Economics, 54, 1905-28.

[4] Allen, F., and Gorton, G. (1993). Churning bubbles. The Review of Economic Studies, 60(4), 813-836.

[5] Allen, F., Morris, S., and Postlewaite, A. (1993). Finite bubbles with short sale constraints and asymmetric information. Journal of Economic Theory, 61(2), 206-229.

[6] Anbarci, N., Dutu, R., and Feltovich, N. (2015). Inflation tax in the lab: A theoretical and experimental study of competitive search equilibrium with inflation. Journal of Economic Dynamics and Control, 61, 17-33.

[7] van Apeldoorn, J., and Schram, A. (2016). Indirect reciprocity; A field experiment. PLoS ONE, 11(4), e0152076.

[8] Araujo, L. (2004). Social norms and money. Journal of Monetary Economics, 51(2), 241-256.

[9] Benoit, J. P., and Krishna, V. (1985). Finitely repeated games. Econometrica, 53(4), 905-22.

[10] Berentsen, A., McBride, M., and Rocheteau, G. (2017). Limelight on dark markets: Theory and experimental evidence on liquidity and information. Journal of Economic Dynamics and Control, 75, 70-90.

[11] Bernasconi, M., and Kirchkamp, O. (2000). Why monetary policy matters? An experimental study of saving, inflation and monetary policies in an overlapping generations model. Journal of Monetary Economics, 46, 315-343.

[12] Bernheim, D. (1984). Rationalizable strategic behavior. Econometrica, 52, 1007-1028.

[13] Brown, P. M. (1996). Experimental evidence on money as a medium of exchange. Journal of Economic Dynamics and Control, 20(4), 583-600.

[14] Bó, P. D. (2005). Cooperation under the shadow of the future: Experimental evidence from infinitely repeated games. American Economic Review, 95(5), 1591-1604.

[15] Bó, P. D., and Fréchette, G. R. (2011). The evolution of cooperation in infinitely repeated games: Experimental evidence. American Economic Review, 101(1), 411-29.

[16] Bó, P. D., and Fréchette, G. R. (2011). On the determinants of cooperation in infinitely repeated games: A survey. Mimeo, Brown University and New York University.

[17] Camera, G., and Casari, M. (2014). The coordination value of monetary exchange: Experimental evidence. American Economic Journal: Microeconomics, 6(1), 290-314.

[18] Cooper, D. J., and Kuhn, K. U. (2014). Communication, renegotiation, and the scope for collusion. American Economic Journal: Microeconomics, 6(2), 247-278.

[19] Davis, D. D., and Holt, C. A. (1993). Experimental Economics. Princeton: Princeton University Press.

[20] Ding, S., and Puzzello, D. (2017). Legal restrictions and international currencies: An experimental approach. Mimeo.

[21] Duffy, J. (2001). Learning to speculate: Experiments with artificial and real agents. Journal of Economic Dynamics and Control, 25(3-4), 295-319.

[22] Duffy, J., and Ochs, J. (1999). Emergence of money as a medium of exchange: An experimental study. American Economic Review, 89(4), 847-877.

[23] Duffy, J., and Ochs, J. (2002). Intrinsically worthless objects as media of exchange: Experimental evidence. International Economic Review, 43(3), 637-673.

[24] Duffy, J., and Puzzello, D. (2014). Gift exchange versus monetary exchange: Theory and evidence. American Economic Review, 104(6), 1735-76.
[25] Duffy, J., and Puzzello, D. (2014). Experimental evidence on the essentiality and neutrality of money in a search model. In Experiments in Macroeconomics (pp. 259-311). Emerald Group Publishing Limited.

[26] Duffy, J., and Puzzello, D. (201?). Monetary policies in the lab. Mimeo.

[27] Faust, J. (1989). Supernovas in monetary theory: Does the ultimate sunspot rule out money? The American Economic Review, 79(4), 872-881.

[28] Fréchette, G. R., and Yuksel, S. (2017). Infinitely repeated games in the laboratory: Four perspectives on discounting and random termination. Experimental Economics, 20(2), 279-308.

[29] Jiang, J. H., and Zhang, C. (2017). Competing currencies in the laboratory. Mimeo, Bank of Canada.

[30] Kehoe, T. J., Kiyotaki, N., and Wright, R. (1993). More on money as a medium of exchange. Economic Theory, 3(2), 297-314.

[31] Kandori, M. (1992). Social norms and community enforcement. The Review of Economic Studies, 59(1), 63-80.

[32] Kiyotaki, N., and Wright, R. (1989). On money as a medium of exchange. Journal of Political Economy, 97(4), 927-954.

[33] Kiyotaki, N., and Wright, R. (1993). A search-theoretic approach to monetary economics. The American Economic Review, 63-77.

[34] Kocherlakota, N. R. (1998). Money is memory. Journal of Economic Theory, 81(2), 232-251.

[35] Kovenock, D., and Vries, C. G. (2002). Fiat exchange in finite economies. Economic Inquiry, 40(2), 147-157.

[36] Lagos, R., Rocheteau, G., and Wright, R. (2017). Liquidity: A new monetarist perspective. Journal of Economic Literature, 55(2), 371-440.

[37] Lagos, R., and Wright, R. (2005). A unified framework for monetary theory and policy analysis. Journal of Political Economy, 113(3), 463-484.

[38] Marimon, R., Spear, S., and Sunder, S. (1993). Expectationally-driven market volatility: An experimental study. Journal of Economic Theory, 62.

[39] Marimon, R., and Sunder, S. (1993). Indeterminacy of equilibria in a hyperinflationary world: Experimental evidence. Econometrica, 61, 1073-1107.

[40] Marimon, R., and Sunder, S. (1995). Does a constant money growth rule help stabilize inflation? Experimental evidence. Carnegie-Rochester Conference Series on Public Policy, 43, 111-156.

[41] Matsuyama, K., Kiyotaki, N., and Matsui, A. (1993). Toward a theory of international currency. The Review of Economic Studies, 60(2), 283-307.

[42] Nosal, E., and Rocheteau, G. (2011). Money, Payments, and Liquidity. MIT Press.

[43] Renero, J. M. (1998). Unstable and stable steady-states in the Kiyotaki-Wright model. Economic Theory, 11(2), 275-294.

[44] Rietz, J. (2017). Secondary currency acceptance: Experimental evidence. Mimeo.

[45] Samuelson, P. A. (1958). An exact consumption-loan model of interest with or without the social contrivance of money. Journal of Political Economy, 66(6), 467-482.

[46] Seinen, I., and Schram, A. (2006). Social status and group norms: Indirect reciprocity in a repeated helping experiment. European Economic Review, 50(3), 581-602.

[47] Tirole, J. (1985). Asset bubbles and overlapping generations. Econometrica, 1499-1528.

[48] Wallace, N. (1980). The overlapping generations model of fiat money. In Models of Monetary Economies, edited by John H. Kareken and Neil Wallace. Minneapolis: Federal Reserve Bank of Minneapolis.

[49] Wright, R. (1994). A note on sunspot equilibria in search models of fiat money. Journal of Economic Theory, 64(1), 234-241.
[50] Branch, W., and McGough, B. (2016). Heterogeneous beliefs and trading inefficiencies. Journal of Economic Theory, 163, 786-818.

Appendix

This appendix overviews a game not included in the text. It sketches a public goods game in which players can accept or pass a token in exchange for a binding commitment to contribute to a costly public good. There are three players, labeled i = 1, 2, 3. Players i = 1, 2 can invest in a public good, which we model as a binary choice, whereas player 3 makes no decisions in the non-monetary economy. We write w for the action where a player works to invest in the public good and s for the action where a player shirks and does not contribute. Payoffs are as given in

$$
\begin{array}{c|cc}
 & w & s \\ \hline
w & G - c_1,\; G - c_2,\; G & g - c_1,\; g,\; g \\
s & g,\; g - c_2,\; g & 0,\; 0,\; 0
\end{array}
\qquad (3)
$$

where rows correspond to player 1's action, columns to player 2's, and each cell lists the payoffs of players 1, 2, and 3. An interpretation is that G is the level of the public good achieved if players 1 and 2 both work to invest, g is the level achieved if only one player works, and $c_i$ is the cost of working for i = 1, 2. Usually, parameters in discrete public good games are set so as to generate a prisoner's dilemma, but this would not work for us, as the sequential structure would then be irrelevant. We therefore set parameters such that (w, w) is the efficient outcome and (s, s) is the unique Nash equilibrium, but where player 2 would have an incentive to work if player 1 could commit to working. That is, we assume that

$$G - c_1 < g < G - c_2, \qquad (4)$$

to ensure that player 1 shirks if player 2 works and that player 2 wants to work if player 1 works. Additionally, we assume that

$$g - c_2 < 0, \qquad (5)$$

which implies that player 2 shirks if player 1 shirks. Combining (4) and (5) we obtain

$$g - c_1 < g - c_2 < 0. \qquad (6)$$

Hence, always shirking is a dominant strategy for player 1 in the simultaneous-move normal form game depicted in (3). The best reply for player 2 is to work if and only if player 1 works, so the unique Nash equilibrium is (s, s). To avoid trivialities, we also assume that g > 0 and that

$$G - c_1 > 0, \qquad (7)$$

which guarantees that (w, w) Pareto dominates (s, s). The inequality in (7) also has the implication that the unique subgame perfect equilibrium in a sequential game where player 1 moves before player 2 is for player 2 to work if and only if player 1 works and for player 1 to work, which would implement the desirable outcome (w, w).

Figure 4: Sequential game
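As a quick numerical illustration of these restrictions, the sketch below checks conditions (4)-(7) and the resulting best replies for a hypothetical parameterization (G = 10, g = 2, c1 = 9, c2 = 3); these values are ours, chosen only to satisfy the inequalities, and are not taken from the paper.

```python
# A quick numerical check of conditions (4)-(7) for the appendix game, using
# hypothetical parameter values (not from the paper): G=10, g=2, c1=9, c2=3.
G, g, c1, c2 = 10.0, 2.0, 9.0, 3.0

assert G - c1 < g < G - c2        # (4): 1 free rides on 2; 2 works if 1 works
assert g - c2 < 0                 # (5): 2 shirks if 1 shirks
assert g - c1 < g - c2 < 0        # (6): implied by (4) and (5)
assert g > 0 and G - c1 > 0       # (7): (w, w) Pareto dominates (s, s)

# Payoffs (player 1, player 2, player 3) of the simultaneous-move game in (3)
payoff = {("w", "w"): (G - c1, G - c2, G),
          ("w", "s"): (g - c1, g, g),
          ("s", "w"): (g, g - c2, g),
          ("s", "s"): (0.0, 0.0, 0.0)}

def best_reply_1(a2):             # player 1's best reply to player 2's action
    return max(["w", "s"], key=lambda a1: payoff[(a1, a2)][0])

def best_reply_2(a1):             # player 2's best reply to player 1's action
    return max(["w", "s"], key=lambda a2: payoff[(a1, a2)][1])

# Shirking is dominant for player 1, and (s, s) is the unique Nash equilibrium,
# even though (w, w) gives every player a strictly higher payoff than (s, s).
assert best_reply_1("w") == "s" and best_reply_1("s") == "s"
nash = [(a1, a2) for a1 in ["w", "s"] for a2 in ["w", "s"]
        if best_reply_1(a2) == a1 and best_reply_2(a1) == a2]
print(nash)                       # -> [('s', 's')]
```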
The Nonmonetary Equilibrium

We assume that there are two pairwise meetings. In the first period, players 1 and 3 meet. In this meeting, player 1 can work to produce some public good, and player 3 does nothing in the non-monetary economy. In the second period, players 1 and 2 meet, and then it is player 2 who can work to produce some public good, whereas player 1 has no action available in the model without money. Player 2 cannot observe the action taken by player 1 in the first period, so the normal form of the game is that in (3), and the unique non-monetary equilibrium is thus for players 1 and 2 to shirk. It should be noted that adding cheap talk in stage 2 does not have an effect on the equilibrium outcome, as player 1 would always use a message that would make player 2 more likely to work, regardless of the action taken in period 1.

The Monetary Economy

We now endow player 3 with a unit of an object that no agent can derive any consumption utility from, which we refer to as money. For simplicity, we will treat money as indivisible, but the example trivially extends to the case with divisible money. What is important, however, is the timing, which allows player 3 to have some commitment power.

In the beginning of period 1, player 3 makes a take-it-or-leave-it offer, which specifies an action $a_1 \in \{w, s\}$ for player 1 and whether the unit of money is transferred from 3 to 1. Then player 1 either accepts or rejects. If the offer is accepted, both parties are bound to execute their part of the contract; otherwise the money does not change hands and (without loss of generality) player 1 takes the action s. Then, in the second period, player 1 can transfer the unit of money to player 2, if he is holding it, before player 2 takes an action $a_2 \in \{w, s\}$. Notice that player 2 can only directly observe whether player 1 holds a unit of money, and not the investment decision. The extensive form of the game is sketched in Figure 5 below:
Figure 5: Sequential game

We first note that, just like in most microfounded models of fiat money, there exist equilibria in which money is not valued. This is because if player 2 shirks regardless of whether player 1 has money or not, then player 3 may always (or never) hand the coin over to player 1. Assuming player 2 has pessimistic beliefs and thinks that player 1 shirked off the equilibrium path, player 2 is behaving sequentially rationally. Hence, the strategies described, together with such pessimistic beliefs, constitute a perfect Bayesian Nash equilibrium. In terms of the variables that the players care about, such an equilibrium replicates the non-monetary equilibrium.

There is also an equilibrium in which money is valued. Assume that:

• player 3 offers to hand over the coin if and only if player 1 works;
• player 1 accepts the offer;
• player 1 hands over the unit of money to player 2 whenever possible.

Then both information sets for player 2 are reached, and the unique beliefs consistent with Bayes' rule are that player 1 worked for sure when the coin is transferred to player 2, and that having no coin is evidence of not working. Hence, the best response by player 2, given the unique consistent beliefs corresponding to these strategies by players 1 and 3, is to also work, as g < G − c_2. Moreover, under the assumption that player 3 provides the coin if and only if player 1 works, and that player 2 works if and only if player 1 pays her to do so with the (intrinsically useless) unit of money, working is a best response for player 1. Finally, player 3 has a strict incentive to provide the money if player 1 works in return for it, and therefore has no profitable deviation. Hence, this is a perfect Bayesian equilibrium that generates an outcome that Pareto dominates the non-monetary equilibrium.

The example may appear non-robust, as it relies on the player with money having no incentive to hand over the money when player 1 shirks, which in turn requires that this player is made no better off when the public good is partially provided than when it is not provided at all. This, however, can be fixed by changing the game so that player 3 can extract some of the surplus from player 1. For example, letting the player with money make a take-it-or-leave-it offer would allow us to change preferences so that the equilibrium would survive even if all players are strictly better off when the project is partially completed.

Notice that the example is a simple illustration of money being a substitute for record-keeping, as stressed by Kocherlakota (1998) and Wallace (1980). If there were a record of player 1 working in period 1, we could just forget about money, because incentives (with or without money) would be captured by the sequential game in Figure 4.
In this example, there is a monetary equilibrium that perfectly replicates this outcome when there is no record-keeping. This is most easily understood by observing that, if money is evidence of working, the incentives for the two potential investors are the same as in the sequential game of Figure 4 (and that it is a best reply for the money holder to hand over the money if and only if player 1 works).
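To make the best-response logic above concrete, the following sketch checks the conditions behind the money-valued equilibrium for the same hypothetical parameters used earlier (G = 10, g = 2, c1 = 9, c2 = 3); again, this is only an illustration of the argument, not part of the paper.

```python
# Sketch checking the best-response conditions of the money-valued perfect Bayesian
# equilibrium described above, for hypothetical parameters G=10, g=2, c1=9, c2=3.
G, g, c1, c2 = 10.0, 2.0, 9.0, 3.0

# Player 2, believing the coin is evidence that player 1 worked:
# work when offered the coin (G - c2 vs g) and shirk when it is not (g - c2 vs 0).
assert (G - c2) > g
assert (g - c2) < 0

# Player 1: accepting the offer, working, and passing the coin yields G - c1
# (player 2 then works); rejecting means no coin, player 2 shirks, and a payoff of 0;
# keeping the coin after working yields only g - c1 (player 2 shirks).
assert (G - c1) > 0            # accept-and-work beats rejecting the offer
assert (G - c1) > (g - c1)     # passing the coin beats hoarding it

# Player 3: offering the coin in exchange for player 1 working yields G (full
# provision); never offering it leads to no production and a payoff of 0.
assert G > 0

print("Best-response conditions of the monetary equilibrium hold.")
```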