FEDERAL RESERVE BANK OF ST. LOUIS
Review, May/June 2001

The Practice of Central Bank Intervention: Looking Under the Hood

Christopher J. Neely

There has been a long and voluminous literature about official intervention in foreign exchange markets. Official intervention is generally defined as those foreign exchange transactions of monetary authorities that are designed to influence exchange rates, but it can more broadly refer to other policies for that purpose. Many papers have explored the determinants and efficacy of intervention (Edison, 1993; Sarno and Taylor, 2000), but very little attention has been paid to the more pedestrian subject of the mechanics of foreign exchange intervention: the choice of markets, the types of counterparties, the timing of intervention during the day, the purpose of secrecy, and so on. This article focuses on the latter topics by reviewing the motivation for, and the methods and mechanics of, intervention. Although there apparently has been a decline in the frequency of intervention by the major central banks, reports of a coordinated G-7 intervention to support the euro on September 22, 2000, remind us that intervention remains an active policy instrument in some circumstances. The second section of the article reviews foreign exchange intervention and describes several methods by which it can be conducted. The third section presents evidence from 22 responses to a survey on intervention practices sent to monetary authorities.

Christopher J. Neely is a senior economist with the Federal Reserve Bank of St. Louis. The author thanks the monetary authorities of Belgium, Brazil, Canada, Chile, the Czech Republic, Denmark, France, Germany, Hong Kong, Indonesia, Ireland, Italy, Japan, Mexico, New Zealand, Poland, South Korea, Spain, Sweden, Switzerland, Taiwan, and the United States for their cooperation with this study. The author also thanks Michael Melvin and Paul Weller for discussions about the survey and Hali Edison, Trish Pollard, and Lucio Sarno for helpful comments. Mrinalini Lhila provided research assistance. This article was originally published in Central Banking (November 2000, XI(2), pp. 24-37; <http://www.centralbanking.co.uk>).

TYPES OF INTERVENTION

Intervention and the Monetary Base

Studies of foreign exchange intervention generally distinguish between intervention that does or does not change the monetary base. The former type is called unsterilized intervention while the latter is referred to as sterilized intervention. When a monetary authority buys (sells) foreign exchange, its own monetary base increases (decreases) by the amount of the purchase (sale). By itself, this type of transaction would influence exchange rates in the same way as domestic open market purchases (sales) of domestic securities; however, many central banks routinely sterilize foreign exchange operations—that is, they reverse the effect of the foreign exchange operation on the domestic monetary base by buying and selling domestic bonds (Edison, 1993). The crucial distinction between sterilized and unsterilized intervention is that the former constitutes a potentially useful independent policy tool while the latter is simply another way of conducting monetary policy. For example, on June 17, 1998, the Federal Reserve Bank of New York bought $833 million worth of yen (JPY) at the direction of the U.S. Treasury and the Federal Open Market Committee. In the absence of offsetting transactions, this transaction would have increased the U.S. monetary base by $833 million, which would tend to temporarily lower interest rates and ultimately raise U.S. prices, depressing the value of the dollar.1 As is customary with U.S. intervention, however, the Federal Reserve Bank of New York also sold an appropriate amount of U.S. Treasury securities to absorb the liquidity and maintain desired conditions in the interbank loan market.
Similarly, to prevent any change in Japanese money market conditions, the Bank of Japan would also conduct appropriate transactions to offset the rise in demand for Japanese securities caused by the $833 million Federal Reserve purchase. The net effect of these transactions would be to increase the relative supply of U.S. government securities versus Japanese securities held by the public but to leave the U.S. and Japanese money supplies unchanged. Fully sterilized intervention does not directly affect prices or interest rates and so does not influence the exchange rate through these variables as ordinary monetary policy does. Rather, sterilized intervention might affect the foreign exchange market through two routes: the portfolio balance channel and the signaling channel. The portfolio balance channel theory holds that sterilized purchases of yen raise the dollar price of yen because investors must be compensated with a higher expected return to hold the relatively more numerous U.S. bonds. To produce a higher expected return, the yen price of the U.S. bonds must fall immediately. That is, the dollar price of yen must rise. In contrast, the signaling channel theory assumes that official intervention communicates information about future monetary policy or the long-run equilibrium value of the exchange rate.

1. Empirically, it has been very difficult to establish the reaction of exchange rates to changes in economic fundamentals.

Spot and Forward Markets for Intervention

The previous example implicitly assumed that the Federal Reserve Bank of New York conducted its purchase of yen in the spot market—the market for delivery in two days or less. Intervention need not be carried out in the spot market, however; it also may be carried out in the forward market.2 Forward markets are those in which currencies are sold for delivery in more than two days.
Because the forward price is linked to the spot price through covered interest parity, intervention in the forward market can influence the spot exchange rate. To understand covered interest parity, consider the options open to an American bank that has capital to be invested for one year. The bank could lend that money at the interest rate on U.S. dollar assets, earning the gross return of (1 + i_t^USD) on each dollar. Or, it could convert its funds to a foreign currency (e.g., the euro), lend that sum in the euro money market at the euro interest rate, and then convert the proceeds back to dollars at the end of the year. If, at the beginning of the contract, the bank contracts to convert the euro proceeds back to dollars, it will receive 1/F_{t,t+365} dollars for each euro, where F_{t,t+365} is the euros-per-dollar forward exchange rate. The gross return to each dollar through this second strategy is

(S_t / F_{t,t+365})(1 + i_t^euro),

where S_t is the euros-per-dollar spot exchange rate on day t. If the return to one strategy is higher than the other, market participants will invest in that strategy, driving its return down and the other return up until the strategies have approximately equal return. Covered interest parity (CIP) is the condition that the strategies have equal return:

(1)    (1 + i_t^USD) = (S_t / F_{t,t+365})(1 + i_t^euro).

As equation (1) must approximately hold at all times, intervention that changes the forward exchange rate must also change the spot exchange rate.3 For example, a forward purchase of euros that raises F_{t,t+365} must also raise S_t. Forward market interventions—the purchase or sale of foreign exchange for delivery at a future date—have the advantage that they do not require immediate cash outlay. If a central bank expects that the need for intervention will be short-lived and will be reversed, then a forward market intervention may be conducted discreetly, with no effect on foreign exchange reserves data.
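The CIP arithmetic in equation (1) can be made concrete with a small numerical sketch. The interest rates and exchange rates below are hypothetical, chosen only for illustration; the point is that, with money market rates pinned down, an intervention that raises the forward euros-per-dollar rate F must raise the spot rate S by the same proportion.

```python
# Covered interest parity: (1 + i_usd) = (S / F) * (1 + i_eur),
# where S and F are quoted as euros per dollar.
# Hypothetical one-year rates and spot quote, for illustration only.

def implied_spot(forward, i_usd, i_eur):
    """Spot rate consistent with CIP, given the forward rate and interest rates."""
    return forward * (1 + i_usd) / (1 + i_eur)

i_usd, i_eur = 0.05, 0.03   # one-year dollar and euro interest rates
spot = 1.10                 # euros per dollar today

# Forward rate implied by CIP:
forward = spot * (1 + i_eur) / (1 + i_usd)

# An intervention that raises F by 1 percent, with interest rates held
# fixed by money market conditions, requires S to rise by 1 percent too.
new_forward = forward * 1.01
new_spot = implied_spot(new_forward, i_usd, i_eur)

print(round(new_spot / spot, 6))   # ≈ 1.01: the spot rate rises proportionally
```

The same function run in reverse (solving for F given S) reproduces the quoted forward rate, which is why a forward-market intervention and a spot-market intervention are linked through the arbitrage condition rather than being independent instruments.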
For example, published reports indicate that the Bank of Thailand used forward market purchases to shore up the baht in the spring of 1997 (Moreno, 1997).4 Both the spot and forward markets may be used simultaneously. A transaction in which a currency is bought in the spot market and simultaneously sold in the forward market is known as a currency swap. While a swap itself will have little effect on the exchange rate, it can be used as part of an intervention. The Reserve Bank of Australia (RBA) has used the swaps market to sterilize spot interventions. In these transactions, the spot leg of the swap is conducted in the opposite direction to the spot market intervention, leaving the sequence equivalent to a forward market intervention. The RBA uses the spot/swap combination rather than an outright forward transaction because the former permits more flexible implementation of the intervention.

2. Exchange rate markets and practices are described in detail in the Bank for International Settlements Central Bank Survey of Foreign Exchange and Derivatives Market Activity (1999).

3. Of course, equation (1) could continue to hold with a change in i_t^USD or i_t^euro instead of F_{t,t+365}, but the interest rates are held fixed by conditions in the U.S. and euro money markets, respectively.

4. Not all spot or forward market transactions are interventions, of course. For example, to limit the costs of capital controls that made it hard to hedge foreign exchange exposure, the Reserve Bank of South Africa (RBSA) used to provide forward cover for firms with foreign currency exposure. That is, it would buy dollars forward from foreign firms with dollar accounts receivable and sell dollars to foreign firms with dollar accounts payable in the future. As capital controls have been reduced, the RBSA has reduced its net open position in the forward market.
The Options Market and Intervention

The options market has also been used by central banks for intervention. A European-style call (put) option confers the right, but not the obligation, to purchase (sell) a given quantity of the underlying asset on a given date. Usually, the option contract specifies the price for which the asset may be bought or sold, called the strike or exercise price. Monetary authorities seeking to prevent depreciation or devaluation of their currency may sell put options on the domestic currency or call options on the foreign currency.5 While the price of options has no direct effect on spot exchange rates, speculators often purchase put options instead of shorting a weak currency. The writers (sellers) of these put options attempt to hedge their position by taking a short position in the weak currency, adding to the downward pressure on its price. By writing put options on the weak currency—adding liquidity to the options market—the central bank provides dealers with a synthetic hedge; dealers need not go into the spot market to take short positions in the weak currency. This arrangement creates the same type of financial risk for the central bank—if the currency is devalued—as would the direct purchase of the weak currency in spot or forward markets. Like forward market intervention, however, it does not require the monetary authority to immediately expend foreign exchange reserves. In fact, the strategy generates revenues upon the sales of the options. The Bank of Spain reportedly used this strategy of selling put options on the peseta to fight devaluation pressures during 1993 (The Economist, 1993), though the institution denied it emphatically (The Financial Times, 1993). In another intervention strategy using options, the Banco de Mexico has employed sales of put options on the U.S. dollar to accumulate foreign exchange reserves since August 1, 1996 (Galan Medina, Duclaud Gonzalez de Castilla, and Garcia Tames, 1997).
The put options give the bearer the right to sell dollars to the Banco de Mexico at a strike price determined by the previous day's exchange rate, called the fix exchange rate. The option may be exercised only if the peso has appreciated over the last month, that is, only if the fix peso price of dollars is no higher than a 20-day moving average of previous strike prices. This restriction is designed to prevent the Banco de Mexico from having to buy dollars (sell pesos) during a period of peso depreciation. The sales of these put options may be considered foreign exchange intervention because they are designed to prevent the necessity of intervention to purchase dollar reserves that might affect the exchange rate in undesirable ways. Because the mechanism is totally passive—the public decides when to exercise the options—the use of these options effectively lessens the signaling impact of Banco de Mexico purchases of foreign exchange reserves.

Indirect Intervention

Recall that although official intervention is generally defined as foreign exchange transactions of monetary authorities that are designed to influence exchange rates, it can also refer to other (indirect) policies for that purpose. In addition to direct transactions in various instruments, Taylor (1982a, b) recounts a number of methods by which countries intervene indirectly in foreign exchange markets. For example, he reports that in the 1970s governments manipulated the currency portfolios of nationalized industries in France, Italy, Spain, and the United Kingdom to influence exchange rates. This practice was allegedly used to "disguise" intervention, as was the French and Italian practice of transacting through undisclosed foreign exchange accounts held at commercial banks. There are innumerable methods of indirectly influencing the exchange rate that do not fit in the narrow definition of intervention as foreign exchange transactions of monetary authorities designed to influence exchange rates.
These methods involve capital controls—taxes or restrictions on international transactions in assets like stocks or bonds—or exchange controls—the restriction of trade in currencies (Dooley, Mathieson, and Rojas-Suarez, 1993; Neely, 1999), rather than transactions. Sometimes such methods are substituted for more direct foreign exchange intervention, especially by the monetary authorities of countries without a long history of free capital movements. For example, Spain, Ireland, and Portugal introduced capital controls—including mandatory deposits against the holding of foreign currencies—in the exchange rate mechanism (ERM) crises of 1992-93, in response to speculation against their currencies.

5. Blejer and Schumacher (2000) discuss the implications of the use of derivatives for central banks' balance sheets.

Table 1
Responding Monetary Authorities

Belgium, Brazil, Canada, Chile, Czech Republic, Denmark, France, Germany, Hong Kong, Indonesia, Ireland, Italy, Japan, Mexico, New Zealand, Poland, South Korea, Spain, Sweden, Switzerland, Taiwan, United States

NOTE: The table identifies the 22 monetary authorities that responded to the survey.

SURVEY RESULTS

To investigate the practice of foreign exchange intervention, questionnaires on the topic were sent to the 43 institutions that participated in the Bank for International Settlements (BIS) (1999) survey of foreign exchange practices and to the European Central Bank. Of those 44 authorities, 22 responded to some or all of the questions asked. Table 1 lists the surveyed authorities that responded. A cover letter explained that the survey covered practices over 1990-2000, so that authorities that no longer intervene—or even no longer have independent currencies—could report on past practices.
The Reserve Bank of New Zealand was the only authority to report that it had not intervened in the last 10 years.6 To respect the confidentiality of the respondents, the responses of specific institutions will not be identified unless the information is clearly in the public domain. Table 2 shows statistics summarizing the distribution of the responses to survey questions on the mechanics of, motivation for, and the efficacy of intervention.

The Mechanics of Intervention

Question 1 inquires about the frequency of intervention. Of the 14 authorities that responded to the question, the percentage of business days on which they report intervening—using either sterilized or unsterilized transactions—ranged from 0.5 percent to 40 percent, with 4.5 percent being the median. While there might be some selection bias because authorities that intervene are more likely to respond to the survey, it does appear that official intervention is reasonably common in foreign exchange markets. In responding to question 2, 30 percent of authorities report that all their foreign exchange transactions change the monetary base, 30 percent that such dealings sometimes change the base, and 40 percent that they never change the base. For some issues, such as the motivation or time horizon of effectiveness, this conflation of responses about sterilized and unsterilized intervention is potentially unfortunate. When monetary authorities do intervene (question 3), they seem to have some preference for dealing with major domestic banks but will also transact with major foreign banks. This should not come as a complete surprise, as banks tend to specialize in trading their own national currencies (Melvin, 1997). Approximately half of the authorities will sometimes conduct business with other entities, such as other central banks (23.5 percent) or investment banks (25 percent); 6.3 percent will always transact with investment banks.
Intervention transactions over the last decade have almost always been conducted at least partially in spot markets, according to the answers to question 4: 95.2 percent of authorities report that their official intervention activities always include spot market transactions and another 4.8 percent sometimes include spot transactions. A total of 52.9 percent of authorities report sometimes using the forward market, perhaps in conjunction with the spot market to create a swap transaction. No authority reports always using the forward market. Finally, one authority reports having used a futures market to conduct intervention.

There is no clear pattern as to the method of dealing with counterparties (question 5). Direct dealing over the telephone is most popular, being used sometimes or always by 100 percent of authorities. Direct dealing over an electronic network is used by 43.8 percent of authorities sometimes or always. Live foreign exchange brokers are used sometimes or always by 63.2 percent of the respondents. Finally, electronic brokers such as EBS are used by 12.5 percent of the authorities. There was some evidence that responding institutions are moving increasingly toward electronic methods, along with other market participants.

6. The Reserve Bank of New Zealand's response on this point is a matter of public record. See Deputy Governor Sherwin's May 9, 2000, speech to the World Bank Treasury at <http://www.rbnz.govt.nz/speeches/0092115.html>. This link was current as of April 20, 2001. The Appendix to this article provides links to descriptions of foreign exchange markets and/or intervention policies in the Web pages of a number of monetary authorities.

Table 2
Summary of Intervention Survey Responses

1. In the last decade, on approximately what percentage of business days has your monetary authority conducted intervention?
   No. of responses: 14   Minimum: 0.5   Median: 4.5   Maximum: 40.0

                                                           No. of
                                                        responses   Never  Sometimes  Always
2. Foreign exchange intervention changes the
   domestic monetary base.                                  20       40.0     30.0     30.0
3. Intervention transactions are conducted with
   the following counterparties:
   Major domestic banks                                     21        0.0     28.6     71.4
   Major foreign banks                                      18       16.7     72.2     11.1
   Other central banks                                      17       76.5     23.5      0.0
   Investment banks                                         16       68.8     25.0      6.3
4. Intervention transactions are conducted in the
   following markets:
   Spot                                                     21        0.0      4.8     95.2
   Forward                                                  17       47.1     52.9      0.0
   Future                                                   16       93.8      6.3      0.0
   Other (please specify in margin)                         15       93.3      6.7      0.0
5. Intervention transactions are conducted by:
   Direct dealing with counterparties via telephone         20        0.0     30.0     70.0
   Direct dealing with counterparties via
   electronic communication                                 16       56.3     31.3     12.5
   Live FX brokers                                          19       36.8     52.6     10.5
   Electronic brokers (e.g., EBS, Reuters 2002)             16       87.5      0.0     12.5
6. The following strategies determine the amount
   of intervention:
   A prespecified amount is traded                          17       17.6     70.6     11.8
   Intervention size depends on market reaction
   to initial trades                                        20        5.0     55.0     40.0
7. Intervention is conducted at the following
   times of day:
   Prior to normal business hours                           16       56.3     43.8      0.0
   Morning of the business day                              21        0.0     85.7     14.3
   Afternoon of the business day                            20        0.0     90.0     10.0
   After normal business hours                              17       35.3     64.7      0.0
8. Is intervention sometimes conducted through
   indirect methods, such as changing the
   regulations regarding foreign exchange
   exposure of banks?                                       21       76.2     23.8      0.0
9. The following are factors in intervention
   decisions:
   To resist short-term trends in exchange rates            19       10.5     42.1     47.4
   To correct long-run misalignments of exchange
   rates from fundamental values                            18       33.3     44.4     22.2
   To profit from speculative trades                        17      100.0      0.0      0.0
   Other                                                    16       62.5     25.0     12.5
10. Intervention transactions are conducted
    secretly for the following reasons:
    To maximize market impact                               17       23.5     35.3     41.2
    To minimize market impact                               14       42.9     57.1      0.0
    For portfolio adjustment                                11      100.0      0.0      0.0
    Other                                                   12       75.0     16.7      8.3
11. In your opinion, how long does it take to observe the full effect of intervention on exchange rates? (18 responses)
    A few minutes: 38.9   A few hours: 22.2   One day: 0.0   A few days: 27.8
    More than a few days: 11.1   Intervention has no effect on exchange rates: 0.0

NOTE: Question 1 shows the minimum, median, and maximum responses (from 0 to 100) on the percentage of days intervention was conducted in the last decade. Questions 2 through 10 show the percentage of responses of "Never," "Sometimes," and "Always" to those questions. Question 11 shows the percentage of responses indicating that the full effects of intervention were felt at each horizon.

There has been very little research on the factors that determine the magnitude of intervention—though reaction function research touches on this issue—but the responses to question 6 indicate that the size of interventions frequently depends on market reaction to initial trades. Ninety-five percent of monetary authorities report that market reaction sometimes or always affects the size of total trades. Because the size of the intervention is endogenous to market reaction, determining the interaction of intervention and exchange rate changes at high frequencies might require careful evaluation.7

Most intervention takes place during the business day, but almost half the banks report that they sometimes intervene prior to business hours and more than half intervene after business hours.8 For example, the Reserve Bank of Australia publicly states that it is willing to intervene outside of normal business hours (Rankin, 1998). Some after-hours intervention might be in support of other authorities. About a quarter of central banks report that they always intervene in the business morning (14.3 percent) or business afternoon (10 percent), however.
In answering question 8, 23.8 percent of authorities report sometimes intervening with indirect methods. The authorities cite changing banking regulations on foreign exchange exposure and moral suasion as methods of indirect intervention. Not surprisingly, indirect methods seem to be used predominantly by central banks without a long history of free capital movements or a convertible currency.

The Motivation for Intervention

The motivation for intervention decisions has been widely researched and often discussed. Research and official pronouncements support the idea that monetary authorities with floating exchange rates most often employ intervention to resist short-run trends in exchange rates, the leaning-against-the-wind hypothesis.9 Another popular hypothesis is that intervention is used to correct medium-term "misalignments" of exchange rates away from "fundamental values." Question 9 inquires about these possibilities. The responses generally support these hypotheses, with 89.5 percent of monetary authorities sometimes or always intervening to resist short-run trends and 66.7 percent seeking to return exchange rates to "fundamental values." Some countries specified "other" reasons that might be interpreted as variations on the leaning-against-the-wind or misalignment hypotheses. Still other countries note macroeconomic goals, such as limiting exchange rate pass-through to prices, defending an exchange rate target, or accumulating reserves, as motivating intervention.

One hypothesis that has received some attention in the last few years is that profitability is a consideration in intervention. A series of papers have examined the profitability of intervention (Leahy, 1995; Sweeney, 1997; Saacke, 1999), the relationship between intervention profitability and technical analysis (Neely, 1998, 2000), and whether past profits influence intervention (Kim and Sheen, 1999).
While the early evidence (Taylor, 1982b) indicated that central banks were losing money on their intervention, the later papers have been much more supportive of the hypothesis that central banks have at least broken even on floating rate intervention, with some evidence that they have made large profits.10 The notion that profitability is a consideration in intervention decisions is uniformly rejected, however, by the survey respondents. Not one respondent to question 9 reports that profitability was ever a motivation for intervention. Despite this, private conversations with individuals involved in intervention decisions suggest that profitability is a useful gauge of their success as careful stewards of public resources. In addition, the Reserve Bank of Australia argues that profitability attests that its intervention has stabilized the exchange rate (Rankin, 1998). This RBA claim relies on Friedman's (1953) claim that stabilizing speculation is equivalent to profitable speculation. If speculators consistently buy (sell) when the asset price is below (above) its equilibrium value, they will both tend to drive the asset price toward its equilibrium value and profit from these transactions.11 The link between profitability and stabilizing speculation is tenuous, however. Salant (1974), Mayer and Taguchi (1983), and De Long, Shleifer, Summers, and Waldmann (1990) provide counterexamples.

7. The author thanks Lucio Sarno for pointing this out.

8. Fischer and Zurlinden (1999) examine the effect of intervention using high-frequency data and the time of day.

9. Indeed, the published statements of several central banks specifically cite the desire to counter trends in exchange rates as motivating intervention. See Board of Governors (1994, p. 64) or Rankin (1998).

10. Sweeney (1997, 2000) argues that risk adjustment is crucial in assessing profits or losses from official intervention.
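Friedman's argument can be illustrated with a stylized sketch. The equilibrium value and price path below are invented purely for illustration: a speculator who buys one unit whenever the price is below an assumed equilibrium value and sells one unit whenever it is above ends up with a positive profit over a path that oscillates around that value, while each trade also pushes the price toward equilibrium. The counterexamples cited above show why this benign case does not generalize.

```python
# Stylized version of Friedman's (1953) claim that stabilizing
# speculation is profitable: buy below an assumed equilibrium value,
# sell above it. The price path is hypothetical.

EQUILIBRIUM = 100.0
prices = [96.0, 103.0, 95.0, 106.0, 98.0, 104.0]  # oscillates around 100

position = 0.0   # units of the asset held
cash = 0.0
for p in prices:
    if p < EQUILIBRIUM:      # asset looks cheap: buy one unit
        position += 1.0      # (this demand also pushes the price up)
        cash -= p
    elif p > EQUILIBRIUM:    # asset looks dear: sell one unit
        position -= 1.0      # (this supply also pushes the price down)
        cash += p

# Mark any residual position to the equilibrium value.
profit = cash + position * EQUILIBRIUM
print(profit)   # prints 24.0
```

The profit is positive precisely because the speculator's trades lean against deviations from equilibrium; if the "equilibrium" belief is wrong, or if other traders' behavior depends on the speculator's (as in the counterexamples), the equivalence breaks down.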
The Role of Secrecy in Intervention

The role of secrecy in intervention is not well understood. Most monetary authorities usually choose to intervene secretly, releasing actual intervention data with a lag, if at all. Some authorities, the Swiss National Bank, for example, always publicize interventions at the time they occur. Does secret intervention maximize or minimize the impact of the transaction? If authorities intervene to convey information to markets, why do they conceal these transactions? Dominguez and Frankel (1993) recount several possibilities: When fundamentals are inconsistent with intervention, monetary authorities would prefer not to draw attention to the intervention. Or, the monetary authority might have poor credibility for sending trustworthy signals. Or, the monetary authority might wish simply to adjust the currency holdings of its portfolio. Bhattacharya and Weller (1997) provide another possible explanation for secrecy. They present a model in which small amounts of intervention reveal the authority's information to private parties, thus influencing exchange rates. This secrecy issue has not been satisfactorily resolved in the literature.

Consistent with the confusion in the academic literature, the answers to question 10 reflect disagreement among the respondents about the purpose of secrecy. More authorities report sometimes or always intervening secretly to maximize market impact (76.5 percent) than report sometimes or always intervening secretly to minimize market impact (57.1 percent). Such disagreement is significant. No central bank cites portfolio adjustment as a reason for secret intervention, contrary to the reasonable conjecture of Dominguez and Frankel (1993). Of course, the central banks might not consider transactions for portfolio adjustment to be intervention.

The Horizon of Intervention Effects

Perhaps the most important question in the literature on central bank intervention is whether central bank intervention is effective in influencing the exchange rate. For many years, the biggest hurdle to answering this question was the paucity of data. More recently, even as more data have become available, it is manifest that two barriers to answering the question remain. First, what would the exchange rate have been in the absence of intervention? Second, over what horizon should we measure the effectiveness of intervention, and how large and long-lasting an effect can be considered a success?

The academic literature has been ambivalent about the efficacy of official intervention in the foreign exchange market. The Jurgensen Report (Jurgensen, 1983) was pessimistic about the effects of intervention. Dominguez and Frankel (1993)—using then-recently released intervention data—argued that intervention can work by changing expectations of future exchange rates. Edison (1993) concluded that, while the evidence might be consistent with some short-run effect, there is no evidence for a lasting effect from intervention. Sarno and Taylor (2000), in contrast, conclude that the recent consensus of the profession is that intervention is effective through both the signaling and portfolio balance channels. Despite skepticism on the part of academics, central banks continue to intervene—though perhaps less frequently than in the past—implying that policymakers do think that intervention can be an effective tool.

Question 11 asks the monetary authorities whether intervention has an effect on exchange rates and, if so, over what horizon one might see the full effect. All of the respondents indicate that they think intervention has some effect on exchange rates.12 Most of the respondents believe in a relatively rapid response, over a few minutes (38.9 percent) or a few hours (22.2 percent). Still, a substantial minority think that intervention's full effect is seen over a few days (27.8 percent) or more (11.1 percent). The dispersion in the survey is substantial, indicating almost as much discord among central bankers as among academics.

11. Friedman (1953) was referring more generally to speculation in foreign exchange and discussed government speculation (intervention) as a special case.

12. Of course, having an effect on exchange rates at some horizon might not imply that intervention is successful.

DISCUSSION AND CONCLUSION

This article has examined the mechanics of intervention—instruments, counterparties, timing, and amounts—as well as related issues like secrecy, motivation, and the perceived efficacy of such transactions. A survey of monetary authorities' intervention practices reveals that a number of monetary authorities do intervene with some frequency in foreign exchange (mostly spot) markets. The desire to check short-run trends or correct longer-term misalignments often motivates intervention, whereas the size of intervention often depends on market reaction to initial trades. Although intervention typically takes place during business hours, most monetary authorities will also intervene outside of these hours, if necessary. And, while there is unanimous agreement that intervention does influence exchange rates, there is much disagreement about the horizon over which the full effect of this influence is felt, with estimates ranging from a few minutes to more than a few days.

The topic of the practice of official intervention is very broad. To simplify this study, issues like coordinated versus unilateral intervention, the choice of intervention currency, and distinguishing between intervention in fixed and flexible exchange rate regimes have been left for further study. Other issues that merit further consideration are the motivations for secrecy and the metric for judging the success of intervention.

REFERENCES

Bank for International Settlements.
"Central Bank Survey of Foreign Exchange and Derivatives Market Activity." May 1999.

Bhattacharya, Utpal and Weller, Paul A. "The Advantage to Hiding One's Hand: Speculation and Central Bank Intervention in the Foreign Exchange Market." Journal of Monetary Economics, July 1997, 39(2), pp. 257-77.

Blejer, Mario I. and Schumacher, Liliana. "Central Banks Use of Derivatives and Other Contingent Liabilities: Analytical Issues and Policy Implications." Working Paper No. 00-66, International Monetary Fund (Washington, DC), March 2000.

Board of Governors of the Federal Reserve System. The Federal Reserve System: Purposes and Functions. Washington, DC, 1994.

De Long, J. Bradford; Shleifer, Andrei; Summers, Laurence H. and Waldmann, Robert J. "Noise Trader Risk in Financial Markets." Journal of Political Economy, August 1990, pp. 703-38.

Dominguez, Kathryn and Frankel, Jeffrey. "Does Foreign Exchange Intervention Work?" Washington, DC: Institute for International Economics, 1993.

Dooley, Michael P.; Mathieson, Donald J. and Rojas-Suarez, Liliana. "Capital Mobility and Exchange Market Intervention in Developing Countries." Working Paper No. 6247, National Bureau of Economic Research (Cambridge, MA), October 1997.

Economist. "Spanish Peseta; Adios." 1 May 1993, p. 84.

Edison, Hali J. "The Effectiveness of Central-Bank Intervention: A Survey of the Literature After 1982." Special Papers in International Economics No. 18, Department of Economics, Princeton University, 1993.

Financial Times. "Peseta Devalued by 8% in ERM: Lisbon Realigns Escudo as Madrid Plans 1 1/2 Point Rate Cut." 14 May 1993, p. 1.

Fischer, Andreas M. and Zurlinden, Mathias. "Exchange Rate Effects of Central Bank Interventions: An Analysis of Transaction Prices." Economic Journal, October 1999, 109(458), pp. 662-76.

Friedman, Milton. Essays in Positive Economics. Chicago: University of Chicago Press, 1953, pp. 157-203.

Galan Medina, Manuel; Duclaud Gonzalez de Castilla, Javier and Garcia Tames, Alonso. "A Strategy for Accumulating Reserves Through Options to Sell Dollars." Unpublished manuscript, Banco de Mexico, 1997. <http://www.banxico.org.mx/siteBanxicoINGLES/bPoliticaMonetaria/FSpoliticaMonetaria.html>.

Jurgensen, Philippe. "Report of the Working Group on Exchange Market Intervention." Washington, DC: U.S. Department of the Treasury, March 1983.

Kim, Suk-Joong and Sheen, Jeffrey. "The Determinants of Foreign Exchange Intervention by Central Banks: Evidence from Australia." Unpublished manuscript, The University of New South Wales, December 1999.

Leahy, Michael P. "The Profitability of US Intervention in the Foreign Exchange Markets." Journal of International Money and Finance, December 1995, 14(6), pp. 823-44.

Mayer, Helmut and Taguchi, Hiroo. "Official Intervention in the Exchange Markets: Stabilising or Destabilising?" Bank for International Settlements Economic Paper 6, March 1983.

Melvin, Michael T. International Money and Finance. 5th Ed. Reading, MA: Addison Wesley, 1997.

Moreno, Ramon. "Lessons from Thailand." Federal Reserve Bank of San Francisco Economic Letter No. 97-33, 7 November 1997.

Neely, Christopher J. "Technical Analysis and the Profitability of U.S. Foreign Exchange Intervention." Federal Reserve Bank of St. Louis Review, July/August 1998, 80(4), pp. 3-17.

___________. "An Introduction to Capital Controls." Federal Reserve Bank of St. Louis Review, November/December 1999, 81(6), pp. 13-30.

___________. "The Temporal Pattern of Trading Rule Returns and Central Bank Intervention: Intervention Does Not Generate Trading Rule Profits." Working Paper 2000-18B, Federal Reserve Bank of St. Louis, August 2000.

Rankin, Bob. "The Exchange Rate and the Reserve Bank's Role in the Foreign Exchange Market." Reserve Bank of Australia, 1998. <http://www.rba.gov.au/publ/pu_teach_98_2.html>.

Saacke, Peter. "Technical Analysis and the Effectiveness of Central Bank Intervention." Unpublished manuscript, University of Hamburg, 1999.

Salant, Stephen W. "Profitable Speculation, Price Stability, and Welfare." International Finance Discussion Paper 54, Board of Governors of the Federal Reserve System, November 1974.

Sarno, Lucio and Taylor, Mark P. "Official Intervention in the Foreign Exchange Market." Unpublished manuscript, University College, Oxford, 2000.

Sweeney, Richard J. "Do Central Banks Lose on Foreign-Exchange Intervention? A Review Article." Journal of Banking and Finance, December 1997, 21(11-12), pp. 1667-84.

___________. "Does the Fed Beat the Foreign Exchange Market?" Journal of Banking and Finance, May 2000, 24(5), pp. 665-94.

Taylor, Dean. "The Mismanaged Float: Official Intervention by the Industrialized Countries," in Michael B. Connolly, ed., The International Monetary System: Choices for the Future. Westport, CT: Praeger Publishers, 1982a, pp. 49-84.

___________. "Official Intervention in the Foreign Exchange Market, or, Bet Against the Central Bank." Journal of Political Economy, April 1982b, 90(2), pp. 356-68.

Appendix

Monetary Authority Web Pages Describing Intervention or Foreign Exchange Markets

Reserve Bank of Australia: http://www.rba.gov.au/publ/pu_teach_98_2.html (Excellent descriptive page on markets and intervention.)

Central Bank of Brazil: http://www.bcb.gov.br/ingles/himf1900.shtm#destino6 (Foreign exchange policy.)

Bank of Canada: http://www.bankofcanada.ca/en/backgrounders/bge2.htm (Concise description of intervention policy.)

Bank of England: http://www.bankofengland.co.uk/factfxmkt.pdf (Description of foreign exchange markets.)

European Central Bank: http://www.ecb.int (No work directly describing intervention policy, but speeches and working papers may relate.)
German Bundesbank: http://www.bundesbank.de/en/presse/pressenotizen/pdf/mp100197.pdf (1997 description of the ECB's responsibility in foreign exchange intervention.)

Hong Kong Monetary Authority: http://www.info.gov.hk/hkma/eng/speeches/speechs/joseph/speech_030398b.htm (Speech by the CEO describing exchange rate policy.)

Bank of Japan: http://www.boj.or.jp/en/faq/faqkainy.htm (Outline of the Bank of Japan's foreign exchange intervention operations.)

Banco de Mexico: http://www.banxico.org.mx/siteBanxicoINGLES/bPoliticaMonetaria/FSpoliticaMonetaria.html (Description of monetary and exchange rate policies.)

National Bank of Poland: http://www.nbp.pl/en/publikacje/index.html (Some descriptions of intervention in annual reports.)

Monetary Authority of Singapore: http://www.mas.gov.sg/resource/index.html (See the pamphlet, Economics Explorer #1.)

Reserve Bank of South Africa: http://www.resbank.co.za/ibd/fwdcover.html (Describes South Africa's use of forward cover in somewhat technical terms.)

Swiss National Bank: http://www.snb.ch/e/publikationen/publi.html (Some intervention descriptions in annual reports.)

Bank of Thailand: http://www.bot.or.th/BOTHomepage/BankAtWork/Monetary&FXPolicies/EXPolicy/8-23-2000-Eng-i/exchange_e.htm (Very brief description of current exchange rate policy.)

Federal Reserve of the United States: http://www.federalreserve.gov/pf/pdf/frspurp.pdf (See page 64 for a very brief description of intervention policy.)

Forecasting Inflation and Growth: Do Private Forecasts Match Those of Policymakers?

William T. Gavin and Rachel J. Mandal

Generally, we value forecasts for their accuracy. In some cases, however, the forecasts themselves are interesting because of what they reveal about the forecaster. Monetary policymaker forecasts are important because they partially reveal what policymakers believe will follow from their decisions.

Forecasts of inflation and real output (whether made by Federal Reserve officials or private sector economists) contain information that is important for changing the stance of monetary policy. Market participants generally believe that Fed policymakers will change their policy stance if the economy appears to be headed in a different direction from what was expected at the time policy was adopted. Svensson (1997) and Svensson and Woodford (2000) explain why a central bank might want to target its inflation forecast. The intuition in their explanation is that policymakers should look at everything that is relevant when deciding to change the policy stance. The trouble with looking at everything is that there is so much information to process that one needs an organizing framework, such as a forecasting model. Forecasting models are developed to monitor incoming information and to weigh each piece appropriately. They range from the very largest, with over a thousand equations, to small models that are no more than simple rules of thumb. Whether using a large econometric model or a simple rule of thumb, forecasters rarely use the values that come directly from the model. Rather, they typically make judgmental adjustments before reporting the forecasts.

In this article, we examine the role of forecasts in the monetary policy process. Our focus is on the forecasts of inflation and economic growth, the main policy objectives.

William T. Gavin is a vice president and senior economist and Rachel Mandal is a research associate at the Federal Reserve Bank of St. Louis. The authors thank Dean Croushore and Dinah Maclean for helpful comments. The original version of this article, winner of the Edmund A. Mennis Contributed Paper Award for 2000, appeared in the January 2001 issue of Business Economics, the journal of the National Association for Business Economics (NABE). For more information, visit NABE's Web site at <www.nabe.com>.
Economic forecasts are important because they reflect incoming information about the current state of the economy, including the forecasters’ beliefs about monetary policy objectives. In the United States, there are no explicit numerical objectives for output and inflation. Thus, policymaker forecasts are particularly interesting because they may reveal information about long-run policy goals. Fed forecasts, unfortunately, are not readily available to the public. We show that the Blue Chip consensus forecasts, made by a group of private economists, are a good stand-in for the policymakers’ forecasts. This is important because the policymakers in the Federal Reserve, the members of the Federal Open Market Committee (FOMC), reveal their forecasts only sparingly and after policy decisions are made. First, we show how well the forecasts match. We find that the forecasts of economic growth are very similar and appear to be about equal on average. The result for inflation forecasts is more interesting. Here we see that the private sector economists generally predicted higher inflation than did Fed policymakers, especially in the 1980s. The Blue Chip economists did not believe that the FOMC would achieve and maintain such a low inflation rate in the 1980s. Since 1995, the forecasts have converged. Evidently, the FOMC has achieved some credibility with the Blue Chip economists. When researchers want to know the history of policymakers’ forecasts, they typically go to the Fed’s briefing documents to extract the forecasts of the research staff at the Board of Governors. We show that the Blue Chip forecasts for output are as good a proxy for Fed policymakers’ views as are the research staff forecasts. In the case of inflation, the results vary with the time horizon. Generally, the Blue Chip consensus forecasts for inflation match the policymakers’ forecasts at shorter horizons while the research staff forecasts are closer at the longest horizon. 
Finally, we examine the use of alternative forecasts in a version of the Taylor rule, a popular characterization of monetary policy actions. It is popular because it is a simple summary of a complicated policy process. It is expressed as:

(1)  FF_t^A = r^e + π_{t-1} + 0.5(π_{t-1} − π^T) + 0.5(y_{t-1} − y_{t-1}^F),

where FF_t^A is the federal funds rate target chosen by the FOMC, r^e is the long-run equilibrium real interest rate (assumed by Taylor to be equal to 2 percent per year), π_{t-1} is the average inflation rate observed over the previous four quarters, π^T is the inflation target (which Taylor assumed to be equal to 2 percent per year), y_{t-1} is last period's real gross domestic product (GDP) measured in logarithms, and y_{t-1}^F is last period's potential real GDP measured in logarithms. The term in parentheses, (y_{t-1} − y_{t-1}^F), is approximately equal to the percentage deviation of GDP from the perceived level of potential GDP. This backward-looking rule prescribes settings for the federal funds rate, the Fed's short-term policy instrument, according to the deviation of the past year's inflation from a 2 percent target and the deviation of last period's GDP from a measure of potential GDP.

We begin by showing that historical analysis of the Taylor rule should use real-time data; that is, data that were available when the federal funds rate target was being set. We show that the forward-looking rule based on policymaker forecasts is virtually identical to one based on Blue Chip consensus forecasts. Neither does quite as well as the backward-looking rule using real-time data; however, all three versions of the Taylor rule do much better at explaining historical movement in the federal funds rate than do rules based on the current revised data.
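As a concrete illustration, equation (1) can be computed in a few lines of Python. This is a minimal sketch; the function name and the example inputs are ours (not taken from the article's data), and all rates are in percent.

```python
def taylor_rule(pi_lag, gap_lag, r_eq=2.0, pi_target=2.0):
    """Backward-looking Taylor rule of equation (1), all values in percent.

    pi_lag    -- average inflation over the previous four quarters
    gap_lag   -- last period's output gap, y - y^F (log deviation times 100)
    r_eq      -- long-run equilibrium real rate (Taylor assumed 2 percent)
    pi_target -- inflation target (Taylor assumed 2 percent)
    """
    return r_eq + pi_lag + 0.5 * (pi_lag - pi_target) + 0.5 * gap_lag

# Illustrative inputs: 3 percent lagged inflation, 1 percent positive output gap
print(taylor_rule(3.0, 1.0))  # 6.0
```

With inflation at target and a closed output gap, the rule returns r_eq plus the target (4.0 under Taylor's assumptions), the neutral nominal rate.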
Because purely forward-looking rules may be inherently unstable, we also examine a combination rule that includes both lagged values of inflation and the output gap using real-time data and the Blue Chip forecasts of the current-year inflation and output gap.1 This rule with both backward- and forward-looking elements matches the actual federal funds rate slightly better than the rule based on real-time data.

FOMC AND BLUE CHIP FORECASTS

FOMC members prepare forecasts for Congressional testimony twice a year.2 This testimony was mandated by the Full Employment and Balanced Growth Act of 1978. Section 108 of this act explicitly required the Fed to submit "written reports setting forth (1) a review and analysis of recent developments affecting economic trends in the nation; (2) the objectives and plans … with respect to the monetary and credit aggregates …; and (3) the relationship of the aforesaid objectives and plans to the short-term goals set forth in the most recent Economic Report of the President …" In order to satisfy the third item, the Federal Reserve Chairman began reporting a summary of Fed policymakers' forecasts to Congress in July 1979. Since then, similar summaries of forecasts have been reported every February and July.3 Forecasts are made of annual, fourth-quarter-over-fourth-quarter growth rates for nominal GDP, real GDP, and inflation.4 Fed policymakers also forecast the average level of unemployment for the fourth quarter of the year. In February, the forecasts pertain to the current calendar year (referred to below as the 12-month-ahead forecast). In July, forecasts are updated for the current calendar year (6-month-ahead forecasts) and preliminary projections are made for the next calendar year (18-month-ahead forecasts). We focus on the forecasts of real output growth and inflation because they best capture monetary policy objectives.
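Because forecasts of both nominal and real output growth are reported, an implied inflation (output deflator) forecast can always be backed out: deflator growth is approximately nominal growth minus real growth, and exactly their ratio. A minimal sketch with invented numbers (the function name and inputs are ours, not the article's):

```python
def implied_deflator_growth(nominal_growth, real_growth):
    """Back out implied output-deflator inflation (percent) from forecasts of
    nominal and real output growth (percent), using the exact decomposition
    (1 + nominal) = (1 + real) * (1 + deflator)."""
    return ((1 + nominal_growth / 100) / (1 + real_growth / 100) - 1) * 100

# Illustrative: 5.1 percent nominal growth and 3.0 percent real growth
print(round(implied_deflator_growth(5.1, 3.0), 2))  # 2.04
```

The simple difference (5.1 − 3.0 = 2.1) is a close approximation; the ratio form removes the small cross-term.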
We use the output price deflator as the measure of inflation primarily because it has been consistently forecasted throughout the entire period. Even when the Fed was reporting the forecast for inflation based on the consumer price index (from 1989 through 1999), there was also a forecast for both nominal and real output, so there was always an implied forecast for the output deflator.

Individual Federal Reserve officials submit their economic forecasts based on their judgment about the appropriate policy to be followed over the coming year. These individual projections may be revised after the FOMC adopts a specific policy. The revised projections are then reported as a range, listing the high and low values for each item, and as a central tendency that omits extreme forecasts and is meant to be a better representation of

1 See Woodford (2000) for a summary of the argument that purely forward-looking rules may lead to instability.

2 The FOMC is the policymaking committee of the Federal Reserve System. When the Board is full, the Committee consists of the 7 governors of the Board, the president of the Federal Reserve Bank of New York, and 4 of the remaining 11 Federal Reserve Bank presidents, who serve on a rotating basis. All 12 presidents attend every meeting, contribute to the discussion, and provide forecasts that are summarized in testimony to the Congress. The Green Book is a briefing document with macroeconomic forecasts prepared by staff economists at the Board of Governors about three workdays before each FOMC meeting.

3 This reporting requirement has now expired, but the Fed provided forecasts to Congress on July 20, 2000, and February 13, 2001. These data are not included in this study.

4 The Fed switched from GNP to GDP in 1992.
the consensus view. In this article, we define the consensus FOMC forecast as the midpoint of this central tendency range.

[Figure 1: Output Forecasts (1983 to 1994). Scatter plot of Blue Chip and Green Book output forecasts against the midpoint of the FOMC central tendency, with a 45-degree line.]

[Figure 2: Inflation Forecasts (1983 to 1994). Scatter plot of Blue Chip and Green Book inflation forecasts against the midpoint of the FOMC central tendency, with a 45-degree line.]

The Blue Chip consensus forecasts are taken from the February and July reports. These forecasts are collected on the first three working days of the month, and the information available to private sector economists is approximately the same as the information available to the FOMC members when they make their forecasts. Most importantly, both groups usually had the latest information on the price indexes from the Bureau of Labor Statistics and the most recent report on actual GDP from the Bureau of Economic Analysis.

Figure 1 is a scatter diagram with triangles showing the relation between the consensus GDP growth forecasts made by the FOMC between 1983 and 1994 and those made by the Blue Chip economists during the same period. We start in 1983 because that is when the Federal Reserve first began to report the central tendency of the forecasts. It was also the first year that they reported forecasts for all the participants: FOMC members and nonvoting Federal Reserve Bank presidents.5 If the FOMC and Blue Chip forecasts were exactly the same, they would lie on the 45-degree line shown. As Figure 1 shows, the forecasts were quite similar and seem to be distributed evenly above and below the 45-degree line. That is, there does not seem to be any tendency for the Blue Chip economists to systematically forecast more or less output growth than the FOMC. The same cannot be said of the inflation forecasts. The triangles in Figure 2, where most of the points lie above the 45-degree line, show that the Blue Chip economists usually forecasted higher inflation than did the FOMC.
The period from 1983 to the present has been a period of moderate and falling inflation. Throughout, the Federal Reserve has had a goal of eliminating inflation. In general, the FOMC's forecasts of inflation have been lower than the Blue Chip forecasts. However, as inflation fell in the 1990s, the forecasts converged, indicating that the private sector has gained confidence in the Fed's ability to deliver low inflation. So, although the Blue Chip inflation forecasts have not always been unbiased indicators of the FOMC's inflation forecasts, they have been better in recent years.

GREEN BOOK FORECASTS

The Green Book forecast is put together by a large staff of economists at the Board of Governors in Washington, D.C. It is prepared for the FOMC members, who read it in advance of the meetings and receive an oral presentation of this forecast at the meeting. These forecasts are only available to the public five years after they are made. Romer and Romer (2000) compare the Green Book forecasts with private sector forecasts using quarterly data from 1965 through 1991 and forecasts over several horizons (usually from forecasts of the current quarter out to seven quarters ahead). They present convincing evidence that the Green Book inflation forecasts have been more accurate than the private forecasts, including the Blue Chip consensus (for the period from 1980 to 1991).

5 In July 1979, the Fed reported a range of Board member forecasts (governors only). From 1980 through 1982, the Fed reported a range of forecasts for FOMC members.

Table 1
Blue Chip Versus Green Book as a Proxy for FOMC Forecasts (1983 to 1994)

                     RMSE of output forecast     RMSE of inflation forecast
                     Blue Chip    Green Book     Blue Chip    Green Book
All 3 horizons         0.22         0.36           0.32         0.38
6-Month horizon        0.17         0.35           0.21         0.25
12-Month horizon       0.25         0.32           0.32         0.38
18-Month horizon       0.24         0.40           0.40         0.47

NOTE: Bold typeface indicates a better proxy for the midpoint of the FOMC tendency.
They also report that the Green Book forecasts of output were better than private sector forecasts, but the evidence for output forecasts is weaker. The Green Book forecasts from 1983 through 1994 are depicted as circles in Figures 1 and 2. Casual observation suggests that the Green Book forecasts and the Blue Chip consensus represent the policymakers' consensus equally well. These scatter diagrams combine forecasts across the three horizons of 6, 12, and 18 months ahead.

Table 1 gives more detailed information about how well the Blue Chip consensus and the Green Book forecast match the FOMC consensus. Results are reported for the combined forecasts (combined over the three forecasting horizons) and for the three separate horizons. The forecast error in Table 1 is defined as the difference between the alternative forecast (Blue Chip consensus or Green Book) and the midpoint of the FOMC central tendency forecast. We report root-mean-squared errors (RMSE) for both inflation and output forecasts.

The results are interesting. On average, the differences in errors between the Green Book and Blue Chip are larger for the real output forecasts than they are for the inflation forecasts. For both real output and inflation, the Blue Chip consensus is closer to the FOMC forecast than is the Green Book. For the first 12 years after the FOMC began reporting the central tendency, the Blue Chip forecast has provided a good measure of the FOMC's view of the future, at least as good as one would get by knowing the Green Book forecast.

RELATIVE ACCURACY

1983 Through 1994

Table 2 reports the relative accuracy of real output forecasts to the real-time data from 1983 through 1994. For the separate and combined horizons, we compare the individual forecasts to the value that was first reported by the Bureau of Economic Analysis.6 The Blue Chip forecasts are best (lowest RMSE) for the 12- and 18-month horizons.
The FOMC's forecasts have the lowest RMSE at the 6-month horizon. In none of these cases are the Green Book forecasts of real output best.7

The Green Book fares better, however, for inflation forecasts from 1983 through 1994, as shown in Table 3. Earlier, we saw that the Blue Chip inflation forecasts were generally above the FOMC's forecasts in the 1980s. Here we see that all three forecasts, on average, predicted higher than actual inflation, with the FOMC forecasts sandwiched between the Blue Chip forecasts on the high end and the more accurate Green Book forecasts on the low end.

6 We used the vintage data sets from the Federal Reserve Bank of Philadelphia described in Croushore and Stark (1999).

7 This is surprising given the conclusions in Romer and Romer (2000). They examined an earlier and longer sample with more frequent forecasts over more horizons. We examine only those dates and forecast horizons for which the central tendency of FOMC members' forecasts were reported to Congress.

Table 2
Accuracy of Output Forecasts (1983 to 1994)

                              Mean error                            RMSE
                   Blue Chip   FOMC members   Green Book   Blue Chip   FOMC members   Green Book
All 3 horizons       0.04         0.06          –0.06        0.94         0.96           1.05
6-Month horizon      0.02         0.05          –0.02        0.76         0.74           0.80
12-Month horizon    –0.11        –0.08          –0.15        1.05         1.11           1.23
18-Month horizon     0.22         0.22          –0.02        0.99         1.00           1.06

NOTE: Best forecast indicated by bold typeface.

Table 3
Accuracy of Inflation Forecasts (1983 to 1994)

                              Mean error                            RMSE
                   Blue Chip   FOMC members   Green Book   Blue Chip   FOMC members   Green Book
All 3 horizons       0.69         0.46           0.35        0.92         0.80           0.65
6-Month horizon      0.45         0.33           0.21        0.64         0.55           0.36
12-Month horizon     0.60         0.41           0.26        0.79         0.74           0.61
18-Month horizon     1.01         0.65           0.57        1.23         1.05           0.88

NOTE: Best forecast indicated by bold typeface.

1995 Through 1999

Table 4 examines the accuracy of the Blue Chip and FOMC real output forecasts from 1995 through 1999.
Again, we report results based on the combined data sets and also separately for each forecast horizon. For these five years, both the Blue Chip and the FOMC policymakers' forecasts for real output growth were about 1 percent below actual. The large bias in the mean error reflects the ongoing surprise about the strength of economic growth and upward revisions to estimates of the underlying trend. We find that in the last five years, on average, the FOMC has been more accurate, as measured by the RMSE, than the Blue Chip at all forecast horizons.

We saw in Figure 2 that the FOMC and Blue Chip forecasts converged as inflation came down in the 1990s. Table 5 looks at the accuracy of the Blue Chip and FOMC inflation forecasts over the last five years of the sample. Both the FOMC and Blue Chip forecasts predicted higher than actual inflation from 1995 through 1999. The FOMC inflation forecasts have been slightly more accurate than the Blue Chip forecast for all three forecast horizons.

Although the FOMC forecasts were more accurate than the Blue Chip forecasts, the forecasts were not far apart. On average for all three horizons, the Blue Chip consensus for GDP growth was a tenth of a percentage point below the FOMC's, and the Blue Chip consensus for inflation was one-tenth of a percentage point higher than the FOMC's. The five years reported in Tables 4 and 5, 1995 through 1999, have been characterized by surprisingly high real GDP growth and surprisingly low inflation, as is seen by the negative mean errors for output growth and the positive mean errors for inflation.
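The accuracy statistics in Tables 2 through 5 are standard summaries of the forecast errors, defined as forecast minus realized (first-reported) value: the mean error measures bias, and the RMSE measures overall accuracy. A minimal sketch with made-up numbers, not the article's data:

```python
import math

def mean_error(forecasts, actuals):
    """Average of (forecast - actual); nonzero values indicate bias."""
    errors = [f - a for f, a in zip(forecasts, actuals)]
    return sum(errors) / len(errors)

def rmse(forecasts, actuals):
    """Root-mean-squared error of (forecast - actual)."""
    errors = [f - a for f, a in zip(forecasts, actuals)]
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Hypothetical growth forecasts vs. first-reported outcomes, in percent
forecast = [2.5, 3.0, 2.0, 2.8]
actual = [3.0, 3.5, 2.5, 2.3]
print(round(mean_error(forecast, actual), 2))  # -0.25 (forecasts too low on average)
print(round(rmse(forecast, actual), 2))        # 0.5
```

A mean error near zero with a large RMSE indicates unbiased but noisy forecasts; the negative output mean errors in Table 4 correspond to persistent underprediction of growth.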
Table 4
Accuracy of Output Forecasts (1995 to 1999)

                              Mean error                            RMSE
                   Blue Chip   FOMC members   Green Book   Blue Chip   FOMC members   Green Book
All 3 horizons      –1.13        –1.02           NA          1.46         1.35           NA
6-Month horizon     –0.52        –0.53           NA          0.81         0.73           NA
12-Month horizon    –1.26        –1.01           NA          1.67         1.50           NA
18-Month horizon    –1.73        –1.65           NA          1.78         1.71           NA

NOTE: Best forecast indicated by bold typeface.

Table 5
Accuracy of Inflation Forecasts (1995 to 1999)

                              Mean error                            RMSE
                   Blue Chip   FOMC members   Green Book   Blue Chip   FOMC members   Green Book
All 3 horizons       0.59         0.48           NA          0.72         0.64           NA
6-Month horizon      0.36         0.29           NA          0.43         0.39           NA
12-Month horizon     0.52         0.37           NA          0.64         0.50           NA
18-Month horizon     0.98         0.86           NA          1.03         0.96           NA

NOTE: Best forecast indicated by bold typeface.

USING FORECASTS IN TAYLOR-TYPE RULES

In this section we use a simple policymaking framework to see whether the differences between the Blue Chip and FOMC forecasts are economically significant. Taylor (1993) proposed characterizing past Fed policy as if it were made according to a formula similar to equation (1), which has come to be known as the Taylor rule.8 Rotemberg and Woodford (1999) show that a rule of this form can be derived as an optimal policy under certain conditions. Clarida, Gali, and Gertler (1999) show that a rule of this type can be optimal in a dynamic, forward-looking IS/LM model in which the central bank's loss function is quadratic in deviations of inflation from target and output from potential. Even if the central bank cares only about the inflation objective, the nominal interest rate target may be set as a function of the state of the economy.
If the real interest rate is procyclical, adjusting the federal funds rate target for changes in the gap between potential and actual GDP may be a method for taking into account the cyclical deviation of the real interest rate from the long-run equilibrium value.9 While clearly not advocating that any central bank follow any such simple rule slavishly, Taylor recommended his rule as a reference point in debates about whether a policy change might be needed. Indeed, that has happened, as many central banks now regularly monitor variations of the original Taylor rule.

Figure 3 shows the quarterly average federal funds rate and our calculation of the federal funds rate target implied by the Taylor rule for the period from 1983 to 1999. We begin by showing the federal funds rate target implied by equation (1).10 As Figure 3 shows, the rule does not do particularly well during the periods before 1990 or after 1994. Table 6 shows that the federal funds rate target, predicted by using current revised data, is, on average, 166 basis points below the actual federal funds rate.

8 In his 1993 paper, Taylor used current year values for the GDP gap and inflation. Since current year data are unknown at the time policy is made, we have used lagged values.

9 For recent evidence suggesting that the real interest rate is procyclical, see Dotsey and Scholl (2000).

10 Note that the usefulness of the Taylor rule has been questioned by many researchers, including recent articles by Hetzel (2000), Kozicki (1999), McCallum (1999), and Orphanides (1998).
[Figure 3: Taylor Rules: Current Versus Real-Time Data. The actual federal funds rate plotted against the Taylor rule targets computed with real-time data and with current vintage data, 1983 to 1999.]

[Figure 4: Taylor Rules: Blue Chip Versus FOMC Forecasts (Semi-Annual Data). The actual federal funds rate plotted against the forward-looking Taylor rule targets based on Blue Chip and Fed policymaker forecasts, 1983 to 1999.]

Figure 3 also includes the Taylor rule for the federal funds rate target using real-time data for GDP and inflation and a forecast for potential GDP from a recursive model that fits a quadratic time trend to the real-time data. As the figure shows, there is an important difference in the target calculated for the federal funds rate when we use the real-time data. Contrary to the case using currently available revised data, the real-time Taylor rule generally lies above the actual federal funds rate. The right-most column in Table 6 shows that the average deviation was 34 basis points. These results show that ex post policy rules based on revised data may do a poor job of replicating actual policy choices.

Figure 4 includes two versions of a forward-looking Taylor rule where we modify Taylor's general specification by replacing the backward-looking measures of inflation and output with FOMC and Blue Chip forecasts for the calendar year. The modified Taylor rule used is

(2)  FF_t^B = r^e + π_t^e + 0.5(π_t^e − π^T) + 0.5(y_t^e − y_t^F),

where π_t^e is the forecast of fourth-quarter-over-fourth-quarter inflation for the current year and (y_t^e − y_t^F) is the output gap expected for the current year. We use the real-time data and our quadratic time trend to predict potential GDP in the fourth quarter of each year. We construct a fourth-quarter forecast of the level of GDP using the actual real-time value of the previous fourth-quarter level of GDP and the fourth-quarter-over-fourth-quarter forecast of GDP for the current year.
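Equation (2) keeps the coefficients of equation (1) but swaps the lagged terms for current-year forecasts, and the combination rule the article goes on to examine is simply the average of the two targets. A sketch with invented, illustrative inputs (all rates in percent; function names are ours):

```python
def taylor_backward(pi_lag, gap_lag, r_eq=2.0, pi_target=2.0):
    """Equation (1): backward-looking rule on lagged inflation and output gap."""
    return r_eq + pi_lag + 0.5 * (pi_lag - pi_target) + 0.5 * gap_lag

def taylor_forward(pi_exp, gap_exp, r_eq=2.0, pi_target=2.0):
    """Equation (2): same coefficients, current-year forecasts instead of lags."""
    return r_eq + pi_exp + 0.5 * (pi_exp - pi_target) + 0.5 * gap_exp

def combination_rule(pi_lag, gap_lag, pi_exp, gap_exp):
    """Average of the backward- and forward-looking targets."""
    return 0.5 * (taylor_backward(pi_lag, gap_lag) + taylor_forward(pi_exp, gap_exp))

# Illustrative: 3 percent lagged inflation with a closed lagged gap;
# forecasts of 2.5 percent inflation and a 1 percent gap for the current year
print(taylor_forward(2.5, 1.0))                  # 5.25
print(combination_rule(3.0, 0.0, 2.5, 1.0))      # 5.375
```

Averaging the two targets is one simple way to mix backward- and forward-looking elements; it keeps the same long-run properties because both components share r_eq and the inflation target.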
Whether we use forecasts from the FOMC or Blue Chip, the implications for the federal funds rate target are almost identical. In Table 6, the RMSE between the actual federal funds rate and the target predicted by the alternative Taylor rules are given along a diagonal in parentheses. For this period, using these forecasts, the backward-looking rule using real-time data predicts the actual federal funds rate slightly more accurately than do the forward-looking rules. The forward-looking version using the Blue Chip consensus forecasts is more accurate than the version using FOMC forecasts. However, the mean error for the FOMC version is closest to zero. As we saw in Figure 4, the Blue Chip and FOMC versions of the Taylor rule seem to move in tandem. The correlation between these versions of the Taylor rule is 0.99.

Table 6
Alternative Versions of the Taylor Rule

                           Actual FF   Current   Real-time   Blue Chip   FOMC        Combination:    Mean
                           rate        revised   data        forecast    members'    real time and   error
                                       data                              forecasts   Blue Chip
Current revised data         0.73      (1.42)                                                        –1.66
Real-time data               0.87       0.82     (1.04)                                               0.34
Blue Chip forecast           0.84       0.67      0.92       (1.16)                                   0.15
FOMC members' forecasts      0.82       0.67      0.91        0.99       (1.23)                      –0.03
Combination: real time
and Blue Chip                0.88       0.76      0.98        0.98        0.97       (1.02)           0.24

NOTE: Correlations among the alternative predictions of the Taylor rule and the actual federal funds rate are shown in bold. RMSE are shown in parentheses (Taylor rule minus actual federal funds rate). Right column shows the mean error for each version of the Taylor rule.

Bernanke and Woodford (1997) have argued that purely forward-looking Taylor rules may not be practical. Chari (1997) explains simply,

Suppose, for instance, that the central bank wants to stabilize inflation rates and private forecasters have information that is not available to the central bank about future inflation. The central bank could use private forecasts of inflation to choose its policy instrument.
The problem is that if the central bank is completely effective in using its policy instrument to stabilize inflation, private forecasts of inflation should rationally be the central bank's inflation target, in which case private forecasts provide no information about inflation! This paradox arises because market forecasts of a goal variable depend upon the central bank's policy rule, and if the central bank used the information well, market forecasts will not be informative. (p. 685)

Woodford (2000) recommends policies that include both backward- and forward-looking elements. We create a combination rule that uses both the lagged values of inflation and the output gap as well as the Blue Chip forecasts for the current year. It is equivalent to taking an average of the real-time Taylor rule (FF_t^A) and the forward-looking rule using Blue Chip forecasts (FF_t^B). The results for this combination rule are given in the bottom row of Table 6. The federal funds rate target that comes out of this rule has the highest correlation with the actual federal funds rate (0.88) and the lowest RMSE (1.02) of all the rules that we considered.

CONCLUSION

We have found that the Blue Chip consensus appears to have been closely matched to the midpoint of the FOMC's central tendency forecasts. During the period from the beginning of 1983 through the summer of 1994, the Blue Chip forecasts for output were not only more closely related to the FOMC's output forecasts, but they were slightly more accurate than the forecasts in the Green Book. The Green Book forecasts of inflation were much more accurate than were the Blue Chip's during the period between 1983 and 1994. Nevertheless, the Blue Chip forecasts were still as closely related to the FOMC forecasts as were the Green Book forecasts. In the period since 1994, the FOMC consensus has been more accurate than the Blue Chip consensus for both inflation and output, but not by much.
During the period from 1995 through 1999, inflation has been lower than expected while the real economy has been unexpectedly strong. For the entire period, the differences between the Blue Chip consensus forecasts and the midpoint of the central tendency are not statistically or economically relevant for the policymaking process, at least not as that process has been characterized by Taylor (1993).

We should not be surprised to learn that the Blue Chip forecasts of inflation and output are highly correlated with FOMC forecasts. Both the FOMC members and the economists who contribute to the Blue Chip consensus observe the same statistical releases and use similar economic theories to interpret the data.

REFERENCES

Bernanke, Ben S. and Woodford, Michael. "Inflation Forecasts and Monetary Policy." Journal of Money, Credit and Banking, November 1997, 29(4, Part II), pp. 653-84.

Chari, V.V. "Comment on Inflation Forecasts and Monetary Policy." Journal of Money, Credit and Banking, November 1997, 29(4, Part II), pp. 685-86.

Clarida, Richard; Gali, Jordi and Gertler, Mark. "The Science of Monetary Policy: A New Keynesian Perspective." Journal of Economic Literature, December 1999, 37(4), pp. 1661-707.

Croushore, Dean and Stark, Tom. "A Real-Time Data Set for Macroeconomists." Working Paper 99-4, Federal Reserve Bank of Philadelphia, June 1999.

Dotsey, Michael and Scholl, Brian. "The Behavior of the Real Rate of Interest Over the Business Cycle." Unpublished manuscript, Federal Reserve Bank of Richmond, 27 February 2000.

Hetzel, Robert L. "A Critical Appraisal of the Taylor Rule." Unpublished manuscript, Federal Reserve Bank of Richmond, 11 February 2000.

Kozicki, Sharon. "How Useful Are Taylor Rules for Monetary Policy?" Federal Reserve Bank of Kansas City Economic Review, Second Quarter 1999, 84(2), pp. 5-33.

McCallum, Bennett T. "Recent Developments in the Analysis of Monetary Policy Rules." Federal Reserve Bank of St. Louis Review, November/December 1999, 81(6), pp. 3-12.

Orphanides, Athanasios. "Monetary Policy Rules Based on Real-Time Data." Finance and Economics Discussion Series No. 1998-3, Board of Governors of the Federal Reserve System, January 1998.

Romer, Christina D. and Romer, David H. "Federal Reserve Private Information and the Behavior of Interest Rates." American Economic Review, June 2000, 90(3), pp. 429-57.

Rotemberg, Julio J. and Woodford, Michael. "Interest Rate Rules in an Estimated Sticky Price Model," in John B. Taylor, ed., Monetary Policy Rules. Chicago: The University of Chicago Press, 1999, pp. 57-119.

Svensson, Lars E.O. "Inflation Forecast Targeting: Implementing and Monitoring Inflation Targets." European Economic Review, June 1997, 41(6), pp. 1111-46.

Svensson, Lars E.O. and Woodford, Michael. "Indicator Variables for Optimal Policy." Presented at Structural Change and Monetary Policy Conference, Federal Reserve Bank of San Francisco, 2000.

Taylor, John B. "Discretion Versus Policy Rules in Practice." Carnegie-Rochester Conference Series on Public Policy, December 1993, 39, pp. 195-214.

Woodford, Michael. "Pitfalls of Forward-Looking Monetary Policy." American Economic Review, May 2000, 90(2), pp. 100-4.

Toward a New Paradigm in Open Economy Modeling: Where Do We Stand?

Lucio Sarno

In the last few decades, there have been a number of important developments, both theoretical and empirical, in open economy macroeconomics and exchange rate economics (see, for example, Sarno and Taylor, 2001a, b). Also, the increasing availability of high-quality macroeconomic and financial data has stimulated a large amount of empirical work. While our understanding of exchange rates has improved as a result, many challenges and questions remain. This paper selectively surveys the recent literature on "new" open economy macroeconomics.
This literature, stimulated by the work of Obstfeld and Rogoff (hereafter OR) (1995), reflects the attempt by researchers to formalize exchange rate determination in the context of dynamic general equilibrium models with explicit microfoundations, nominal rigidities, and imperfect competition.1 The main objective of this research program is to develop a new workhorse model for open economy macroeconomic analysis. Relative to the still ubiquitous Mundell-Fleming-Dornbusch (MFD) model (Mundell, 1962, 1963; Fleming, 1962; Dornbusch, 1976), new open economy models offer a higher standard of analytical rigor coming from fully specified microfoundations; they offer the ability to perform welfare analysis and rigorously discuss policy evaluation in the context of a framework that allows for market imperfections and nominal rigidities. On the other hand, the main virtue of the MFD model is its simpler analytical structure, which makes it easy to discuss in policy circles.

Lucio Sarno is a reader in economics and finance at the Warwick Business School, University of Warwick, and a research affiliate of the Centre for Economic Policy Research, London. This paper was written in part while the author was a visiting scholar at the Federal Reserve Bank of St. Louis. The author thanks the United Kingdom Economic and Social Research Council (ESRC) for providing financial support (grant No. L138251044) and Gaetano Antinolfi, James Bullard, Giancarlo Corsetti, Brian Doyle, Fabio Ghironi, Peter Ireland, Marcus Miller, Chris Neely, Michael Pakko, Neil Rankin, Mark Taylor, and Dan Thornton for constructive comments. Paige Skiba provided research assistance. The views expressed are those of the author and should not be interpreted as reflecting those of any institution.
Because the predictions of new open economy models are sensitive to the particular specification of the microfoundations, policy evaluation and welfare analysis depend on the specification of preferences and nominal rigidities. In turn, this generates a need for the profession to agree on the "correct" or at least "preferable" specification of the microfoundations. The present paper reviews the key contributions to new open economy macroeconomics in the last five to six years, also assessing how the intellectual debate stimulated by OR has led to models that reflect reality more satisfactorily over time. The paper also discusses some of the most controversial issues that still prevent any of the models in this area from emerging as a new paradigm for open economy modeling and describes the directions taken by the latest literature.

The remainder of the paper is set out as follows. The first section provides a review of the seminal paper in this literature, which proposed the so-called redux model, while the second section covers a number of variants and generalizations of the redux model that allow for alternative nominal rigidities, pricing to market, alternative preference specifications, and alternative financial market structures. In the third section, I discuss some stochastic extensions of these models, focusing on their implications for the relationship between uncertainty and exchange rates. Some new directions taken by the latest literature on stochastic open economy modeling are described in the fourth section. A final section presents some concluding remarks.

THE REDUX MODEL

The Baseline Model

OR (1995) is the study often considered as having initiated the literature on new open economy macroeconomics (see, for example, Lane, 1999, and Corsetti and Pesenti, 2001).
However, a precursor of the OR (1995) model that deserves to be noted here is the model proposed by Svensson and van Wijnbergen (1989). They present a stochastic, two-country, neoclassical rational-expectations model with sticky prices that are optimally set by monopolistically competitive firms, where possible excess capacity is allowed for to examine international spillover effects of monetary disturbances on output. In contrast to the prediction of the MFD model that a monetary expansion at home leads to a recession abroad, the paper suggests that spillover effects of monetary policy may be either positive or negative, depending on the relative size of the intertemporal and intratemporal elasticities of substitution in consumption. It is also fair to say that the need for rigorous microfoundations in open economy models is not novel in new open economy macroeconomics and has been emphasized by several papers prior to OR (1995); notable examples are Lucas (1982), Stockman (1980, 1987), and Backus, Kehoe, and Kydland (1992, 1994, 1995), among others.

The baseline model proposed by OR (1995) is a two-country, dynamic general equilibrium model with microfoundations that allows for nominal price rigidities, imperfect competition, and a continuum of agents who both produce and consume. Each agent produces a single differentiated good.

1. An early draft of this paper covered some of the models discussed below in a more technical fashion. The preliminary technical version is available from the author upon request (Sarno, 2000). Walsh (1998) also provides an excellent treatment of the redux model, especially focusing on monetary issues. See also the comprehensive textbook treatment of the early new open economy literature by OR (1996) and its selective coverage by Lane (1999). For a treatment of the role of imperfect competition in macroeconomic models, see the survey by Dixon and Rankin (1994).
All agents have identical preferences, characterized by an intertemporal utility function that depends positively on consumption and real money balances but negatively on work effort; effort is positively related to output. The exchange rate is defined as the domestic price of the foreign currency. The two countries are called Home and Foreign, respectively. Because the model assumes no impediments to international trade, the law of one price (LOOP) holds for each individual good and purchasing power parity (PPP) holds for the internationally identical aggregate consumption basket. PPP is the proposition that national price levels should be equal when expressed in a common currency; the LOOP is the same proposition applied to individual goods rather than a consumption basket. Since the real exchange rate is the nominal exchange rate adjusted for relative national price levels, variations in the real exchange rate represent deviations from PPP. Hence, the LOOP and continuous PPP imply a constant real exchange rate, while long-run PPP (where temporary deviations from PPP are allowed for) implies mean reversion in the real exchange rate.

OR also assume that both countries can borrow and lend in an integrated world capital market. The only internationally traded asset is a riskless real bond, denominated in the consumption good. Agents maximize lifetime utility subject to their budget constraints (identical for domestic and foreign agents). Utility maximization then implies three clearly interpretable conditions. The first is the standard Euler equation, which implies a flat time path of consumption over time.
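For concreteness, the Euler equation can be written in the familiar generic form below. This is a sketch in my own notation, not OR's exact expressions; it assumes log utility over consumption and a constant world real interest rate r on the riskless bond.

```latex
% Standard consumption Euler equation (generic notation, log utility):
\[
  C_{t+1} = \beta \, (1 + r) \, C_t .
\]
% When the discount factor is tied to the world real interest rate,
% \beta (1 + r) = 1, this reduces to
\[
  C_{t+1} = C_t ,
\]
% that is, a flat optimal time path for consumption, as stated in the text.
```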
The second condition is the money market equilibrium condition that equates the marginal rate of substitution of consumption for the services of real money balances to the consumption opportunity cost of holding real money balances (the nominal interest rate); the representative agent directly benefits from holding money in the utility function but loses the interest rate on the riskless bond as well as the opportunity to eliminate the cost of inflation. (Note that money demand depends on consumption rather than income in this model.) The third condition requires that the marginal utility of the higher revenue earned from producing one extra unit of output equals the marginal disutility of the needed effort, and so can be interpreted as a labor-leisure trade-off equation.

In the special case when net foreign assets are zero and government spending levels are equal across countries, OR solve the model for income and real money balances. Because this model is based on a market structure with imperfect competition where each agent has some degree of market power arising from product differentiation, the solutions of the model imply that steady-state output is suboptimally low. As the elasticity of demand (say, q) increases, the various goods become closer substitutes and, consequently, the monopoly power decreases. As q approaches infinity, output increases, tending to the level corresponding to a perfectly competitive market.

The main focus of OR (1995) is the impact of a monetary shock on real money balances and output. Under perfectly flexible prices, a permanent shock produces no dynamics and the world economy remains in steady state (prices increase by the same proportion as the money supply). That is, an increase in the money supply has no real effects and cannot remedy the suboptimal output level.
Money is neutral.2

With prices displaying stickiness in the short run, however, monetary policy may have real effects. If the money supply increases, because prices are fixed, the nominal interest rate decreases and hence the exchange rate depreciates. This is because, due to arbitrage in the foreign exchange market, uncovered interest parity holds. Foreign goods become more expensive relative to domestic goods, generating a temporary increase in the demand for domestic goods and inducing an increase in output. Consequently, monetary shocks generate real effects on the economy. But how can one ensure that producers are willing to increase output? If prices are fixed, output is determined by demand. Because a monopolist always prices above the marginal cost, it is profitable to meet unexpected demand at the fixed price.

Noting that in this model the exchange rate rises less than the money supply, currency depreciation shifts world demand toward domestic goods, which causes a short-run rise in domestic income. Home residents consume some of the extra income, but, because they want to smooth consumption over time, they save part of it. Therefore, although in the long run the current account is balanced, in the short run Home runs a current account surplus. With higher long-run wealth, Home agents shift from work to leisure, reducing Home output.

2. Note that in the redux model and in a number of subsequent papers, monetary shocks are discussed without a formalization of the reaction functions of the monetary authorities. However, some recent studies have formally investigated reaction functions in new open economy macroeconomic models; see, for example, Ghironi and Rebucci (2000) and the references therein.
Nevertheless, because Home agents' real income and consumption rise in the long run, the exchange rate does not necessarily depreciate.3 Unlike the scenario in a Dornbusch-type model, the redux model does not yield exchange rate overshooting. The exchange rate effect is smaller the larger the elasticity of substitution, q; as q approaches infinity, Home and Foreign goods become closer substitutes, producing larger shifts in demand with the exchange rate changing only slightly.

Finally, a monetary expansion leads to a first-order welfare improvement.4 Because the price exceeds the marginal cost in a monopolistic equilibrium, global output is inefficiently low. An unanticipated money shock raises aggregate demand, stimulating production and mitigating the distortion.

Summing up, in the redux model, monetary shocks can generate persistent real effects, affecting consumption and output levels and the exchange rate, although both the LOOP and PPP hold. Welfare rises by equal amounts at home and abroad after a positive monetary shock, and production is moved closer to its efficient (perfectly competitive market) level. Adjustment to the steady state occurs within one period, but money supply shocks can have real effects lasting beyond the time frame of the nominal rigidities because of the induced short-run wealth accumulation via the current account. Money is not neutral, even in the long run.

A Small Open Economy Version of the Baseline Model

The baseline redux model and most of the subsequent literature on new open economy macroeconomics are based on a two-country framework, which allows an explicit analysis of international transmission channels and the endogenous determination of interest rates and asset prices. Nevertheless, similar, simpler models may be constructed under the assumption of a small open economy rather than a two-country framework.
In the small open economy version it is also easier to allow a distinction between tradable and nontradable goods in the analysis. OR (1995) provide such an example in their Appendix. In this model, monopolistic competition characterizes the nontradable goods sector. The tradable goods sector is characterized by a single homogeneous tradable good that sells for the same price all over the world, perfect competition, and flexible prices. The representative agent in the small open economy, called Home, has an endowment of the tradable good in constant quantity in each period and monopoly power over the production of one of the nontradable goods.

In this setup, a permanent monetary shock does not generate a current account imbalance. Because output of tradable goods is fixed, current account behavior is determined by the time path for tradables consumption, which, under log-separable preferences and a discount rate equal to the world interest rate, implies a perfectly flat optimal time path for consumption. Hence, the current account remains in balance. Unlike the scenario in the baseline redux model, however, exchange rate overshooting may occur in this model.

3. Again, note that in this model money demand depends on consumption rather than income. Thus, an increase in consumption due to an increase in the nominal money supply raises money demand by the same proportion.

4. In order to produce more, Home agents have to work harder. The effects from reallocating consumption-production and leisure over time are second-order, and the excess demand that leads to an increase in production outweighs these effects. Of course, welfare results depend upon the welfare function assumed. In the present context, for example, it is important to note that inflation costs (obviously generated by an expansionary monetary policy) are not modeled explicitly.
Since the monetary shock does not produce a current account imbalance, money is neutral in the long run and the nominal exchange rate rises proportionately to the money stock. Because the consumption elasticity of money demand is less than unity (by assumption), the nominal exchange rate overshoots its long-run level.5

Lane (1997) uses this small open economy model to examine discretionary monetary policy and the impact of openness (measured by the relative size of the tradables sector) on the equilibrium inflation rate. A more open economy (with a large tradables sector) gains less from "surprise" inflation because the output gain from a monetary expansion is exclusively obtained in the nontradables sector and is relatively low. Since the equilibrium inflation rate under discretion is positively related to the gains from "surprise" inflation (Barro and Gordon, 1983), the model predicts that more open economies have lower equilibrium inflation rates (see also Kollmann, 1997; Velasco, 1997).

Lane (2001) further extends this model by considering an alternative specification of the utility function under which monetary shocks generate current account imbalances. The sign of the current account response is ambiguous, however; in fact, it depends on the interplay between the intertemporal elasticity of substitution, s, and the intratemporal elasticity of substitution, q. s governs the willingness to substitute consumption across periods, while q governs the degree of substitutability between traded and nontraded consumption. If s < q, a positive monetary shock generates a current account surplus; a current account deficit occurs if s > q; and the current account remains in balance if s = q. Hence, this model clearly illustrates how the results stemming from this class of models are sensitive to the specification of the microfoundations. The implications of small open economy models of this class seem plausible.
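The sign pattern in Lane's (2001) extension, as summarized above, can be stated as a trivial decision rule. The sketch below simply encodes the text's three cases; sigma and theta are my labels for the text's s (intertemporal) and q (intratemporal) elasticities, and the numerical values are illustrative.

```python
def current_account_response(sigma, theta):
    """Sign of the current account response to a positive monetary shock,
    following the three cases summarized in the text:
    sigma = intertemporal elasticity (s), theta = intratemporal elasticity (q)."""
    if sigma < theta:
        return "surplus"
    if sigma > theta:
        return "deficit"
    return "balanced"

# Illustrative parameter values, not calibrations from the literature.
print(current_account_response(0.5, 1.0))  # surplus
print(current_account_response(2.0, 1.0))  # deficit
print(current_account_response(1.0, 1.0))  # balanced
```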
While the relevant literature (and consequently the rest of this survey) largely uses a two-country global economy framework, I think it might also be worthwhile to pursue research based on the small open economy assumption. Indeed, the small open economy assumption is plausible for most countries, except the United States. Furthermore, testing the empirical implications of the small open economy models discussed in this section represents a new line of research for applied economists.

RETHINKING THE REDUX MODEL

Nominal Rigidities

As mentioned earlier, subsequent work has modified many of the assumptions of the redux model. In this section, I discuss modifications based on the specification of nominal rigidities. The open economy literature surveyed here provides some novel thoughts in this context and might generate evidence for choosing among alternative specifications of stickiness in macroeconomic models. Whether the extension from closed to open economy models does help to achieve consensus on the specification of nominal rigidities remains to be seen (see, for example, the arguments presented by OR, 2000a, discussed below).

With respect to nominal rigidities, the redux model assumes that prices are set one period in advance, which implies that the adjustment to equilibrium is completed after one period. As Corsetti and Pesenti (2001) emphasize, however, if price stickiness is motivated by fixed menu costs, firms have an incentive to adjust prices immediately after a shock if the shock is large enough to violate their participation cost by raising the marginal cost above the price. Hence, the redux analysis may be seen as plausible only within the relevant range of shocks.

Hau (2000) generalizes the redux model in three ways to investigate the role of factor price (wage) rigidities and nontradables for the international transmission mechanism.
First, following Blanchard and Kiyotaki (1987), the model allows for factor markets and for nominal rigidities originating from sticky factor prices (wages). Second, Hau assumes flexible price setting in local currency and does not assume international goods arbitrage. While the LOOP still holds because of optimal monopolistic price setting, nontradables in the consumer price index produce deviations from PPP. Third, unlike the scenario in the redux analysis, Hau also allows for nontradable goods.

The main result of the paper is that factor price rigidities have similar implications to rigid domestic producer prices. In some sense, the results of the redux analysis are confirmed in the context of a market structure with factor price rigidities. However, nontradables modify the transmission mechanism in important ways. A larger nontradables share implies that exchange rate movements are magnified, since the money market equilibrium relies on a short-run price adjustment carried out by fewer tradables. This effect is interesting since it may help explain the observed high volatility of the nominal exchange rate relative to price volatility.

Within the framework of price level rigidities, however, a more sophisticated way of capturing price stickiness is through staggered price setting that allows smooth, rather than discrete, aggregate price level adjustment. Staggered price models of the type developed by, among others, Taylor (1980) and Calvo (1983) are classic examples. Kollmann (1997) calibrates a dynamic open economy model with both sticky prices and sticky wages and then explores the behavior of exchange rates and prices in response to monetary shocks with predetermined price and wage setting and Calvo-type nominal rigidities.

5. Indeed, this is exactly the same overshooting condition derived in the Dornbusch (1976) model.
His results suggest that Calvo-type nominal rigidities match very well the observed high correlation between nominal and real exchange rates and the smooth adjustment in the price level, but they match less well the correlations between output and several other macroeconomic variables.

Chari, Kehoe, and McGrattan (CKM) (1998, 2000) link sticky price models to the behavior of the real exchange rate in the context of a new open economy macroeconomic model. They start by noting that the data show large and persistent deviations of real exchange rates from PPP that appear to be driven primarily by deviations from the LOOP for tradable goods. That is, real and nominal exchange rates are about six times more volatile than relative price levels and both are highly persistent, with first-order serial correlations of about 0.85 and 0.83, respectively, at annual frequency. CKM then develop a sticky price model with price-discriminating monopolists that produces deviations from the LOOP for tradable goods. However, their benchmark model, which has prices set for one quarter at a time and a unit consumption elasticity of money demand, does not come close to reproducing the serial correlation properties of real and nominal exchange rates noted above. A model in which producers set prices for six quarters at a time and with a consumption elasticity of money demand of 0.27 does much better in generating persistent and volatile real and nominal exchange rates. The serial correlations of real and nominal exchange rates are 0.65 and 0.66, respectively, and exchange rates are about three times more volatile than relative price levels.

In a closely related paper, Jeanne (1998) attempts to assess whether money can generate persistent economic fluctuations in a dynamic general equilibrium model of the business cycle.
Jeanne shows that a small nominal friction in the goods market can make the response of output to monetary shocks large and persistent if it is amplified by real-wage rigidity in the labor market. He also argues that, for plausible levels of real-wage rigidity, a small degree of nominal stickiness may be sufficient for money to produce economic fluctuations as persistent as those observed in the data.6

6. See also Andersen (1998), Benigno (1999), and Bergin and Feenstra (1999, 2000).

OR (2000a), discussed in detail later in this paper, develop a stochastic new open economy macroeconomic model based on sticky nominal wages, monopolistic competition, and exporters' currency pricing. Solving explicitly the wage-setting problem under uncertainty allows the analysis of the welfare implications of alternative monetary regimes and their impact on expected output and terms of trade. To motivate their model, OR show that observed correlations between terms of trade and exchange rates appear to be more consistent with their assumptions about nominal rigidities than with the alternative specification based on local-currency pricing. I now turn to a discussion of the reformulations of the redux model based on the introduction of pricing to market.

Pricing to Market

While the redux model assumes that the LOOP holds for all tradable goods, a number of researchers have questioned the model on the ground that deviations from the LOOP across international borders appear to be larger than can be explained by geographical distance or transport costs (see, for example, Engel, 1993, and Engel and Rogers, 1996). Some authors have therefore extended the redux model by combining international segmentation with imperfectly competitive firms and local-currency pricing (essentially pricing to market, or PTM). Krugman (1987) used the term PTM to characterize price discrimination for
certain types of goods (such as automobiles and many types of electronics) where international arbitrage is difficult or perhaps impossible. This may be due, for example, to differing national standards (for example, 100-volt light bulbs are not used in Europe and left-hand-side-drive cars are not popular in the United Kingdom, Australia, or Japan). Further, monopolistic firms may be able to limit or prevent international goods arbitrage by refusing to provide warranty service in one country for goods purchased in another. To the extent that prices cannot be arbitraged, producers can discriminate across different international markets.

Studies allowing for PTM typically find that PTM may play a central role in exchange rate determination and in international macroeconomic fluctuations. This happens because PTM acts to limit the pass-through from exchange rate movements to prices, reducing the "expenditure switching" role of exchange rate changes and potentially generating greater exchange rate variability than would be obtained in models without PTM. Also, nominal price stickiness, in conjunction with PTM, magnifies the response of the exchange rate to shocks to macroeconomic fundamentals. Further, by generating deviations from PPP, PTM models also tend to reduce the comovement in consumption across countries while increasing the comovement of output, fitting some well-known empirical regularities (see Backus, Kehoe, and Kydland, 1992). Finally, the introduction of PTM has important welfare implications for the international transmission of monetary policy shocks, as discussed below.

Betts and Devereux (2000b), for example, characterize PTM by assuming that prices of many goods are set in the local currency of the buyer and do not adjust at high frequency. Consequently, real exchange rates move with nominal exchange rates at high frequency.
These assumptions also imply that price/cost markups, rather than nominal prices, fluctuate endogenously in response to exchange rate movements (see also Knetter, 1993, on this point). In the Betts-Devereux framework, traded goods are characterized by a significant degree of national market segmentation and trade is carried out only by firms. Households cannot arbitrage away price differences across countries, and firms engage in short-term nominal price setting. Therefore, prices are sticky in terms of the local currency.7

The Betts-Devereux model is based on an economy with differentiated products and assumes that firms can price-discriminate across countries. With a high degree of PTM (that is, when a large fraction of firms engages in PTM), a depreciation of the exchange rate has little effect on the relative price of imported goods faced by domestic consumers. This weakens the allocative effects of exchange rate changes relative to a situation where prices are set in the seller's currency; in the latter case, pass-through of exchange rates to prices is immediate. Hence, PTM reduces the expenditure switching effects of exchange rate depreciation, which generally implies a shift of world demand toward the exports of the country whose currency is depreciating. Because domestic prices show little response to exchange rate depreciation under PTM, the response of the equilibrium exchange rate may be substantially magnified and, consistent with well-known observed empirical regularities, exchange rates may vary more than relative prices.

PTM also has implications for the international transmission of macroeconomic shocks. In the absence of PTM, for example, monetary disturbances tend to generate large positive comovements of consumption across countries but large negative comovements of output. However, PTM reverses the ordering: the deviations from PPP induced by PTM make consumption comovements fall.
At the same time, the elimination of the expenditure switching effects of the exchange rate enhances comovements of output across countries. In terms of welfare, recall that the framework based on the LOOP and PPP generally suggests that an unanticipated monetary expansion raises the welfare of all agents at home and abroad. With PTM, however, a domestic monetary expansion raises home welfare but reduces foreign welfare, and monetary policy is a “beggar-thy-neighbor” instrument. Therefore, the PTM framework, unlike the framework based on the LOOP and PPP, provides a case for international monetary policy coordination. Overall, the PTM framework suggests that goods market segmentation might help explain international quantity and price fluctuations and may have important implications for the international transmission of economic shocks, policy, and welfare.

7 The model of Betts and Devereux (2000b) is used as representative of this class of PTM models in this section. Other examples of models adopting PTM are Betts and Devereux (1996, 1997, 1999, 2000a); CKM (1998, 2000); and Bergin and Feenstra (1999, 2000).

The Indeterminacy of the Steady State

In the framework proposed by OR (1995), the current account plays a crucial role in the transmission of shocks. However, the steady state is indeterminate, and both the consumption differential between countries and an economy’s net foreign assets are nonstationary. After a monetary shock, the economy moves to a different steady state until a new shock occurs. When the model is log-linearized to obtain closed-form solutions for the endogenous variables, one is approximating the dynamics of the model around a moving steady state. This makes the conclusions implied by the model questionable. In particular, the reliability of the log-linear approximations is low because variables wander away from the initial steady state.
Many subsequent variants of the redux model de-emphasize the role of net foreign asset accumulation as a channel of macroeconomic interdependence between countries. This is done by assuming that (i) the elasticity of substitution between domestic and foreign goods is unity or (ii) financial markets are complete. Either assumption implies that the current account does not react to shocks (see, for example, Corsetti and Pesenti, 2001, and OR, 2000a).8 While this approach achieves the desired determinacy of the steady state, it requires strong assumptions, (i) or (ii) above, to shut off the current account, which is unrealistic. In a sense, these solutions circumvent the problem of indeterminacy rather than solve it.

Ghironi (2000a) provides an extensive discussion of the indeterminacy and nonstationarity problems in the redux model. Ghironi also provides a tractable two-country model of macroeconomic interdependence that relies on neither of the above assumptions: the elasticity of substitution between domestic and foreign goods can differ from unity, and financial markets are incomplete, consistent with reality. Using an overlapping generations structure, Ghironi shows that there exists an endogenously determined steady state to which the world economy reverts following temporary shocks. Accumulation of net foreign assets plays a role in the transmission of shocks to productivity. Finally, Ghironi also shows that shutting off the current account may lead to large errors in welfare comparisons, which calls for a rethinking of several results in this literature. The issue of indeterminacy of the steady state deserves further attention from researchers in this area.

Preferences

While the explicit treatment of microfoundations is a key advantage of new open economy macroeconomic models relative to the MFD model, the implications of such models depend on the specification of preferences.
One convenient assumption in the redux model is the symmetry with which home and foreign goods enter preferences in the constant-elasticity-of-substitution (CES) utility function. Corsetti and Pesenti (2001) extend the redux model to investigate the effects of a limited degree of substitution between home and foreign goods. In their baseline model, the LOOP still holds and technology is described by a Cobb-Douglas production function, with a unit elasticity of substitution between home and foreign goods and constant income shares for home and foreign agents. The model illustrates that the welfare effects of expansionary monetary and fiscal policies are related to internal and external sources of economic distortion, namely, monopolistic supply in production and the monopoly power of a country. For example, an unanticipated exchange rate depreciation can be “beggar-thyself” rather than “beggar-thy-neighbor” since gains in domestic output are offset by losses in consumers’ purchasing power and a deterioration in the terms of trade. Also, openness is not inconsequential: smaller and more open economies are more prone to inflationary consequences. Fiscal shocks, however, are generally “beggar-thy-neighbor” in the long run, but they raise domestic demand in the short run for given terms of trade. These results provide a role for international policy coordination, which is not the case in the redux model.9,10

An important assumption in the redux model is that consumption and leisure are separable. This assumption is not compatible, however, with a balanced growth path if trend technical progress is confined to the market sector: as a country becomes richer, labor supply gradually declines, converging to a situation in which labor supply is zero, unless the intertemporal elasticity of substitution is unity. CKM (1998), for example, employ a preference specification with nonseparable consumption and leisure (which is fairly standard, for example, in the real business cycles literature). This preference specification is compatible with a balanced growth path and is also consistent with the high real exchange rate volatility observed in the data: a more elastic labor supply and a greater intertemporal elasticity of substitution in consumption generate more volatile real exchange rates. Hence, this preference specification provides more plausible implications for the short-run dynamics of several macroeconomic variables relative to the redux model and better matches some observed regularities.11

While the discussion in this subsection has focused on only two issues with regard to the specification of preferences (the degree of substitutability of home and foreign goods in consumption and the separability of consumption and leisure in utility), the results of models with explicit microfoundations may depend crucially on the specification of the utility function in other ways.

8 This is a problem often encountered in the international real business cycles literature. Note, however, that the role of current account dynamics in generating persistent effects of transitory shocks has often been found to be quantitatively unimportant in this literature. See the discussion on this point by Baxter and Crucini (1995) and Kollmann (1996).

9 Recall that the redux model has the unrealistic implication that the optimal monetary surprise is infinite, which is of course not the case in the Corsetti-Pesenti model.

10 Devereux (1999), Doyle (2000), Tille (1998a, b), Betts and Devereux (2000a), and Benigno (2001), among others, represent attempts to model explicitly international policy coordination in variants of the Corsetti-Pesenti model. See also OR (2000b).
Relaxing the symmetry assumption in the utility function and allowing for nonseparable consumption and leisure, for example, would yield more plausible and more general utility functions. Of course, there are other related important issues, and, in this respect, the closed economy literature can lend ideas on how to proceed; see, for example, the large and growing closed economy literature on habit formation and home production.

Financial Markets Structure

The redux model assumes that there is international trade only in a riskless real bond, so financial markets are not complete. Deviations from this financial markets structure have been examined in several papers. CKM (1998) compare, in the context of their PTM model, the effects of monetary shocks under complete markets and under a setting where trade occurs only in one noncontingent nominal bond denominated in the domestic currency. Their results show that the redux model is rather robust in this respect: incompleteness of financial markets appears to imply small differences for the persistence of monetary shocks.12

A related study by Sutherland (1996) analyzes trading frictions (which essentially allow for a differential between domestic and foreign interest rates) in the context of an intertemporal general equilibrium model where financial markets are incomplete and the purchase of bonds involves convex adjustment costs. Goods markets are perfectly competitive, and goods prices are subject to Calvo-type sluggish adjustment. Sutherland shows that barriers to financial integration have a larger impact on output the greater the degree of price inertia. With substantial price inertia, output adjusts slowly and more agents smooth their consumption via international financial markets.
Sutherland’s simulations suggest that financial market integration increases the volatility of a number of variables when shocks originate from the money market but decreases the volatility of most variables when shocks originate from real demand or supply; these results also hold in the generalization of Sutherland’s model by Senay (1998). For example, a positive domestic monetary shock induces a domestic interest rate decline and, therefore, a negative interest rate differential with the foreign country. In turn, the negative interest rate differential produces a smaller exchange rate depreciation and a larger jump in relative domestic consumption. This implies that domestic output rises less in this model than in the baseline redux model.

OR (1995) defend their assumptions regarding the financial markets structure of the redux model, stating that it would seem incoherent to analyze imperfections or rigidities in goods markets while at the same time assuming that international capital markets are complete. Indeed, one may argue that, if there were complete international risk sharing, it is unclear how price or wage rigidities could exist. Nevertheless, the assumption of full international capital integration is very controversial. While many economists would agree that the degree of financial integration has increased over time (at least across major industrialized countries), it is perhaps fair to say that there are frictions in financial markets (see Obstfeld, 1995).

11 A further modification of the redux model considered by researchers involves the introduction of nontradables in the analysis, which typically implies an increase in the size of the initial exchange rate response to a monetary shock; see, for example, Ghironi (2000b), Hau (2000), and Warnock (1999).

12 Note, however, that none of the models discussed in this paper have complete markets in the Arrow-Debreu sense, with the possible exception of CKM (1998).
Given the controversies over what may constitute a realistic financial markets structure, the analysis of the impact of barriers to financial integration remains an avenue of research in its own right.

The Role of Capital

The literature has largely neglected the role of capital in new open economy models. For example, competitive models with capital can deliver effects of supply shocks similar to those typically found in monopolistically competitive models with endogenous utilization of capital (see, for example, Finn, 2000).13 CKM (1998, 2000) also argue that capital (omitted in the redux model and most subsequent variants of it) may play an important role because monetary shocks can cause investment booms by reducing the short-term interest rate and hence generate a current account deficit (rather than a surplus, as in the redux model). Explicitly allowing for capital in new open economy models is an important immediate avenue for future research.

STOCHASTIC NEW OPEN ECONOMY MACROECONOMICS

Recently, the certainty equivalence assumption that characterizes much of the literature discussed above (including the redux model) has been relaxed. While certainty equivalence allows researchers to approximate exact equilibrium relationships, it “precludes a serious welfare analysis of changes that affect the variance of output” (Kimball, 1995, p. 1243). Following this line of reasoning, OR (1998) first extend the redux model and the work by Corsetti and Pesenti (2001) to a stochastic environment. More precisely, the innovation in OR (1998) involves moving away from the analysis of only unanticipated shocks.14

Risk and Exchange Rates

The OR (1998) model may be interpreted as a sticky-price monetary model in which risk has an impact on asset prices, short-term interest rates, the price-setting decisions of individual producers, expected output, and international trade flows.
This approach allows OR to quantify the welfare tradeoff between alternative exchange rate regimes and to relate this tradeoff to a country’s size. Another important finding of this model is that exchange rate risk affects the level of the exchange rate. Not surprisingly, as discussed below, the model has important implications for the behavior of the forward premium and for the forward discount bias.

The setup of the OR (1998) model adds uncertainty to the redux model. Most results are standard and qualitatively identical to those of the redux model. However, one of the most original results of this approach is the equation describing the equilibrium exchange rate. To obtain it, OR (1998) assume that Home and Foreign have equal trend inflation rates (equal to the long-run nominal interest rates through the Fisher equation) and use conventional log-linearizations (in addition to the assumption that PPP holds) to obtain an equation of nominal exchange rate determination. This equation may be interpreted as a monetary-model-type equation in which conventional macroeconomic fundamentals determine the exchange rate. It is the same as in the redux model, except for a time-varying risk premium term. Under the assumption of no bubbles, the solution of the model implies that a level risk premium enters the exchange rate equation. In some sense, this model may explain the failure of conventional monetary models of exchange rate determination in terms of an omitted variable in the exchange rate equation, namely, exchange rate risk; a similar result was obtained by Hodrick (1989) in the context of a cash-in-advance flexible-price exchange rate model. For example, less relative risk of investments in the Home currency induces a fall in the domestic nominal interest rate and an appreciation of the domestic currency, capturing the idea of a “safe haven” effect on the Home currency.
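Schematically, the resulting exchange rate equation has the familiar forward-looking monetary-model form, with the risk premium appearing as an extra fundamental (this is a stylized paraphrase in generic notation, not the exact OR, 1998, expression):

```latex
% Reduced form: fundamentals f_t (relative money supplies, outputs,
% etc.) plus a time-varying level risk premium \psi_t, with b > 0:
e_t = \frac{1}{1+b}\left( f_t + \psi_t \right)
      + \frac{b}{1+b}\, \mathbb{E}_t\, e_{t+1}
% Iterating forward and ruling out bubbles:
e_t = \frac{1}{1+b} \sum_{j=0}^{\infty}
      \left( \frac{b}{1+b} \right)^{\!j}
      \mathbb{E}_t \left[ f_{t+j} + \psi_{t+j} \right]
% Empirical monetary models that omit \psi_t thus suffer from an
% omitted-variable problem.
```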
For reasonable interest rates, a rise in Home monetary variability induces both a fall in the level of the exchange rate risk premium and a fall in the forward premium (the latter fall is shown to be much larger in magnitude). This result contradicts the conventional wisdom that financial markets attach a positive risk premium to the currency with higher monetary volatility. The intuition is explained by OR (1998) as follows:

[A] rise in Home monetary volatility may lead to a fall in the forward premium, even holding expected exchange rate changes constant. Why? If positive domestic monetary shocks lead to increases in global consumption, then domestic money can be a hedge, in real terms, against shocks to consumption. (The real value of Home money will tend to be unexpectedly high in states of nature where the marginal utility of consumption is high.) Furthermore—and this effect also operates in a flexible-price model—higher monetary variability raises the expectation of the future real value of money, other things equal. (p. 24)

This result provides a novel theoretical explanation of the forward premium puzzle.

13 Finn (2000) demonstrates that a theory of perfect competition, which views capital utilization as the avenue through which energy enters the model economy, can explain the observed effects of energy price increases on economic activity, which Rotemberg and Woodford (1996) and several subsequent studies regarded as inexplicable without a theory of imperfect competition.

14 Note, however, that I am using the term stochastic loosely here. Even in approximated dynamics with certainty equivalence, models are stochastic. Evaluations of the first-order effects of second moments (noncertainty equivalence) recognize an aspect of stochastic models that is often neglected, but this does not by itself define a stochastic model.
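For reference, the forward premium puzzle mentioned here can be stated in standard notation (these are general textbook definitions, not specific to the OR, 1998, model):

```latex
% Covered interest parity ties the forward premium to the interest
% differential:
f_t - s_t = i_t - i_t^{*}
% The risk premium is the gap between the forward rate and the
% expected future spot rate:
rp_t = f_t - \mathbb{E}_t\, s_{t+1}
% The puzzle: in the regression
\Delta s_{t+1} = \alpha + \beta \left( f_t - s_t \right)
                 + \varepsilon_{t+1}
% the estimated \beta is typically negative, rather than the value
% of 1 implied by risk neutrality and rational expectations.
```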
Not only should high interest rates not necessarily be associated with expected depreciation, but the opposite may also be true, especially for countries with similar trend inflation rates. Nevertheless, the results produced by this model may well depend critically on the specification of the microfoundations and are, therefore, subject to the same caveats raised by the literature questioning the appropriateness of the redux specification. Thus, it is legitimate to wonder how adopting the other specifications described earlier (alternative specifications of utility, different nominal rigidities, etc.) would affect the results of the OR (1998) stochastic model. The next subsection discusses, for example, the changes induced by the introduction of PTM in this model.

Related Studies

The OR (1998) analysis described above is based on the following assumptions: (i) producers set prices in their own currency, (ii) the price paid by foreigners for home goods (and the price paid by domestic residents for foreign goods) varies instantaneously when the exchange rate changes, and (iii) the LOOP holds. Devereux and Engel (1998) extend the OR (1998) analysis by assuming PTM, with producers setting a price in the home currency for domestic residents and in the foreign currency for foreign residents. Hence, when the exchange rate fluctuates, the LOOP does not hold. The risk premium depends on the type of price-setting behavior of producers. Devereux and Engel compare agents’ welfare between fixed and flexible exchange rate arrangements and find that exchange rate systems matter not only for the variances of consumption, real balances, and leisure but also for their mean values once risk premia are incorporated into pricing decisions. Since PTM insulates consumption from exchange rate fluctuations, floating exchange rates are less costly under PTM than under producer currency pricing.
Consequently, a flexible regime generally dominates a pegged regime.15

Engel (1999) makes four points in summarizing the evidence on the foreign exchange risk premium in this class of general equilibrium models. First, while the existence of a risk premium in flexible-price general equilibrium models depends on the correlation of exogenous monetary shocks and aggregate supply shocks, the risk premium arises endogenously in sticky-price models. Second, the distribution of aggregate supply shocks does not affect the foreign exchange risk premium in sticky-price models. Third, given that the risk premium depends on the prices faced by consumers, when the LOOP does not hold there is no unique foreign exchange risk premium, since producers set prices in consumers’ currencies. Fourth, standard stochastic dynamic general equilibrium models do not usually imply large risk premia.

The common denominator in these models is that the exchange rate risk premium is an important determinant of the equilibrium level of the exchange rate. It remains an open question whether one could build a sticky-price model capable of convincingly explaining the forward premium puzzle. Nevertheless, this seems a promising avenue for future research.

15 See also Bacchetta and van Wincoop (1998).

NEW DIRECTIONS: THE SOURCE OF NOMINAL RIGIDITIES AND THE CHOICE BETWEEN LOCAL AND FOREIGN CURRENCY PRICING

OR (2000a) may have again set new directions for stochastic open economy models of the class discussed in this paper. They start by noting that the possibilities for modeling nominal rigidities are more numerous in a multicurrency international economy than in a single-money closed economy setting and that, in an international setting, it is natural to consider the possibility of segmentation between national markets. OR address the empirical issue of whether local currency pricing or foreign currency pricing is closer to reality.
OR argue that, if imports are invoiced in the importing country’s currency, unexpected currency depreciations should be associated with improvements (rather than deteriorations) in the terms of trade. They then show that this implication is inconsistent with the data. Indeed, their evidence suggests that aggregate data may favor a traditional framework in which exporters largely invoice in their home currency and nominal exchange rate changes have significant short-run effects on international competitiveness and trade.

The main reservations of OR about the PTM–local currency pricing framework employed by several papers in this literature are captured by the following observations. First, a large fraction of measured deviations from the LOOP results from nontradable components incorporated in consumer price indices for supposedly traded goods (for example, rents, distribution services, advertising, etc.); it is not clear whether the extreme market segmentation and pass-through assumptions of the PTM–local currency pricing approach are necessary to explain the close association between deviations from the LOOP and exchange rates. Second, price stickiness induced by wage stickiness is likely to be more important in determining persistent macroeconomic fluctuations, since trade invoicing cannot generate sufficiently high persistence. (Invoicing largely applies to contracts of 90 days or less.) Third, the direct evidence on invoicing is largely inconsistent with the view that exporters set prices mainly in importers’ currencies (see, for example, ECU Institute, 1995); the United States is, however, an exception. Fourth, international evidence on markups is consistent with the view that invoicing in exporters’ currencies is the predominant practice (see, for example, Goldberg and Knetter, 1997).
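The invoicing argument can be made concrete in log-linear form (illustrative notation, not OR’s own: here τ_t denotes Home’s terms of trade, the export price relative to the import price in a common currency, with prices preset for one period):

```latex
% Producer-currency pricing: home-currency export price p_x and
% foreign-currency import price p_m^{*} are preset, so a depreciation
% (a rise in e_t) worsens the terms of trade:
\tau_t = p_x - (e_t + p_m^{*}),
\qquad \partial \tau_t / \partial e_t = -1
% Importer's-currency (local currency) pricing: p_x^{*} and p_m are
% preset, so the same depreciation improves the terms of trade:
\tau_t = (e_t + p_x^{*}) - p_m,
\qquad \partial \tau_t / \partial e_t = +1
% OR find the deterioration pattern in aggregate data, favoring the
% first case.
```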
OR (2000a) build their stochastic dynamic open economy model with nominal rigidities in the labor market (rationalized on the basis of the first two observations above) and foreign currency pricing (rationalized on the basis of the last two observations above). They consider a standard two-country global economy where Home and Foreign produce an array of differentiated tradable goods (Home and Foreign are of equal size). In addition, each country produces an array of differentiated nontraded goods. Workers set next period’s domestic-currency nominal wages and then meet labor demand in light of realized economic shocks. Prices of all goods are completely flexible. OR provide equilibrium equations for preset wages and a closed-form solution for each endogenous variable in the model, as well as solutions for variances and for utility. In particular, the solution for the exchange rate indicates that a relative Home money supply increase that occurs after nominal wages are set causes an overshooting depreciation of the exchange rate. A fully anticipated change, however, causes a precisely equal movement in the wage differential and in the exchange rate.

In this setup, OR show welfare results on two fronts. First, they show that constrained-efficient monetary policy rules replicate the flexible-price equilibrium and feature a procyclical response to productivity shocks.16 For example, a positive productivity shock that would elicit greater labor supply and output under flexible wages optimally induces an expansionary Home monetary response when wages are set in advance. The same shock elicits a contractionary Foreign monetary response, but the net global monetary response is always positive. Also, optimal monetary policy allows the exchange rate to fluctuate in response to cross-country differences in productivity shocks.
This conclusion is similar to the result obtained by King and Wolman (1996) in a rational expectations model where monetary policy has real effects because imperfectly competitive firms are constrained to adjust prices only infrequently and to satisfy all demand at posted prices. In the King-Wolman sticky-price model, it is optimal to set monetary policy so that the nominal interest rate is close to zero (that is, neutralizing the effect of the sticky prices), replicating in an imperfectly competitive model the result that Friedman found under perfect competition. Under a perfect inflation target, the monetary authority makes the money supply evolve so that a model with sticky prices behaves much like one with flexible prices.

Second, OR calculate the expected utility for each of three alternative monetary regimes, namely, an optimal floating rate regime, world monetarism (under which two countries fix the exchange rate while also fixing an exchange rate–weighted average of the two national money supplies), and an optimal fixed rate regime. The outcome is that expected utility is highest under an optimal floating-rate regime. This result is intuitive given that optimal monetary policy in this model involves allowing the exchange rate to fluctuate in response to cross-country differences in productivity shocks.

16 These monetary policy rules are (i) constrained, since they are derived by maximizing an average of Home and Foreign expected utilities subject to the optimal wage-setting behavior of workers and the price-setting behavior of firms described in the model, and (ii) efficient, since the market allocation cannot be altered without making one country worse off, given the constraints.
Fixed-rate regimes would be worthwhile only if productivity shocks at home and abroad were perfectly correlated.17

The OR (2000a) model addresses several theoretical and policy questions, including welfare analysis under alternative nominal regimes. The assumption that nominal exchange rate movements shift world demand between countries in the short run, which plays a crucial role in the traditional MFD model, is shown to be consistent with the facts and can reasonably be used as a building block in stochastic open economy models. Needless to say, this approach warrants further generalizations and refinements. In particular, note that the current account is shut off in OR (2000a) to avoid the indeterminacy problem discussed earlier. However, shutting off the current account makes the model less plausible from an empirical point of view, since it distorts the dynamics of the economy being modeled.

It is worth noting that the new open economy macroeconomics literature to date has (implicitly or explicitly) assumed that there are no costs of international trade. Nevertheless, the introduction of some sort of international trade costs (including, among others, transport costs, tariffs, and nontariff barriers) may be key to understanding how to improve empirical exchange rate models and to explaining several unresolved puzzles in international macroeconomics and finance. While the allowance for trade costs in open economy modeling is not a new idea and goes back at least to Samuelson (1954), OR (2000c) have recently stressed the role of trade costs in open economy macroeconomics. Indeed, OR (2000c) present something of a “unified theory” that helps elucidate what the profession may be missing when trying to explain several puzzling empirical findings, using trade costs as the fundamental modeling feature, with sticky prices playing a distinctly secondary role.
It is hoped that future research in new open economy macroeconomics will follow the suggestion of OR (2000c) to make explicit allowance for nonzero international trade costs.

CONCLUSIONS

In this paper, I have selectively reviewed the recent literature on new open economy macroeconomics, which has been growing exponentially over the last five years or so. The increasing sophistication of stochastic open economy models allows rigorous welfare analysis and provides new explanations of several puzzles in international macroeconomics and finance. Whether this approach will become the new workhorse model for open economy macroeconomics, whether a preferred specification within this class of models will be reached, and whether this approach will provide insights for developing better-fitting empirical exchange rate models are open questions.

Although theory in the spirit of new open economy macroeconomics is developing very rapidly, there is little effort at present to test the predictions of new open economy models. Theorists working in this area should specify exactly which empirical exchange-rate equations they would have empiricists estimate. If there is to be consensus in the profession on a particular model specification, this theoretical apparatus must produce clear estimable equations.18 Agreeing on a particular new open economy model is hardly possible at this stage.
This is the case not least because it requires agreeing on assumptions that are often difficult to test directly (such as the specification of the utility function) or that concern issues on which economists have strong beliefs and have not often been willing to compromise (such as whether nominal rigidities originate in the goods market or the labor market, or whether nominal rigidities exist at all). Achieving a new paradigm for open economy modeling is, however, a major challenge that lies ahead for the profession. While the profession shows some convergence toward a consensus approach in macroeconomic modeling (where the need for microfoundations, for example, seems widely accepted), it seems very unlikely that a consensus model will emerge in the foreseeable future.

17 Indeed, the results suggest that the difference between the expected utility under an optimal floating-rate regime and the expected utility under an optimal fixed-rate regime may not be too large if the variance of productivity shocks is very small or the elasticity of utility with respect to effort is very large.

18 A first step toward new open economy macroeconometrics has been made, for example, by Ghironi (2000c). I am also currently investigating empirical exchange rate equations inspired by the new open economy macroeconomics literature.

REFERENCES

Andersen, Torben M. “Persistency in Sticky Price Models.” European Economic Review, May 1998, 42(3-5), pp. 593-603.

Bacchetta, Philippe and van Wincoop, Eric. “Does Exchange Rate Stability Increase Trade and Capital Flows?” Discussion Paper 1962, Centre for Economic Policy Research, September 1998.

Backus, David K.; Kehoe, Patrick J. and Kydland, Finn E. “International Real Business Cycles.” Journal of Political Economy, August 1992, 100(4), pp. 745-75.

___________; ___________ and ___________.
“Dynamics of the Trade Balance and the Terms of Trade: The J-Curve?” American Economic Review, March 1994, 84(1), pp. 84-103.

___________; ___________ and ___________. “International Business Cycles: Theory and Evidence,” in Thomas F. Cooley, ed., Frontiers of Business Cycle Research. Princeton: Princeton University Press, 1995.

Barro, Robert J. and Gordon, David B. “Rules, Discretion and Reputation in a Model of Monetary Policy.” Journal of Monetary Economics, July 1983, 12(1), pp. 101-21.

Baxter, Marianne and Crucini, Mario. “Business Cycles and the Asset Structure of Foreign Trade.” International Economic Review, November 1995, 36(4), pp. 821-54.

Benigno, Gianluca. “Real Exchange Rate Persistence with Endogenous Monetary Policy.” Unpublished manuscript, University of California, Berkeley, 1999.

Benigno, Pierpaolo. “Optimal Monetary Policy in a Currency Area.” Unpublished manuscript, New York University, 2001.

Bergin, Paul R. and Feenstra, Robert C. “Pricing to Market, Staggered Contracts and Real Exchange Rate Persistence.” Working Paper No. 99/01, University of California, Davis, February 1999.

___________ and ___________. “Staggered Price Setting, Translog Preferences, and Endogenous Persistence.” Journal of Monetary Economics, June 2000, 45(3), pp. 657-80.

Betts, Caroline and Devereux, Michael B. “The Exchange Rate in a Model of Pricing-to-Market.” European Economic Review, April 1996, 40(3-5), pp. 1007-21.

___________ and ___________. “The International Monetary Transmission Mechanism: A Model of Real Exchange Rate Adjustment Under Pricing-to-Market.” Unpublished manuscript, University of British Columbia, 1997.

___________ and ___________. “The International Effects of Monetary and Fiscal Policy in a Two-Country Model.” Unpublished manuscript, University of British Columbia, 1999.

___________ and ___________.
“International Monetary Policy Coordination and Competitive Depreciation: A Re-Evaluation.” Journal of Money, Credit and Banking, November 2000a, 32(4), pp. 722-45.

___________ and ___________. “Exchange Rate Dynamics in a Model of Pricing-to-Market.” Journal of International Economics, February 2000b, 50(1), pp. 215-44.

Blanchard, Olivier J. and Kiyotaki, Nobuhiro. “Monopolistic Competition and the Effects of Aggregate Demand.” American Economic Review, September 1987, 77(4), pp. 647-66.

Calvo, Guillermo A. “Staggered Prices in a Utility-Maximizing Framework.” Journal of Monetary Economics, September 1983, 12(3), pp. 383-98.

Chari, V.V.; Kehoe, Patrick J. and McGrattan, Ellen R. “Monetary Shocks and Real Exchange Rates in Sticky Price Models of International Business Cycles.” Unpublished manuscript, Federal Reserve Bank of Minneapolis, 1998.

___________; ___________ and ___________. “Sticky Price Models of the Business Cycle: Can the Contract Multiplier Solve the Persistence Problem?” Econometrica, September 2000, 68(5), pp. 1151-79.

MAY/JUNE 2001

Corsetti, Giancarlo and Pesenti, Paolo. “Welfare and Macroeconomic Interdependence.” Quarterly Journal of Economics, 2001 (forthcoming).

Devereux, Michael B. “Do Fixed Exchange Rates Inhibit Macroeconomic Adjustment?” Unpublished manuscript, University of British Columbia, 1999.

___________ and Engel, Charles. “Fixed vs. Floating Exchange Rates: How Price Setting Affects the Optimal Choice of Exchange-Rate Regime.” Working Paper No. 6867, National Bureau of Economic Research, 1998.

Dixon, Huw and Rankin, Neil. “Imperfect Competition and Macroeconomics: A Survey.” Oxford Economic Papers, April 1994, 46(2), pp. 171-99.

Dornbusch, Rudiger. “Expectations and Exchange Rate Dynamics.” Journal of Political Economy, December 1976, 84(6), pp. 1161-76.

Doyle, Brian M. “Reputation and Currency Crises (or ‘Countries of a Feather Devalue Together’).” Unpublished manuscript, Board of Governors of the Federal Reserve System, 2000.

ECU Institute. International Currency Competition and the Future Role of the Single European Currency. London: Kluwer Law International, 1995.

Engel, Charles. “Real Exchange Rates and Relative Prices: An Empirical Investigation.” Journal of Monetary Economics, August 1993, 32(1), pp. 35-50.

___________. “Accounting for U.S. Real Exchange Rate Changes.” Journal of Political Economy, June 1999, 107(3), pp. 507-38.

___________ and Rogers, John H. “How Wide Is the Border?” American Economic Review, December 1996, 86(5), pp. 1112-25.

Finn, Mary G. “Perfect Competition and the Effects of Energy Price Increases on Economic Activity.” Journal of Money, Credit and Banking, August 2000, 32(3), pp. 400-16.

Fleming, J. Marcus. “Domestic Financial Policies Under Fixed and Under Floating Exchange Rates.” International Monetary Fund Staff Papers, November 1962, 9(3), pp. 369-80.

Ghironi, Fabio. “Macroeconomic Interdependence Under Incomplete Markets.” Unpublished manuscript, Federal Reserve Bank of New York, 2000a.

___________. “U.S.-Europe Economic Interdependence and Policy Transmission.” Unpublished manuscript, Federal Reserve Bank of New York, 2000b.

___________. “Towards New Open Economy Macroeconometrics.” Staff Report 100, Federal Reserve Bank of New York, 2000c.

___________ and Rebucci, Alessandro. “Monetary Rules for Emerging Market Economies.” Unpublished manuscript, Federal Reserve Bank of New York and International Monetary Fund, 2000.

Goldberg, Pinelopi K. and Knetter, Michael M. “Goods Prices and Exchange Rates: What Have We Learned?” Journal of Economic Literature, September 1997, 35(3), pp. 1243-72.

Hau, Harald. “Exchange Rate Determination: The Role of Factor Price Rigidities and Nontradables.” Journal of International Economics, April 2000, 50(2), pp. 421-47.

Hodrick, Robert J. “Risk, Uncertainty, and Exchange Rates.” Journal of Monetary Economics, May 1989, 23(3), pp. 433-59.

Jeanne, Olivier. “Generating Real Persistent Effects of Monetary Shocks: How Much Nominal Rigidity Do We Really Need?” European Economic Review, June 1998, 42(6), pp. 1009-32.

Kimball, Miles S. “The Quantitative Analytics of the Basic Neomonetarist Model.” Journal of Money, Credit and Banking, November 1995, 27(4), pp. 1241-77.

King, Robert G. and Wolman, Alexander L. “Inflation Targeting in a St. Louis Model of the 21st Century.” Federal Reserve Bank of St. Louis Review, May/June 1996, 78(3), pp. 83-107.

Knetter, Michael M. “International Comparisons of Price-to-Market Behavior.” American Economic Review, June 1993, 83(3), pp. 473-86.

Kollmann, Robert. “Incomplete Asset Markets and the Cross-Country Consumption Correlation Puzzle.” Journal of Economic Dynamics and Control, May 1996, 20(5), pp. 945-61.

___________. “The Exchange Rate in a Dynamic-Optimizing Current Account Model with Nominal Rigidities: A Quantitative Investigation.” Working Paper WP/97/07, International Monetary Fund, January 1997.

Krugman, Paul R. “Pricing to Market When the Exchange Rate Changes,” in Sven W. Arndt and J. David Richardson, eds., Real-Financial Linkages Among Open Economies. Cambridge, MA: MIT Press, 1987.

Lane, Philip R. “Inflation in Open Economies.” Journal of International Economics, May 1997, 42(3-4), pp. 327-47.

___________. “The New Open Economy Macroeconomics: A Survey.” Discussion Paper No. 2115, Centre for Economic Policy Research, March 1999; forthcoming in Journal of International Economics.

___________. “Money Shocks and the Current Account,” in Guillermo Calvo, Rudiger Dornbusch, and Maurice Obstfeld, eds., Money, Capital Mobility, and Trade: Essays in Honor of Robert Mundell. Cambridge, MA: MIT Press, 2001.

Lucas, Robert E., Jr. “Interest Rates and Currency Prices in a Two-Country World.” Journal of Monetary Economics, November 1982, 10(3), pp. 335-59.

Mundell, Robert A.
“The Appropriate Use of Monetary and Fiscal Policy for Internal and External Stability.” International Monetary Fund Staff Papers, March 1962, 9(1), pp. 70-79.

___________. “Capital Mobility and Stabilization Policy Under Fixed and Flexible Exchange Rates.” Canadian Journal of Economics and Political Science, November 1963, 29(4), pp. 475-85.

Obstfeld, Maurice. “International Capital Mobility in the 1990s,” in Peter B. Kenen, ed., Understanding Interdependence: The Macroeconomics of the Open Economy. Princeton: Princeton University Press, 1995.

___________ and Rogoff, Kenneth. “Exchange Rate Dynamics Redux.” Journal of Political Economy, June 1995, 103(3), pp. 624-60.

___________ and ___________. Foundations of International Macroeconomics. Cambridge, MA: MIT Press, 1996.

___________ and ___________. “Risk and Exchange Rates.” Working Paper No. 6694, National Bureau of Economic Research, August 1998.

___________ and ___________. “New Directions for Stochastic Open Economy Models.” Journal of International Economics, February 2000a, 50(1), pp. 117-53.

___________ and ___________. “Do We Really Need a New International Monetary Compact?” Working Paper No. 7864, National Bureau of Economic Research, August 2000b.

___________ and ___________. “The Six Major Puzzles in International Macroeconomics: Is There a Common Cause?” Working Paper No. 7777, National Bureau of Economic Research, July 2000c; in Ben Bernanke and Kenneth Rogoff, eds., National Bureau of Economic Research Macroeconomics Annual 2000. Cambridge, MA: National Bureau of Economic Research and MIT Press, 2001 (forthcoming).

Rotemberg, Julio J. and Woodford, Michael. “Imperfect Competition and the Effects of Energy Price Increases on Economic Activity.” Journal of Money, Credit and Banking, November 1996, 28(4), pp. 550-77.

Samuelson, Paul. “The Transfer Problem and Transport Costs, II: Analysis of Effects of Trade Impediments.” Economic Journal, June 1954, 64(254), pp. 264-89.

Sarno, Lucio.
“Towards a New Paradigm in Open Economy Modeling: Where Do We Stand?” Unpublished manuscript, University of Warwick, 2000; available from Brian Doyle’s New Open Economy Macroeconomics, <http://www.geocities.com/brian_m_doyle/open.html>.

___________ and Taylor, Mark P., eds. New Developments in Exchange Rate Economics, Volumes I-II, Critical Writings in Economics series. Northampton, MA: Edward Elgar, 2001a (forthcoming).

___________ and ___________. Exchange Rate Economics. Cambridge, MA: Cambridge University Press, 2001b (forthcoming).

Senay, Ozge. “The Effects of Goods and Financial Market Integration on Macroeconomic Volatility.” Manchester School, Supplement 1998, 66, pp. 39-61.

Stockman, Alan C. “A Theory of Exchange Rate Determination.” Journal of Political Economy, August 1980, 88(4), pp. 673-98.

___________. “The Equilibrium Approach to Exchange Rates.” Federal Reserve Bank of Richmond Economic Review, March/April 1987, 73(2), pp. 12-30.

Sutherland, Alan. “Financial Market Integration and Macroeconomic Volatility.” Scandinavian Journal of Economics, December 1996, 98(4), pp. 521-39.

Svensson, Lars E.O. and van Wijnbergen, Sweder. “Excess Capacity, Monopolistic Competition, and International Transmission of Monetary Disturbances.” Economic Journal, September 1989, 99(397), pp. 785-805.

Taylor, John B. “Aggregate Dynamics and Staggered Contracts.” Journal of Political Economy, February 1980, 88(1), pp. 1-23.

Tille, Cédric. “Substitutability and Welfare.” Unpublished manuscript, Federal Reserve Bank of New York, 1998a.

___________. “The Welfare Effects of Monetary Shocks Under Pricing to Market: A General Framework.” Unpublished manuscript, Federal Reserve Bank of New York, 1998b.

Velasco, Andres. “Multiplicity and Cycles in a Real Model of the Open Economy.” Unpublished manuscript, New York University, 1997.

Walsh, Carl E. Monetary Theory and Policy. Cambridge, MA: MIT Press, 1998.

Warnock, Francis E.
“Idiosyncratic Tastes in a Two-Country Optimizing Model: Implications of a Standard Presumption.” International Finance Discussion Paper No. 631, Board of Governors of the Federal Reserve System, April 1999.

A Simple Model of Limited Stock Market Participation

Hui Guo

The 1998 Survey of Consumer Finances data show that only 48.8 percent of U.S. households owned stocks, either (i) directly or (ii) indirectly through mutual funds. In addition, there is a close relationship between shareholding and wealth. In 1998, 93 percent of the richest 1 percent of the population owned stocks; the richest 10 percent owned 85 percent of total stocks and mutual funds, compared with 51 percent of total savings deposits. Meanwhile, the average stock return is “abnormally” higher than the average government bond return.1 In this paper, I try to explain this shareholding puzzle—why many people do not hold stocks given that stocks outperform government bonds by a large margin. It is costly to collect and process information about stock markets. Bertaut (1997) finds that better-educated people are more likely to hold stocks, even after controlling for variables such as wealth, current income, and unemployment risk. He interprets education as a measure of the ability to process information about the market and investment opportunities. However, information costs are not the only reason for limited stock market participation. Rather, recent research emphasizes that people tend to hold fewer risky assets such as stocks in their portfolios if they are more vulnerable to income shocks. For example, borrowing constraints (Guiso, Jappelli, and Terlizzese, 1996), labor income risks (Vissing-Jorgensen, 1998b), home ownership (Fratantoni, 1998), and entrepreneurial risks (Heaton and Lucas, 2000) are found to deter stock market entry. Moreover, these factors have smaller effects on people who have larger wealth.
Holtz-Eakin, Joulfaian, and Rosen (1994) find that people are willing to take more risks if they receive a large inheritance. In this paper, I develop a life-cycle model to show how market imperfections may interact with heterogeneous wealth to generate limited stock market participation. Many factors, such as successful entrepreneurial effort, life-cycle savings, precautionary savings, and inheritance, explain wealth inequality. To keep the model manageable, I focus on three key elements, namely, different investment opportunities (stocks and bonds), credit market imperfections, and inheritance. In the model, although the stock return is higher than the bond return, only people with wealth over a certain threshold own stocks. This occurs for two reasons. First, there is a fixed cost of entering the stock market. Second, people face a borrowing rate that is higher than the saving rate, so they cannot arbitrage by selling bonds and buying stocks. As a result, wealthy households accumulate more wealth and pass on a greater inheritance to their families than poor households do. In the long run, wealth is unequally distributed and wealthy households own almost all stocks. Other mechanisms have also been proposed to explain limited stock market participation. Becker (1980) shows that the most patient agent owns all capital in the long run. Allen and Gale (1994) argue that the less risk-averse person is more likely to hold stocks. Constantinides, Donaldson, and Mehra (2000) stress the life-cycle pattern of shareholdings. Asset returns and limited stock market participation are two closely related issues. However, recent research (e.g., Constantinides, Donaldson, and Mehra, 2000; Polkovnichenko, 2000; and Yaron and Zhang, 2000) has had difficulty explaining the two simultaneously in general equilibrium models. Therefore, I address asset returns and limited stock market participation separately in this paper.
First, the asset return is taken as given when I explain why there is limited stock market participation. Then limited stock market participation is taken as given when I discuss its effect on the asset return. Nevertheless, we ultimately need to develop a general equilibrium framework that explains both simultaneously, but this is beyond the scope of this paper. Limited stock market participation may have large effects on asset prices. In the standard framework, the consumption-based Capital Asset Pricing Model, the risk of the stock market return is measured by its covariance with shareholders’ consumption growth. Mehra and Prescott (1985) calculate this covariance using aggregate consumption data and find that it is too small to explain the observed equity premium unless we believe that investors are extremely risk averse.2 This is the so-called equity premium puzzle. In their calculation, Mehra and Prescott assume that everyone holds stocks so that they can use aggregate consumption instead of shareholders’ consumption. This assumption is inconsistent with the data because not everyone holds stocks. Recent research finds that limited stock market participation does help explain the equity premium puzzle. Using the Panel Study of Income Dynamics data, Mankiw and Zeldes (1991) find that shareholders’ consumption is more volatile and more positively correlated with stock market returns than non-shareholders’ consumption. Brav and Geczy (1996) and Vissing-Jorgensen (1998a) document a similar phenomenon using the Consumption Expenditure Survey data.

Hui Guo is an economist at the Federal Reserve Bank of St. Louis. Bill Bock provided research assistance.

1 Mehra and Prescott (1985) argue for an equity-premium puzzle: the observed equity premium is too large to be explained by existing theories.
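As a reminder (my summary, not part of the original article), the covariance-based risk measure referred to here is the standard consumption-CAPM Euler equation; with CRRA preferences and joint lognormality it delivers the familiar approximation linking the equity premium to the covariance between consumption growth and the stock return:

```latex
% Consumption-CAPM Euler equation for any gross return R_{t+1}:
E_t\!\left[\beta \left(\frac{C_{t+1}}{C_t}\right)^{-\gamma} R_{t+1}\right] = 1.
% With risk-aversion coefficient \gamma and jointly lognormal consumption
% growth and returns, the equity premium satisfies (up to a Jensen term)
E\!\left[r^e\right] - r^f \approx \gamma \,
  \mathrm{cov}\!\left(\Delta \log C,\; r^e\right).
```

The paper's point is that $C$ here should be shareholders' consumption, not aggregate consumption, which is why the Mankiw-Zeldes evidence matters.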
In contrast, Guo (2000) explores the connection between limited stock market participation and asset prices by calibrating a heterogeneous agent model in which only one type of agent holds stocks. Under reasonable parameterizations, the simulated data match the first two moments of the risk-free rate and the stock market return. A related issue is whether the most recent bull market was brought about by the increase in stock market participation. According to the Survey of Consumer Finances data, the stock market participation rate increased from 31.7 percent in 1989 to 48.8 percent in 1998. However, stockholdings remain extremely concentrated. For example, the wealthiest 10 percent of U.S. households owned 85 percent of total stocks and mutual funds in 1998, only slightly lower than the 86 percent owned in 1989. Wolff (2000) also reports that the participation rate drops sharply if small shareholders are excluded. Therefore, there is little change in the concentration of stock ownership, and the most recent bull market is unlikely to be explained by the increase in the stock market participation rate.3 Given that stock prices fluctuate widely in historical data, the most recent run-up in stock prices may be a deviation from trend. Limited stock market participation might also help reconcile some macroeconomic anomalies. For example, although rich people own almost all stocks, their consumption share is relatively small. This explains why aggregate consumption is not very responsive to stock price fluctuations. However, the effects of limited stock market participation on business cycles have not been fully explored yet. Future research along this line should improve our understanding of the economy. The paper is organized as follows. I first present some stylized facts and then use an overlapping-generations model to help explain limited stock market participation and wealth inequality.
In the last section I discuss the implications for asset prices.

SOME STYLIZED FACTS

In this section, I summarize some stylized facts about financial markets and stock market participation.

• The stock return is persistently higher than the risk-free rate over long horizons.
• Very wealthy households own almost all stocks and other investment assets.
• The share of wealth held by very wealthy households is positively correlated with stock prices.
• The intergenerational transfer is an important channel through which wealth inequality is preserved over time.

Stocks Outperform Risk-Free Assets Over Long Horizons

The stock return is much higher than the risk-free rate. During the period 1871-1998, the real continuously compounded stock market return was about 7.0 percent per year, while the risk-free rate was only 2.4 percent.4 The difference is dramatically amplified by the compounding effect. If you had invested one dollar in large company stocks at year-end 1925, you would have had $1,370.95 by year-end 1996. On the other hand, one dollar invested in short-term government bonds grew to only $13.54 over the same period (Ibbotson Associates, 1997).

2 See Kocherlakota (1996) for a survey of recent research on this issue.

3 Heaton and Lucas (1999) make a similar argument. They also explore some other explanations for the most recent stock price run-up.

4 These returns were calculated from the historical data constructed by Robert Shiller, which are available from his homepage <http://aida.econ.yale.edu/~shiller/>. The risk-free rate may be overestimated because it is the return on primary commercial paper in Shiller’s data. The annual real return on Treasury bills is 0.6 percent for the period 1926-96 (Ibbotson Associates, 1997).
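The compounding arithmetic above is easy to reproduce. A minimal sketch, assuming a constant continuously compounded annual return (the 7.0 percent and 2.4 percent figures are the paper's 1871-1998 averages; the horizons are illustrative):

```python
import math

def terminal_wealth(rate, years, initial=1.0):
    """Terminal wealth from a constant continuously compounded annual return."""
    return initial * math.exp(rate * years)

# Over one 30-year period, 7.0% vs. 2.4% compounds to roughly 8x vs. 2x
# the initial investment -- in the same ballpark as the gross per-period
# returns Rs of about 760 percent and Rb of about 200 percent that the
# model section later assumes.
stock_30yr = terminal_wealth(0.070, 30)
bond_30yr = terminal_wealth(0.024, 30)
```

Small differences in the annual rate compound into very large differences in terminal wealth, which is the point of the $1,370.95 versus $13.54 comparison.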
[Figure 1: Annualized Equity Premium Over a 30-Year Horizon, 1900-2000. SOURCE: Shiller’s data; see footnote 4.]
[Figure 2: Assets Held by the Richest 10 Percent, 1983-1998 (stocks and mutual funds, investment assets, and other assets). SOURCE: Wolff (2000).]

Fama and French (1988), among many others, also document a mean-reverting process in stock prices. This is often interpreted as meaning that stocks are not as risky in the long run as they are in the short run. In Figure 1, I plot the annualized equity premium over a 30-year horizon. For example, the value corresponding to the year 1997 is the average equity premium over the period 1968-97. The equity premium over a long horizon is always positive, even for periods that include the 1929 stock market crash. However, it is not my intention to advocate stock market investing. First, Samuelson (1994), among others, is skeptical about these finite-sample results. He argues that “if you adhere to the dogma that stocks must beat bonds in the long-enough run, there is no P/E level that the market averages out to at which you will take in sail. A Ponzi bubble is ever possible, and given past psychologies of boom and bust, ever-higher P/E ratios become a self-fulfilling prophecy…” (Samuelson, 1994, p. 19). Second, the stock market can be very volatile in the short run, and it is rational for people not to participate in stock markets if their wealth is too little to absorb large shocks to stock prices. Heaton and Lucas (2000) argue that even wealthy households should hold fewer stocks if they have significant proprietary income.

Very Wealthy Households Own Most Stocks

Stocks are highly concentrated in the hands of very wealthy households. In 1998, the richest 10 percent of U.S. households owned 85 percent of total stocks and mutual funds.
They also held most other investment assets, including financial securities, trusts, business equity, and non-home real estate. In fact, they owned 86 percent of total investment assets. However, they held only 44 percent of the other assets, including principal residence, deposits, life insurance, and pension accounts. As shown in Figure 2, their shares of stocks and mutual funds, total investment assets, and other assets were relatively stable over the period 1983-98. Portfolio compositions are also quite different between the very wealthy and average households, as shown in Figure 3.

[Figure 3: Portfolio Compositions: Top 1% Richest and Bottom 80% Poorest (principal residence, liquid assets, pension assets, investment assets). SOURCE: Wolff (1998).]
[Figure 4: Wealth Distribution and Detrended Stock Prices (wealth share of top 1 percent and detrended stock price), 1920-2000.]

The principal residence is the most important asset for average U.S. households, accounting for 65.9 percent of the total assets of the poorest 80 percent of U.S. households in 1995. They also allocated 11.1 percent to liquid assets, 8.5 percent to pension assets, and 12.2 percent to investment assets. Conversely, the richest 1 percent put 78.5 percent of their total wealth in investment assets, 6.4 percent in their principal residence, 7.7 percent in liquid assets, and 4.7 percent
There are two reasons. First, the wealthiest 1 percent own almost all stocks, whereas the principle residence is the most important asset held by the average U.S. households. Second, stock prices are much more volatile than the prices of other wealth components, including principle residence. Changes in the valuation of existing assets are thus dominated by fluctuations in the stock market (Ludvigson and Steindel, 1999). Inheritance and Wealth Inequality There are many factors that explain wealth inequality, such as successful entrepreneurial effort, life-cycle savings, precautionary savings, and inheritance. Here, I want to stress the empirical relevance of inheritance, which is a key element of the model presented in the next section. 40 M AY / J U N E 2 0 0 1 15 1920 1930 1940 1950 1960 1970 1980 1990 -2 2000 SOURCE: Wolff (1995) and Wolff (2000) for wealth share; Shiller’s data for stock prices. Inhaber and Carroll (1992) argue that inheritance is one of the most important sources of wealth for the richest people, while it is a minor source of assets for most others. For example, 80 percent of the U.S. population claims never to have inherited any assets, and only 1 percent of the population admits to having inherited assets of $110,000 or more (Inhaber and Carroll, 1992, p. 73). Moreover, in both 1988 and 1989, more than one third of the 400 wealthiest Americans listed their primary source of wealth as inheritance, according to Forbes magazine. Kotlikoff and Summers (1981) argue that intergenerational transfers account for the vast majority of aggregate U.S. capital formation. Dynan, Skinner, and Zeldes (2000) also find that inheritance is crucial in explaining the different saving pattern between the rich and the poor. A LIMITED STOCK MARKET PARTICIPATION MODEL For simplicity, I adopt an overlapping generation model with bequest motives, which is similar to the model studied by Galor and Zeira (1993). 
While Galor and Zeira emphasized the importance of different education opportunities in explaining wealth inequality, I assume that households have different investment opportunities—they can invest in either stocks or bonds. The stock return is higher than the bond return; however, there is a fixed stock market entry cost. In the credit market, banks accept deposits and make loans, and the household faces a borrowing rate that is higher than the saving rate. I show that, initially, only households with endowments over a certain threshold find it optimal to hold stocks because of the fixed entry cost and the wedge between the saving rate and the borrowing rate. Moreover, some households that initially hold stocks eventually leave the stock market because their relatively small endowments do not allow them to leave a large bequest. Rich households, however, always hold stocks and accumulate wealth faster than poor households do because they enjoy a higher rate of return on their assets. As a result, wealth is unequally distributed and rich people hold all stocks in the long run.

Model Setup

There is a continuum of households in an economy that persists forever. At time t, each household, say i, has a new cohort born, h_t^i. The new cohort receives a bequest, M_t^i, from a parent, h_{t-1}^i, which can be invested in stocks or bonds. At time t+1, he receives labor income, L, and the payoff from his earlier investments.5 After leaving a bequest M_{t+1}^i to his one child, h_{t+1}^i, he consumes the rest of his wealth and exits the economy. It is costly to enforce loan contracts. Cohorts can save or borrow only through banks, which have the lowest enforcement costs. Banks raise money by issuing bonds, which promise a gross rate of return, R_b.
I assume that the enforcement cost is proportional to the borrower’s leverage ratio, so a cohort can borrow only at the rate R_b(1+D/W), where D is his outstanding debt and W is his net worth.6 A cohort can also invest in stocks, which offer a higher rate of return, R_s, than bonds do. However, there is a fixed stock market entry cost F>0. The fixed entry cost can be interpreted as informational costs and other factors that affect stock market participation decisions, as discussed in the introduction. For simplicity, I assume that the gross stock return, R_s, and the gross bond return, R_b, are constant and that R_s>R_b in the baseline model. However, adding noise to stock returns does not change the results in any qualitative way as long as stocks are better investments than bonds, in the sense that mean returns to stocks are larger than mean returns to bonds. Lastly, there is a progressive tax, t_b, on inheritance, which will be discussed in more detail in the next section.

Maximization Problem

Since cohorts differ only in their endowments and bequests, I ignore the superscript i and subscript t. Instead, I denote M_t^i, the bequest received by a t-cohort, as M; and I denote M_{t+1}^i, the bequest left by a t-cohort, as B. Cohorts have identical preferences, which depend on consumption, C, and bequests, B. The utility function is

(1)   \max_{C,B} \; a \log(C) + (1-a) \log[(1-t_b)B],

where a is the relative weight given to consumption. The maximization of equation (1) is subject to budget constraints. If a cohort decides to stay out of the stock market and invest all his endowment in bonds, his budget constraint is

(2)   C + B \le M R_b + L.

Otherwise, he chooses to pay the fixed entry cost F to invest in stocks and his budget constraint is

(3)   C + B \le (M + D - F) R_s + L - D R_b \left(1 + \frac{D}{M-F}\right),

where D is the amount that he borrows from the bond market and R_b[1+D/(M-F)] is the rate at which he can borrow, because his net wealth is M-F after he pays the fixed cost.
The progressive estate tax is defined by equation (4): only bequests that exceed a maximum untaxed level \bar{B} are taxed:

(4)   t_b = \begin{cases} 0 & \text{if } B \le \bar{B} \\ t & \text{if } B > \bar{B}. \end{cases}

The maximization is done in two steps. A cohort first maximizes his total wealth, W_e, which is the sum of the payoff to the first period’s investments and labor income. He then decides how to allocate total wealth between consumption and bequests.

Maximization of Wealth. A cohort does one of two things: either (i) he invests all his endowment in bonds, and his second-period total wealth is given by

(5)   W_e = M R_b + L,

or (ii) he pays the fixed entry cost, F, to participate in the stock market, and his second-period total wealth is given by

(6)   W_e = \max_{D} \; (M + D - F) R_s + L - D R_b \left(1 + \frac{D}{M-F}\right) = \frac{(R_s - R_b)^2 (M-F)}{4 R_b} + (M-F) R_s + L.

Because W_e is a linearly increasing function of the endowment, M, in equations (5) and (6), there exists a threshold level of endowment,

M^* = \frac{F (R_s + R_b)^2}{(R_s - R_b)(R_s + 3 R_b)},

at which the cohort is indifferent between the two investment strategies. Moreover, he chooses to invest in bonds if his endowment is below M^* and chooses to invest in stocks otherwise. Total wealth, W_e, is then given by equation (7), which is a monotonically increasing function of M:

(7)   W_e = \begin{cases} M R_b + L & \text{if } M \le M^* \\ \frac{(R_s - R_b)^2 (M-F)}{4 R_b} + (M-F) R_s + L & \text{if } M \ge M^*. \end{cases}

5 Galor and Zeira (1993) show that heterogeneity in labor income leads to wealth inequality. Here I want to stress the importance of heterogeneity in investment opportunities for the wealth distribution. To ensure a clear demonstration, I assume that all cohorts receive the same labor income, L. Adding Galor and Zeira’s heterogeneous labor income should not change these results in any qualitative way.

6 In a general equilibrium setting, Bernanke, Gertler, and Gilchrist (1999) show that the cost of external funds depends negatively on firms’ net worth relative to the gross value of capital.
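The closed form in equation (6) follows from the first-order condition for the borrowing choice D; the intermediate step, not spelled out in the original, is:

```latex
% Differentiate the objective in equation (6) with respect to D:
\frac{\partial W_e}{\partial D}
  = R_s - R_b - \frac{2 D R_b}{M-F} = 0
\quad\Longrightarrow\quad
D^* = \frac{(M-F)(R_s - R_b)}{2 R_b}.
% Substituting D^* back, the extra wealth from optimal leverage is
%   D^*(R_s - R_b) - (D^*)^2 R_b/(M-F) = (R_s - R_b)^2 (M-F)/(4 R_b),
% the first term of the closed form in equation (6). Equating (5) and (6)
% and solving for M then yields the threshold M^*, using
%   (R_s + R_b)^2 - 4 R_b^2 = (R_s - R_b)(R_s + 3 R_b).
```

This also shows why the threshold M^* is proportional to the entry cost F: with F=0, every cohort would enter the stock market.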
Consumption and Bequests. A cohort chooses consumption, C, and bequests, B, to maximize equation (1), subject to the estate tax (equation (4)) and the reduced budget constraint (equation (8)):

(8)   C + B \le W_e,

where W_e is defined by equation (7). Clearly, there exist W_e^* and W_e^{**}, with W_e^* < W_e^{**}, such that the optimal after-tax bequest is as follows:

(9)   B^* = \begin{cases} (1-a) W_e & \text{if } W_e \le W_e^* \\ \bar{B} & \text{if } W_e^* \le W_e \le W_e^{**} \\ (1-a)(1-t) W_e & \text{if } W_e > W_e^{**}. \end{cases}

Denote W_e^{-1} as the inverse function of W_e(M). It is well defined because W_e(M) is a monotonically increasing function of M. Define M^{**} = W_e^{-1}(W_e^*) and M^{***} = W_e^{-1}(W_e^{**}). It is clear that M^{***} is greater than M^{**}. After substituting equation (7) into equation (9), the optimal after-tax bequest is a function of the endowment M, as in equation (10). Here we assume that M^{**} is greater than M^*, so that non-shareholders do not have to worry about the estate tax:

(10)   B^* = \begin{cases} (1-a)(M R_b + L) & \text{if } M < M^* \\ (1-a)\left[\frac{(R_s - R_b)^2 (M-F)}{4 R_b} + (M-F) R_s + L\right] & \text{if } M^* \le M < M^{**} \\ \bar{B} & \text{if } M^{**} \le M \le M^{***} \\ (1-a)(1-t)\left[\frac{(R_s - R_b)^2 (M-F)}{4 R_b} + (M-F) R_s + L\right] & \text{otherwise.} \end{cases}

The Dynamics of Wealth

Equation (10) is a first-order difference equation in bequests. The phase diagram is plotted in Figure 5 under the following conditions:

1. (1-a) R_b < 1,
2. (1-a)\left[\frac{(R_s - R_b)^2}{4 R_b} + R_s\right] > 1,
3. (1-a)(1-t)\left[\frac{(R_s - R_b)^2}{4 R_b} + R_s\right] < 1.

The first condition ensures at least one stable steady state, which is the same as in Galor and Zeira (1993). The second condition ensures that wealth diverges in the long run. The third condition is required so that rich people’s wealth has a well-defined steady state; otherwise, it goes to infinity. These conditions hold under reasonable parameterizations. Let us assume that there are 30 years in each period. According to Shiller’s data, R_b is about 200 percent and R_s is about 760 percent.
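The wealth-maximization step in equations (5) through (7) is easy to check numerically. A minimal sketch: the gross 30-year returns Rs of roughly 7.6 and Rb of roughly 2.0 follow the ballpark figures just cited, while the entry cost F and labor income L are illustrative assumptions made purely for this check.

```python
def wealth_bond(M, Rb=2.0, L=1.0):
    """Second-period wealth when the whole endowment goes into bonds (eq. 5)."""
    return M * Rb + L

def wealth_stock(M, Rs=7.6, Rb=2.0, L=1.0, F=1.0):
    """Closed-form wealth with optimal leverage D* = (M-F)(Rs-Rb)/(2Rb) (eq. 6)."""
    net = M - F
    return (Rs - Rb) ** 2 * net / (4 * Rb) + net * Rs + L

def wealth_stock_bruteforce(M, Rs=7.6, Rb=2.0, L=1.0, F=1.0, steps=100_000):
    """Maximize over the borrowing choice D on a grid; should match eq. (6)."""
    net = M - F
    best = float("-inf")
    for i in range(steps + 1):
        D = 2.0 * net * i / steps  # grid wide enough to bracket D*
        val = (M + D - F) * Rs + L - D * Rb * (1 + D / net)
        best = max(best, val)
    return best

def M_star(Rs=7.6, Rb=2.0, F=1.0):
    """Participation threshold: the endowment at which (5) and (6) coincide."""
    return F * (Rs + Rb) ** 2 / ((Rs - Rb) * (Rs + 3 * Rb))
```

At the threshold the two strategies yield identical wealth, the brute-force maximum agrees with the closed form, and endowments above (below) the threshold favor stocks (bonds).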
The current highest marginal estate tax rate is 60 percent. Under this parameterization, conditions 1 through 3 hold for $0.78 < a < 0.91$. If the population grows at 2 percent per year, conditions 1 through 3 hold for $0.60 < a < 0.84$.^7

Figure 5: Dynamics of Bequest

Figure 6: Dynamics of Bequest with Uncertainty

The following results are shown clearly in Figure 5. First, cohorts with initial endowments less than $M^*$ do not hold stocks. Second, there are two stable steady states, $M^L$ and $M^H$. Rich households with initial endowments greater than $M^M$ converge to $M^H$, and the remaining poor households converge to $M^L$. Therefore, wealth is unequally distributed and only rich people hold stocks in the long run. Third, reductions in the fixed entry cost move $M^*$ toward the origin and therefore increase stock market participation.

Because of its simplicity, the baseline model does not provide a complete description of the data. In particular, the actual wealth distribution is not bimodal, people do move up and down the economic scale, and wealthy households also own a significant amount of bonds in the data. This is because random factors are assumed away in the baseline model. For example, entrepreneurial success or failure can generate mobility in wealth, and rich people hold bonds to diversify risks. Incorporating these considerations should improve the model's predictions. As an example, I show in the next section that the model can generate a more realistic wealth distribution if stock returns are stochastic.

Stochastic Stock Returns. For simplicity and without loss of generality, I assume that stock returns are random realizations of two values and are not serially correlated. Also, the investment decision is made before the stock return is realized.
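The parameterization above and the two-steady-state dynamics of Figure 5 can be checked numerically. In the sketch below, $R_b = 3$ and $R_s = 8.6$ are gross 30-year returns (roughly 200 and 760 percent) and $t = 0.6$, following the text; the values of F, L, and $\bar{B}$, and the tax thresholds $W_e^* = \bar{B}/(1-a)$ and $W_e^{**} = 2\bar{B}/(1-a)$, are illustrative assumptions, since the paper pins down $W_e^{**}$ from the utility function, which is not reproduced here.

```python
# Check the admissible range of a under conditions 1-3, then simulate the
# bequest dynamics of equation (10) for one admissible value of a.
# F, L, B_bar and the thresholds We1, We2 are illustrative assumptions.

Rb, Rs, t = 3.0, 8.6, 0.6
A = (Rs - Rb) ** 2 / (4 * Rb) + Rs            # bracketed term in conditions 2-3

def conditions_hold(a, Rp=1.0):
    """Conditions 1-3, with footnote 7's population-growth factor 1/Rp."""
    return ((1 - a) * Rb / Rp < 1 and
            (1 - a) * A / Rp > 1 and
            (1 - a) * (1 - t) * A / Rp < 1)

ok = [i / 1000 for i in range(1, 1000) if conditions_hold(i / 1000)]
print(min(ok), max(ok))                        # -> 0.778 0.91

Rp = 1.02 ** 30                                # 2 percent annual population growth
ok_g = [i / 1000 for i in range(1, 1000) if conditions_hold(i / 1000, Rp)]
print(min(ok_g), max(ok_g))                    # -> 0.597 0.838

a, F, L, B_bar = 0.85, 1.0, 1.0, 10.0
assert conditions_hold(a)

M_star = F * (Rs + Rb) ** 2 / ((Rs - Rb) * (Rs + 3 * Rb))
We1, We2 = B_bar / (1 - a), 2 * B_bar / (1 - a)   # assumed tax thresholds

def bequest(M):
    """Equation (10): optimal after-tax bequest given endowment M."""
    We = M * Rb + L if M <= M_star else A * (M - F) + L
    if We <= We1:
        return (1 - a) * We            # estate tax does not bind
    if We <= We2:
        return B_bar                   # bunch at the exemption level
    return (1 - a) * (1 - t) * We      # pay the top marginal rate

def dynasty(M0, generations=60):
    M = M0
    for _ in range(generations):
        M = bequest(M)                 # each bequest is the next cohort's endowment
    return M

# Poor dynasties converge to M_L on the bond branch; rich dynasties settle at
# the exemption level, which plays the role of M_H in this parameterization.
M_L = (1 - a) * L / (1 - (1 - a) * Rb)
assert abs(dynasty(0.5) - M_L) < 1e-9
assert dynasty(5.0) == B_bar
```

The printed intervals reproduce the ranges stated in the text ($0.78 < a < 0.91$ without population growth and $0.60 < a < 0.84$ with it), and the simulation shows the bimodal long-run wealth distribution of Figure 5.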
Each cohort now maximizes expected utility in his first period, and his consumption/bequest decision is the same as in the certainty case. If conditions 1 through 3 hold in each state, the dynamics of bequests with stochastic stock returns are as shown in Figure 6. Note that the portfolio decision is independent of the state because stock returns are not serially correlated. In the long run, households with initial endowments less than $M^M$ converge to $M^L$. Households with initial endowments greater than $M^{M2}$ converge to the $M^{H2}/M^H$ region. The other households' wealth may fluctuate between $M^M$ and $M^{M2}$, depending on the realizations of the stock return. The stochastic return model thus generates an additional middle class who owns some stocks. Moreover, it predicts that wealth inequality, or the share of wealth held by the rich, increases with stock prices, as observed in the data.

7. If there is population growth, conditions 1 through 3 become as follows (where $R_p$ is the growth rate of population):

1. $(1-a) R_b \dfrac{1}{R_p} < 1$,

2. $(1-a)\left[\dfrac{(R_s - R_b)^2}{4 R_b} + R_s\right]\dfrac{1}{R_p} > 1$,

3. $(1-a)(1-t)\left[\dfrac{(R_s - R_b)^2}{4 R_b} + R_s\right]\dfrac{1}{R_p} < 1$.

IMPLICATIONS FOR ASSET PRICES

The asset return is taken as given in the limited stock market participation model presented in the previous section. In this section, I discuss the effect of limited stock market participation on the asset return. Agents are usually assumed to be homogeneous in economic models for the sake of simplicity. However, asset pricing models with homogeneous agents do not provide a good description of asset returns. They fail to explain why the equity premium is so high (the equity premium puzzle) and why the stock price is so volatile (the excess volatility puzzle). In modern asset pricing models, agents are risk averse and prefer smooth consumption. The return to an asset thus depends on how well the asset can be used to smooth agents' consumption.
Intuitively, one dollar is more valuable in bad states, when consumption is low, than in good states, when consumption is high. A stock is thus unattractive, and shareholders demand a positive premium to hold it, if its return is low (high) when shareholders' consumption is low (high). Also, the more risk averse shareholders are, the larger the risk premium they require. It can be shown that, in a frictionless economy, the equity premium is equal to $\gamma \sigma_{r,s}$, where $\gamma$ is a measure of relative risk aversion and $\sigma_{r,s}$ is the covariance between the stock return and the shareholder's consumption growth. This is the so-called consumption-based capital asset pricing model. Assuming that everyone holds stocks, Mehra and Prescott (1985) calculate this covariance using aggregate consumption and find that it is too small to explain the observed equity premium; this is the equity premium puzzle.

It also can be shown that the asset price $P_t$ is equal to the sum of expected cash flows, $D_{t+i}$, weighted by the stochastic discount factor, $R_{t+i}$, or

$P_t = E_t\left[\sum_{i=1}^{\infty} R_{t+i} D_{t+i}\right].$

In the homogeneous agent model, $R_{t+i}$ is equal to the intertemporal marginal rate of substitution,

$\beta^i \frac{U'(C_{t+i})}{U'(C_t)},$

where $\beta$ is the time discount factor, $U'$ is the marginal utility, and $C_t$ is aggregate consumption at time t. Variations in asset prices thus come from two sources: shocks to the cash flow, $D_{t+i}$, and shocks to the stochastic discount factor, $R_{t+i}$, which in turn are caused by aggregate consumption shocks. Shiller (1981) finds that dividends are too smooth to explain much of the variation in stock prices. Similarly, Campbell (1991) finds that most of the variation in stock prices comes from innovations in the stochastic discount factor. However, aggregate consumption is too smooth to generate the volatile stochastic discount factor implied by the financial data. This is the excess volatility puzzle.
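The consumption-based CAPM statement above can be illustrated with a small Monte Carlo exercise: price a single risky payoff with a CRRA stochastic discount factor and check that the simulated equity premium is close to $\gamma \sigma_{r,s}$. All distributional parameters below (means, volatilities, the 0.4 correlation between consumption growth and the stock payoff) are illustrative assumptions.

```python
# Monte Carlo sketch of the consumption-based CAPM: with CRRA utility the
# equity premium is approximately gamma * cov(stock return, consumption growth).
# All parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
beta, gamma = 0.98, 3.0                      # discount factor, risk aversion
N = 1_000_000

# Jointly normal log consumption growth g and log stock payoff x.
mean = [0.02, 0.0]
cov = [[0.02 ** 2, 0.4 * 0.02 * 0.15],
       [0.4 * 0.02 * 0.15, 0.15 ** 2]]
g, x = rng.multivariate_normal(mean, cov, size=N).T

m = beta * np.exp(-gamma * g)                # stochastic discount factor
payoff = np.exp(x)
price = np.mean(m * payoff)                  # price the payoff so E[m R] = 1
R = payoff / price                           # gross stock return
Rf = 1.0 / np.mean(m)                        # gross risk-free rate

premium = np.mean(R) - Rf
approx = gamma * np.cov(R, g)[0, 1]          # gamma * sigma_{r,s}
assert premium > 0
assert abs(premium - approx) / premium < 0.25
```

Because consumption growth here is as smooth as aggregate consumption in the data, the simulated premium is tiny (well under 1 percent per period), which is exactly the Mehra-Prescott point: with plausible risk aversion, the covariance term is far too small to match the observed premium.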
Preferences are assumed to be time separable in the examples given above. Constantinides (1990) shows that it is possible to use aggregate consumption to generate a volatile stochastic discount factor, as well as a large and volatile equity premium, in a habit formation model, in which utility depends on both current and past consumption. However, the risk-free rate is very volatile in his model because it is priced by the same volatile stochastic discount factor. This is easy to understand. Cash flows are stochastic for stocks and predetermined for bonds; given that dividends are smooth in the data, this difference is rather small. Stocks and bonds should thus exhibit similar properties, i.e., similar means and variances, if they are priced by the same stochastic discount factor. However, the stock return is much higher and much more volatile than the bond return in the data. Therefore, stocks and bonds are not likely to be priced by the same stochastic discount factor. This poses a serious challenge to the homogeneous agent model.(8)

Recent research by Guo (2000) suggests that these puzzles might be related to the fact that only a few wealthy people own almost all stocks. He shows that a heterogeneous agent model of limited stock market participation can replicate these phenomena. There are two types of agents in his model: one is a shareholder and the other is a nonshareholder. Both receive labor income, but only the shareholder receives dividends. Labor income and dividends follow stochastic processes. Both agents use bonds to diversify income risk; for example, an agent buys (sells) bonds when his income is above (below) trend. However, each agent can borrow from the bond market only up to a limited amount; that is, there are borrowing constraints. The model is calibrated using income processes estimated by Heaton and Lucas (1996), and the simulated data match the mean and variance of stock returns and bond returns under reasonable parameterizations.

8. One exception is Campbell and Cochrane (1999), who avoid this problem by choosing a particular habit form so that the risk-free rate is constant. However, they need very large risk aversion to explain the puzzles mentioned above. Therefore, they do not really solve the equity premium puzzle.

Unlike in the homogeneous agent model, stocks and bonds may be priced by different stochastic discount factors, as in the model of Guo (2000), because of limited stock market participation. In particular, while bonds are priced by the intertemporal marginal rate of substitution of any unconstrained agent, stocks are always priced by the shareholder's intertemporal marginal rate of substitution. Stocks and bonds are thus priced by different intertemporal marginal rates of substitution when the shareholder's borrowing constraints are binding. The stochastic discount factor for bonds is

$\max\left\{\beta^i \frac{U'(C^s_{t+i})}{U'(C^s_t)},\; \beta^i \frac{U'(C^n_{t+i})}{U'(C^n_t)}\right\},$

and for stocks it is

$\beta^i \frac{U'(C^s_{t+i})}{U'(C^s_t)},$

where $C^s_t$ and $C^n_t$ are the consumption of the shareholder and the nonshareholder, respectively. It is clear that the former is larger and smoother than the latter because borrowing constraints put a lower bound on the discount factor for bonds, but not for stocks. Therefore, in Guo's model the bond return is low and smooth while the stock return is high and volatile, as observed in the data. Intuitively, bonds are desirable, and are priced at a premium, because they can be used to diversify income shocks; stocks are not desirable because they cannot. The precautionary saving motive thus lowers only the risk-free rate, not the stock return. This echoes Weil's (1989) argument that the equity premium puzzle is really a risk-free rate puzzle.
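The two discount factors above can be compared in a small simulation. With CRRA utility, $U'(C) = C^{-\gamma}$, each agent's intertemporal marginal rate of substitution over one period is $\beta (C_{t+1}/C_t)^{-\gamma}$, and the bond discount factor is the state-by-state maximum of the two. The consumption processes below are illustrative assumptions, not Guo's calibration.

```python
# Sketch of the two stochastic discount factors above. The shareholder's IMRS
# prices stocks; bonds are priced by whichever agent is unconstrained, i.e.
# the elementwise maximum of the two agents' IMRS. Consumption processes are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
beta, gamma = 0.98, 3.0
N = 100_000

# Shareholder consumption growth is assumed more volatile than the
# nonshareholder's, since only the shareholder bears dividend risk.
gs = rng.normal(0.02, 0.10, N)               # shareholder log consumption growth
gn = rng.normal(0.02, 0.03, N)               # nonshareholder log consumption growth

imrs_s = beta * np.exp(-gamma * gs)          # shareholder IMRS: prices stocks
imrs_n = beta * np.exp(-gamma * gn)          # nonshareholder IMRS
sdf_bond = np.maximum(imrs_s, imrs_n)        # prices bonds

# Implied (shadow) gross risk-free rates.
rf_s, rf_n = 1.0 / imrs_s.mean(), 1.0 / imrs_n.mean()
rf_bond = 1.0 / sdf_bond.mean()

# The bond discount factor is never below the stock discount factor, so the
# bond rate lies below both shadow rates, as the text claims.
assert (sdf_bond >= imrs_s).all() and (sdf_bond >= imrs_n).all()
assert rf_bond <= min(rf_s, rf_n)
```

The maximum operation is what puts a lower bound on the bond discount factor: in any state where the shareholder's constraint binds, the nonshareholder's higher valuation prices the bond, pushing the risk-free rate down without affecting the pricing of stocks.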
To see this, note that the equity premium in Guo (2000) is equal to $\gamma \sigma_{r,s} + r_f^s - \min\{r_f^s, r_f^n\}$, where $\gamma$ is the relative risk aversion coefficient, $\sigma_{r,s}$ is the covariance between the shareholder's consumption and the stock return, and $r_f^s$ ($r_f^n$) is the shadow risk-free rate of the shareholder (nonshareholder). The equity premium is larger in Guo's model than in the representative agent model for two reasons. First, there is an extra nonnegative term, $r_f^s - \min\{r_f^s, r_f^n\}$, reflecting the fact that bonds can, and stocks cannot, be used to hedge income shocks. This term can be interpreted as a liquidity premium. Second, the covariance between the shareholder's consumption and the stock return is larger than the covariance between aggregate consumption and the stock return.

CONCLUSION

Recent Survey of Consumer Finances data show that stocks are highly concentrated in the hands of a few wealthy people. In this paper, I used an overlapping-generations model to help explain this limited stock market participation and discussed its effect on asset prices. Other implications, such as the effect on business cycles, have not yet been fully explored. Future research along these lines should improve our understanding of the economy.

REFERENCES

Allen, Franklin and Gale, Douglas. "Limited Market Participation and Volatility of Asset Prices." American Economic Review, September 1994, 84, pp. 933-55.

Becker, Robert. "On the Long-Run Steady State in a Simple Dynamic Model of Equilibrium with Heterogeneous Households." The Quarterly Journal of Economics, 1980, 95, pp. 375-82.

Bernanke, Ben; Gertler, Mark and Gilchrist, Simon. "The Financial Accelerator in a Quantitative Business Cycle Framework." Working Paper No. 6455, National Bureau of Economic Research, 1998.

Bertaut, Carol. "Stockholding Behavior of U.S. Households: Evidence from the 1983-1989 Survey of Consumer Finances." Review of Economics and Statistics, May 1998, 80(2), pp. 263-75.

Brav, Alon and Geczy, Christopher.
"An Empirical Resurrection of the Simple Consumption CAPM with Power Utility." Memo, University of Chicago, 1996.

Campbell, John. "A Variance Decomposition for Stock Returns." Economic Journal, March 1991, 101, pp. 157-79.

___________ and Cochrane, John. "By Force of Habit: A Consumption-Based Explanation of Aggregate Stock Market Behavior." Journal of Political Economy, April 1999, 107(2), pp. 205-51.

Constantinides, George. "Habit Formation: A Resolution of the Equity Premium Puzzle." Journal of Political Economy, June 1990, 98(3), pp. 519-43.

___________; Donaldson, John and Mehra, Rajnish. "Junior Can't Borrow: A New Perspective on the Equity Premium Puzzle." Working Paper No. 6617, National Bureau of Economic Research, 2000.

Dynan, Karen; Skinner, Jonathan and Zeldes, Stephen. "Do the Rich Save More?" Working Paper No. 7906, National Bureau of Economic Research, 2000.

Fama, Eugene and French, Kenneth. "Permanent and Temporary Components of Stock Prices." Journal of Political Economy, April 1988, 96(2), pp. 246-73.

Fratantoni, Michael. "Homeownership and Investment in Risky Assets." Journal of Urban Economics, July 1998, 44(1), pp. 27-42.

Galor, Oded and Zeira, Joseph. "Income Distribution and Macroeconomics." Review of Economic Studies, 1993, 60(1), pp. 35-52.

Guiso, Luigi; Jappelli, Tullio and Terlizzese, Daniele. "Income Risk, Borrowing Constraints and Portfolio Choice." American Economic Review, March 1996, 86(1), pp. 158-72.

Guo, Hui. "Business Conditions and Asset Prices in a Dynamic Economy." Working Paper 2000-31A, Federal Reserve Bank of St. Louis, 2000.

Heaton, John and Lucas, Deborah. "Portfolio Choice and Asset Prices: The Importance of Entrepreneurial Risk." Journal of Finance, June 2000, 55(3), pp. 1163-98.

___________ and ___________. "Stock Prices and Fundamentals." NBER Macroeconomics Annual, 1999, pp. 213-42.

Inhaber, Herbert and Carroll, Sidney. How Rich is Too Rich? Income and Wealth in America.
New York: Praeger Publishers, 1992.

Kocherlakota, Narayana. "The Equity Premium: It's Still a Puzzle." Journal of Economic Literature, March 1996, 34(1), pp. 42-71.

Kotlikoff, Laurence and Summers, Lawrence. "The Role of Intergenerational Transfers in Aggregate Capital Accumulation." Journal of Political Economy, August 1981, 89(4), pp. 706-32.

Ludvigson, Sydney and Steindel, Charles. "How Important is the Stock Market Effect on Consumption?" Federal Reserve Bank of New York Economic Policy Review, July 1999, 5(2), pp. 29-51.

Mankiw, Gregory and Zeldes, Stephen. "The Consumption of Stockholders and Nonstockholders." Journal of Financial Economics, March 1991, 29(1), pp. 97-112.

Mehra, Rajnish and Prescott, Edward. "The Equity Premium: A Puzzle." Journal of Monetary Economics, March 1985, 15(2), pp. 145-61.

Polkovnichenko, Valery. "Heterogeneous Labor Income and Preferences: Implications for Stock Market Participation." Working Paper, University of Minnesota, 2000.

Samuelson, Paul. "The Long-Term Case for Equities." Journal of Portfolio Management, October 1994, 21(1), pp. 15-24.

Shiller, Robert. "Do Stock Prices Move Too Much to Be Justified by Subsequent Changes in Dividends?" American Economic Review, June 1981, 71(3), pp. 421-36.

Schwert, William. "Why Does Stock Market Volatility Change Over Time?" Journal of Finance, 1989, 44(5), pp. 1115-53.

Vissing-Jorgensen, Annette. "Limited Stock Market Participation." Working Paper, MIT, 1998a.

Heaton, John and Lucas, Deborah. "Evaluating the Effects of Incomplete Markets on Risk Sharing and Asset Pricing." Journal of Political Economy, June 1996, 104(3), pp. 443-87.

Vissing-Jorgensen, Annette. "An Empirical Investigation of the Effect of Non-Financial Income on Portfolio Choice." Working Paper, MIT, 1998b.

Holtz-Eakin, Douglas; Joulfaian, David and Rosen, Harvey. "Entrepreneurial Decisions and Liquidity Constraints." Rand Journal of Economics, July 1994, 25(2), pp. 334-47.

Weil, Philippe.
"The Equity Premium Puzzle and the Risk-Free Rate Puzzle." Journal of Monetary Economics, November 1989, 24(3), pp. 401-21.

Wolff, Edward. "Recent Trends in Wealth Ownership, 1983-1998." Working Paper No. 300, Jerome Levy Economics Institute, 2000.

___________. "Recent Trends in the Size Distribution of Household Wealth." Journal of Economic Perspectives, July 1998, 12(3), pp. 131-50.

___________. Top Heavy: A Study of the Increasing Inequality of Wealth in America. New York: The Twentieth Century Fund Press, 1995.

Yaron, Amir and Zhang, Harold. "Fixed Costs and Asset Market Participation." Revista de Analisis Economico, 2000, 15, pp. 89-109.