
ECON FOCUS | FOURTH QUARTER 2015 | FEDERAL RESERVE BANK OF RICHMOND
VOLUME 19, NUMBER 4

DEALING WITH DISASTERS
What steps should we take to avert catastrophe, whether it's a threat to a city or the entire planet?
Also in this issue: Putting a Price on Traffic / Liquidity Requirements Through History / Interview with Eric Leeper

COVER STORY
Dealing with Disasters: From hurricanes to asteroids, how should we determine what steps to take to avert catastrophe?

FEATURES
Getting Unstuck: Washington, D.C., is notorious for congestion. Can smarter pricing provide a way out of clogged highways, packed parking, and overburdened mass transit?
Goodbye, Globalization? Why trade growth has slowed down — and what it might mean for the global economy

DEPARTMENTS
President's Message / The Fed-Bank Relationship Under Scrutiny
Upfront / Regional News at a Glance
Policy Update / Equity Crowdfunding: A Piece of the Action
Federal Reserve / Liquidity Requirements and the Lender of Last Resort
Jargon Alert / Market Power
Research Spotlight / Procuring Innovation
Interview / Eric Leeper
The Profession / Scrambling for Economists: The Ph.D. Job Search
Economic History / Conflagration in Baltimore
Around the Fed / Long Hours and High Turnover — For What?
Book Review / Unequal Gains: American Growth and Inequality Since 1700
District Digest / Predicting Economic Activity through Richmond Fed Surveys
Opinion / The Importance of Researcher Independence

Econ Focus is the economics magazine of the Federal Reserve Bank of Richmond. It covers economic issues affecting the Fifth Federal Reserve District and the nation and is published on a quarterly basis by the Bank's Research Department. The Fifth District consists of the District of Columbia, Maryland, North Carolina, South Carolina, Virginia, and most of West Virginia.

DIRECTOR OF RESEARCH: Kartik Athreya
EDITORIAL ADVISER: Aaron Steelman
EDITOR: Renee Haltom
SENIOR EDITOR: David A. Price
MANAGING EDITOR/DESIGN LEAD: Kathy Constant
STAFF WRITERS: Helen Fessenden, Jessie Romero, Tim Sablik
EDITORIAL ASSOCIATE: Lisa Kenney
CONTRIBUTORS: Charles Gerena, Karl Rhodes, Michael Stanley, Sonya Ravindranath Waddell
DESIGN: Janin/Cliff Design, Inc.

Published quarterly by the Federal Reserve Bank of Richmond, P.O. Box 27622, Richmond, VA 23261
www.richmondfed.org
www.twitter.com/RichFedResearch
Subscriptions and additional copies: Available free of charge through our website at www.richmondfed.org/publications or by calling Research Publications at (800) 322-0565.
Reprints: Text may be reprinted with the disclaimer below. Permission from the editor is required before reprinting photos, charts, and tables. Credit Econ Focus and send the editor a copy of the publication in which the reprinted material appears.
The views expressed in Econ Focus are those of the contributors and not necessarily those of the Federal Reserve Bank of Richmond or the Federal Reserve System.
ISSN 2327-0241 (Print)
ISSN 2327-025x (Online)

PRESIDENT'S MESSAGE
The Fed-Bank Relationship Under Scrutiny

Last fall, as Congress was trying to find a way to pay for a comprehensive transportation bill, it did something unusual: It looked to the Fed to close the financing gap.
Lawmakers elected to transfer about $19 billion from the Fed's capital surplus account as well as reduce the dividend that the Fed pays to member banks, redirecting the money to the Treasury Department. Altogether, these changes will amount to $36 billion over five years.

This action set two new precedents: It mandated the first-ever cap on the size of the surplus account, requiring that any funds in excess of $10 billion be transferred to Treasury and used for the transportation bill. It also was the first congressional change to the formula the Fed has applied to its dividend payments to member banks since its founding in 1913.

Under the traditional framework, member banks had to buy stock in their regional Reserve Bank equal to 3 percent of their capital and surplus (the "paid in" amount), while another 3 percent was "on call." Since this paid-in stock wasn't generating returns for member banks, the Fed paid an annual dividend of 6 percent. The new law, however, cuts the dividend for large banks from 6 percent to the annual yield of the 10-year Treasury note, which presently is below 2 percent.
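To see what the new formula means in dollar terms, here is a minimal Python sketch using the 3 percent paid-in subscription and the roughly 2 percent Treasury yield cited above; the bank's capital and surplus figure is hypothetical, chosen only for illustration.

capital_and_surplus = 10_000_000_000          # hypothetical large member bank
paid_in_stock = 0.03 * capital_and_surplus    # 3 percent "paid in" stock subscription

old_dividend = 0.06 * paid_in_stock           # traditional 6 percent annual dividend
ten_year_yield = 0.02                         # assumed 10-year Treasury yield, roughly 2 percent
new_dividend = ten_year_yield * paid_in_stock # Treasury-linked dividend for large banks under the new law

print(f"Paid-in stock:    ${paid_in_stock:,.0f}")
print(f"Dividend at 6%:   ${old_dividend:,.0f} per year")
print(f"Dividend at ~2%:  ${new_dividend:,.0f} per year")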
When news of these changes broke, senior Fed officials rightly pointed out that the changes risked blurring the line between fiscal and monetary policy. Moreover, many observers have noted that the maneuvers were deceptive on an accounting level since they provide no net new revenue to the Treasury. Beyond these issues, tinkering with the Fed's capital structure threatens to unravel the hybrid public-private governance framework that is so crucial to monetary policy independence.

To understand why, we need to look back to how banking worked before the Fed was established in 1913, when banks formed clearinghouses in major cities to clear and settle payments. These clearinghouses served a public-private purpose: They managed the supply of currency and reserves in response to fluctuating needs, but they were owned and overseen by member banks, usually through an elected board of directors. They operated with a fair degree of independence, with member banks working jointly to ensure the model worked for all parties.

From the outset, the Fed-bank relationship was based on a similar hybrid model. The Fed's governance structure is partly public in that all members of the Board of Governors are appointed by the U.S. president, and three members of each Reserve Bank's nine-person board of directors are appointed by the Board. Moreover, Reserve Bank presidents must be approved by the Board after being selected by the local board of directors. But the governance structure is also partly private: Six out of the nine directors are elected by member banks, with three representing banks ("Class A") and three representing the public ("Class B"). All directors oversee many important Fed operational functions, but they also face restrictions meant to prevent conflicts of interest. For example, directors have no role in the oversight of bank supervisory or regulatory decisions, and Class A directors representing banks no longer play a role in the appointment of Reserve Bank presidents, a change enacted in the 2010 Dodd-Frank reform.

This hybrid governance model has come to play an important role in the independence of monetary policy. The nature of our political system — with a high-frequency election cycle — makes it natural for elected officials to weight short-run gains more heavily than long-run costs. This can lead to a preference for monetary policy actions that boost employment over those that contain inflation. Yet history shows that once higher inflation has set in, it is difficult and costly to bring it down. Political independence allows monetary policy to place greater weight on the long-term benefits of low and stable inflation.

So what does it mean now that the larger member banks will get a reduced dividend? Some banks have already broached the possibility of discontinuing their Fed memberships. Meanwhile, a proposal is circulating in Congress that would reduce the "paid in" requirement. Whether banks leave or stay and pay in less capital, this change could lead some to argue that banks' role in Fed governance should be reduced or eliminated. This would dovetail with proposals to reduce the private aspects of the Fed's public-private governance structure — for example, that Reserve Bank leadership be appointed by the U.S. president.

This would be a grave mistake, in my view. The current Fed governance structure may not be ideal. But until there is a proposal that preserves the monetary policy independence that is so vital to the Fed's mandate, we should stick to what we have. EF

JEFFREY M. LACKER
PRESIDENT, FEDERAL RESERVE BANK OF RICHMOND

UPFRONT
Regional News at a Glance
BY LISA KENNEY

MARYLAND — In February, the $200 million Maryland Proton Treatment Center in Baltimore began receiving cancer patients. The facility — the first of its kind in the state — specializes in a type of radiation therapy that uses a thin beam of protons to treat tumors without damaging surrounding tissue. The center is one of only 23 currently operating proton therapy centers in the United States. It is expected to be at full operating capacity in 2017, employing more than 170 workers and treating 2,000 patients annually.

NORTH CAROLINA — An expansion of the state sales tax took effect on March 1, and lawmakers say it is an attempt to reduce reliance on income tax revenue in an increasingly service-oriented economy. The expanded sales tax requires businesses that already tax products to now tax repair, maintenance, and installation services. For example, an auto body shop that sells oil filters will now also tax the labor of the oil change; however, a garage door repair by a company that does not sell garage doors (or other retail products) would not incur the new tax.

SOUTH CAROLINA — The Port of Charleston has leased rooftop space on two of its cargo terminals to SolBright Renewable Energy for solar panels that will generate 3.7 megawatts of electricity. The project, to be completed in the summer, will be the largest rooftop installation of solar panels in South Carolina. The 25-year lease will generate a total of $1.85 million for the State Ports Authority. The panels will help the SPA reduce its usage of conventional energy, and SolBright will sell power back to South Carolina Electric & Gas during peak demand times.

VIRGINIA — In early March, Gov. Terry McAuliffe signed a law permitting fantasy sports websites to operate in the state if they follow certain guidelines. Several states have taken measures to block these sites, saying they violate state gambling laws. The Virginia law contains consumer protections — like age verification and separating player funds from company operational funds — and requires the websites to register with the state and pay a $50,000 licensing fee.
The law takes effect in July and applies to any fantasy sports game, daily or season-long, that requires an entry fee.

WASHINGTON, D.C. — Prudential Financial invested $1.7 million to prevent polluted stormwater from running off D.C. streets into the Anacostia and Potomac rivers. Prudential is partnering with the Nature Conservancy and investment firm Encourage Capital to construct permeable pavement, rain gardens, and other green projects. These projects are expected to qualify for D.C.'s Stormwater Retention Credit Trading Program, which allows property owners to generate credits for voluntary green infrastructure and then trade those credits on the open market to other companies that use them to meet regulatory requirements for retaining stormwater.

WEST VIRGINIA — The state legislature in February overrode two of Gov. Earl Ray Tomblin's vetoes, paving the way for West Virginia to become a right-to-work state in July and also repealing the state's prevailing wage law effective in mid-May. West Virginia becomes the 26th right-to-work state, meaning workers in unionized workplaces can opt out of paying union dues while working under a union-negotiated contract. The state's prevailing wage law sets a minimum wage for workers on state-funded construction projects.

POLICY UPDATE
A Piece of the Action
BY CHARLES GERENA

Crowdfunding — financing a project with lots of small donations — dates back centuries. Part of the money for the Statue of Liberty's pedestal came from more than 100,000 donors who responded to a newspaper campaign. Today, the global reach of the Internet has taken crowdfunding to a new level, generating more than $16 billion in 2014 and an estimated $34 billion in 2015 to finance countless gadgets, creative works, and even the making of a bowl of potato salad.

Now this practice has entered the world of corporate finance. Under Title III of the Jumpstart Our Business Startups (JOBS) Act of 2012 and rules issued by the Securities and Exchange Commission (SEC) that go into effect in May 2016, entrepreneurs can pursue "equity crowdfunding." Will this new source of startup money help spur innovation and generate lots of new jobs as lawmakers intended?

Title III enables a company to raise capital from any investor without registering with the SEC, which can be cost-prohibitive. Still, the process is governed by a relatively extensive set of filing and disclosure requirements in order to limit the risks for so-called "unaccredited" investors, who do not meet the SEC's income and wealth requirements to participate in large-scale transactions.

For example, there are limits on how much equity can be sold by a company over a 12-month period ($1 million) and how much equity can be purchased by an individual (for those whose annual income or net worth is less than $100,000, the greater of $2,000 or 5 percent of income or wealth; 10 percent of income or wealth for everyone else). Also, issuers must disclose their finances and file an annual report with the SEC, while intermediaries that facilitate crowdfunding transactions have to follow their own set of requirements.
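As a rough illustration of the purchase limits just described, the short Python sketch below applies the article's summary of the rule; the assumption that the percentage applies to the lesser of income and net worth, and the example investors, are illustrative simplifications rather than the SEC's full rule text.

def purchase_limit(annual_income, net_worth):
    # Assumption: the percentage is applied to the lesser of income and net worth.
    base = min(annual_income, net_worth)
    if annual_income < 100_000 or net_worth < 100_000:
        return max(2_000, 0.05 * base)   # greater of $2,000 or 5 percent
    return 0.10 * base                   # 10 percent for everyone else

print(purchase_limit(50_000, 80_000))    # 2500.0  -> 5 percent of $50,000
print(purchase_limit(150_000, 200_000))  # 15000.0 -> 10 percent of $150,000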
The desire to protect unaccredited investors is why it has taken four years to implement Title III, according to Christian Catalini, an assistant professor of technological innovation, entrepreneurship, and strategic management at the Massachusetts Institute of Technology. In contrast, Title II of the JOBS Act has allowed companies to solicit money from accredited investors without registering since 2013.

Another difference between Title II and the new Title III is that the former allows for the use of a syndicate model to facilitate equity crowdfunding. In this model, accredited investors entrust their money with a lead investor, typically an experienced venture capitalist or "angel" financier who sorts through potential deals and brings the best ones to the group in exchange for a share of the upside.

Such curation of potential deals helps to address a problem that arises with equity crowdfunding. "When you're going online and trying to invest in startups, the key issue is asymmetric information," says Catalini. "It's very hard to evaluate some of the companies — it's not like an entrepreneur comes with a rating." But small investors have little incentive to perform the costly due diligence on a potential deal that a venture capitalist would do. "It's not worth your time."

Because Title III doesn't permit syndication, and in view of the accounting requirements and limitations on the amount of capital that can be raised, Catalini doesn't foresee equity crowdfunding under Title III becoming a major source of capital for the next Facebook — a high-growth tech startup with the potential to create a large number of jobs in the future. Rather, he sees equity crowdfunding being used by small businesses like restaurants or real estate developers.

Many small businesses may not reap the benefits of equity crowdfunding, however. Some observers believe that it could follow the evolutionary path of crowdfunding in general, a path that has taken the financing approach from the province of dreamers into the boardrooms of corporate America.

When crowdfunding first emerged on the Internet in the 2000s, any person who needed money to realize an idea could appeal directly to those who believe in the same idea and value non-pecuniary benefits, such as early access to a product or unique rewards for their donations. Musicians have produced CDs without a record label and authors have released books without a publisher.

Today, larger, more established companies have turned to crowdfunding as a way to gauge consumer demand, obtain feedback, or generate publicity for a new product. Crowdfunding isn't just for the "little guy" anymore. Indeed, many of the most successful crowdfunded projects are associated with celebrities or others who are widely known and have built a fan base. In Catalini's view, crowdfunding could transform equity markets "into a market for reputation."

In short, those who expect the JOBS Act to have a far-reaching impact may have to adjust their expectations. Research has shown that only a small fraction of entrepreneurial, innovative projects account for the majority of funds raised through crowdfunding. EF

FEDERAL RESERVE
Liquidity Requirements and the Lender of Last Resort
BY TIM SABLIK
The financial crisis of 2007-2008 was just the latest chapter in a long debate over how to minimize the risk of bank runs and other liquidity crunches

There's a scene from It's a Wonderful Life in which George Bailey is en route to his honeymoon when he sees a crowd gathered outside his family business, the Bailey Brothers' Building and Loan.
He finds that the people are depositors looking to pull their money out because they fear that the Building and Loan might fail before they get the chance. His bank is in the midst of a run. Bailey tries, unsuccessfully, to explain to the members of the crowd that their deposits aren't all sitting in a vault at the bank — they have been loaned out to other individuals and businesses in town. If they are just patient, they will get their money back in time. In financial terms, he's telling them that the Building and Loan is solvent but temporarily illiquid. The crowd is not convinced, however, and Bailey ends up using the money he had saved for his honeymoon to supplement the Building and Loan's cash holdings and meet depositor demand.

Photo: An early 20th century bank run in progress at 19th Ward Bank in New York City. PHOTOGRAPHY: GEORGE GRANTHAM BAIN COLLECTION, LIBRARY OF CONGRESS, PRINTS & PHOTOGRAPHS DIVISION, LC-USZ6-1530

It's a scene that would have been familiar to many moviegoers when the film debuted in 1946. Bank runs were a regular occurrence in the United States during the 19th and early 20th centuries. But the scene is also reminiscent of what happened during the financial crisis of 2007-2008. In that crisis, though, it wasn't ordinary depositors who were in line. Creditors that had supplied banks and other institutions with short-term funding suddenly questioned the health of the institutions they were lending to. Just as depositors sought to pull their money out of banks during the panics of previous centuries, creditors pulled their funding out of the market, leaving banks and institutions suddenly short on the cash needed to fund their operations.

As the movie hints at, the liquidity risk that banks face arises, at least to some extent, from the services they provide. At their core, banks serve as intermediaries between savers and borrowers. Banks take on short-maturity, liquid liabilities like deposits to make loans, which have a longer maturity and are less liquid. This maturity and liquidity transformation allows banks to take advantage of the interest rate spread between their short-term liabilities and their long-term assets to earn a profit. But it means banks cannot quickly convert their assets into something liquid like cash to meet a sudden increase in demand on their liability side. Banks typically hold some cash in reserve in order to meet small fluctuations in demand, but not enough to fulfill all obligations at once.

Should banks hold more liquid assets in reserve? If policymakers were willing to do away with fractional reserve banking entirely, banks could be required to hold enough cash to fully back all their deposits and other liabilities. But while the Swiss government recently proposed a referendum on implementing such full-reserve banking for its institutions, most banking scholars think that it would do more harm than good. "Doing so would be both unprofitable and socially undesirable," former Fed Chairman Ben Bernanke said in a 2008 speech. "It would be unprofitable because cash pays a lower return than other investments. And it would be socially undesirable, because an excessive preference for liquid assets reduces society's ability to fund longer-term investments that carry a high return but cannot be liquidated quickly."
The eliminate liquidity risk, is there an Basel III Accord included two assets reduces society’s ability to fund longer-term investments new liquidity requirements for effective way to manage it? that carry a high return but canbanks. The first, the Liquidity not be liquidated quickly.” Coverage Ratio (LCR), requires But if it is not desirable to entirely eliminate liquidity banking institutions to maintain a buffer of highly liquid risk, is there an effective way to manage it? It’s a question assets equal to some portion of their total assets. The secthat policymakers and economists have wrestled with for ond, the Net Stable Funding Ratio (NSFR), requires that decades. institutions hold some amount of liabilities that can reliably  be converted into liquidity during a crisis, reducing their Managing Liquidity Risk exposure to liquidity risk. Why would banks not voluntarily hold enough liquidity to Requiring banks to hold certain liquid assets is not a protect themselves against the risk of runs? As Bernanke new idea. In fact, the LCR closely resembles bank reserve and others have noted, holding liquid assets is less profrequirements, a tool that was used — unsuccessfully — to itable, so banks have an incentive to hold only as many try to prevent banking panics before the creation of the Fed. as they think they may need. But some economists have also suggested that the financial system as a whole may be Lessons from the Past too illiquid as a result of externalities. Negative externaliAfter the Panic of 1837, three states — Virginia, Georgia, ties occur when the economic costs of a decision are not and New York — experimented with using reserve requireentirely borne by the decision-maker. Some banks may opt ments to prevent liquidity crises. At the time, bank notes to maintain inadequate liquidity, gambling that liquidity were typically redeemed for gold or silver (specie), so these will be available from other institutions when needed — a laws required banks to hold specie equal to some proportion gamble that creates risks for the system. of the currency they had in circulation. This practice might function perfectly well in normal According to a 2013 working paper by Mark Carlson of times, but during a crisis, illiquid firms place additional the Federal Reserve Board of Governors, reserve requirepressure on the more liquid firms. Those firms, which also ments were slow to catch on: Only 10 states had adopted must meet their own demands, may be unable or unwilling to such laws by 1860. Reserve requirements became more prevlend their reserves to other firms. As the market for liquidalent following the National Bank Act of 1863. All national ity breaks down, the financial system as a whole suddenly banks were required to hold reserves equal to a percentage becomes much less liquid than it initially appeared under of their deposits, which varied based on their location. noncrisis conditions. “Country banks” — those outside of major cities — had One solution to this problem is to have a central bank the lowest requirements and could keep a portion of their that acts as a “lender of last resort” (a phrase associated with reserves as deposits at banks in larger reserve cities. Those 19th century British banking theorists Henry Thornton and reserve city banks in turn were allowed to hold a portion of Walter Bagehot, though neither used those exact words) their reserves at banks in so-called “central reserve” cities, when the private market for liquidity fails. 
Requiring banks to hold certain liquid assets is not a new idea. In fact, the LCR closely resembles bank reserve requirements, a tool that was used — unsuccessfully — to try to prevent banking panics before the creation of the Fed.

Lessons from the Past
After the Panic of 1837, three states — Virginia, Georgia, and New York — experimented with using reserve requirements to prevent liquidity crises. At the time, bank notes were typically redeemed for gold or silver (specie), so these laws required banks to hold specie equal to some proportion of the currency they had in circulation.

According to a 2013 working paper by Mark Carlson of the Federal Reserve Board of Governors, reserve requirements were slow to catch on: Only 10 states had adopted such laws by 1860. Reserve requirements became more prevalent following the National Bank Act of 1863. All national banks were required to hold reserves equal to a percentage of their deposits, which varied based on their location. "Country banks" — those outside of major cities — had the lowest requirements and could keep a portion of their reserves as deposits at banks in larger reserve cities. Those reserve city banks in turn were allowed to hold a portion of their reserves at banks in so-called "central reserve" cities, initially just New York and later Chicago and St. Louis. Banks in reserve cities had higher reserve requirements than country banks, to account for the fact that they would face withdrawal demands from country banks during a widespread crisis. In practice, however, allowing interbank deposits to count as reserves created an unstable pyramid structure of liquidity that collapsed in times of crisis.

"A bank could deposit cash in another bank and count that deposit in its reserve while the second bank counted the cash in its reserve," Carlson wrote. "The second bank could then deposit the cash in a third bank and compound the process. A withdrawal of reserves by the bottom of the pyramid during a panic could thus result in a rapid depletion of reserves within the banking system."
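A toy calculation makes the double counting Carlson describes concrete; the dollar amounts are hypothetical.

cash = 100.0                      # specie actually held at the top of the chain (hypothetical)
country_bank_reserves = cash      # country bank counts its deposit at a reserve city bank as reserves
reserve_city_reserves = cash      # reserve city bank counts the same funds, redeposited upstream
central_reserve_reserves = cash   # central reserve city bank holds the actual cash

reported_reserves = country_bank_reserves + reserve_city_reserves + central_reserve_reserves
print(reported_reserves)  # 300.0 -- reserves as reported across the system
print(cash)               # 100.0 -- cash actually available when the bottom of the pyramid withdraws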
Because banks couldn't be sure that they could obtain liquidity from the system during a crisis, they tended to hoard liquid assets during a crisis rather than lending them out, making the problem worse. Clearinghouses in some cities like New York attempted to address this problem. They were groups of banks that banded together to fill the role of a lender of last resort for each other. But their tools were limited and, as Ellis Tallman of the Cleveland Fed and Jon Moen of the University of Mississippi found in a 2013 paper, they were not always successful at preventing a crisis.

Perversely, reserve requirements may have also contributed to externality problems because they did not apply to every bank in the system. "The national banks were required to hold all this cash, but many state banks were not," says Carlson. "So when a crisis hit, newspaper reports claim that the state banks would turn to the national banks for help." This additional pressure that would emerge once a crisis hit meant that the system was even less liquid than it appeared.

Another problem regulators faced was persuading banks to actually use their required reserves to meet depositor demand during a crisis. In order to ensure that banks complied with the reserve requirements, the National Bank Act allowed the comptroller of the currency to punish delinquent banks by prohibiting them from making loans and paying dividends until they corrected their deficiency. In cases of extended delinquency, the comptroller could place banks into receivership.

But would a comptroller punish banks for falling below their reserve requirements if they were using those reserves for their intended purpose to stave off a crisis? Carlson found that the answer to this question was not entirely clear. The comptroller had flexibility to decide when to enforce punishments for violating reserve requirements, allowing him to suspend penalties during a crisis. But in practice, banks were reluctant to test that possibility — to the point that banks suspended operations even in instances where they had sufficient reserves to continue operating. Rather than ensure sufficient liquidity was available during a panic, reserve requirements in the national banking era seem to have largely contributed to its scarcity.

Toward a Lender of Last Resort
The Panic of 1907 was the final nail in the coffin for relying solely on reserve requirements. By themselves, they had failed to stop liquidity crises from happening, and Congress created the National Monetary Commission to study the defects of the U.S. financial system and recommend reforms. In its 1912 report, the Commission identified 17 flaws. First on the list was the lack of a central entity that could provide liquidity to the whole system. "We have no provision for the concentration of the cash reserves of the banks and for their mobilization and use whenever needed in times of trouble," the Commission wrote. "Experience has shown that the scattered cash reserves of our banks are inadequate for purposes of assistance or defense at such times."

Congress created such a lender of last resort in 1913 with the Federal Reserve Act. With the Fed providing a central reservoir of liquidity for the entire system, it seemed duplicative for banks to maintain large buffers of their own liquid assets. Reserve requirements were gradually lowered for banks that were part of the Federal Reserve System. By the 1930s the Fed no longer viewed reserve requirements as an important liquidity tool, according to a 1993 article by Joshua Feinman of the Federal Reserve Board of Governors. Rather, they became a means of influencing the supply of bank credit in the system — an early tool of monetary policy. The adoption of deposit insurance during the Great Depression also reduced the likelihood of bank runs by depositors, further reducing the need for banks to hold as much liquidity. By the 1940s and 1950s, liquidity crises seemed to have become a thing of the past.

The international community also moved away from reliance on liquidity requirements. In the first Basel Committee meeting in 1975, then-Chairman George Blunden argued that developing rules to ensure bank liquidity should be one of the group's primary objectives — but that goal ultimately took a back seat to that of ensuring bank solvency. The first two Basel Accords introduced international standards for capital requirements but included no liquidity requirements. Capital is the difference between a firm's assets and liabilities, so requiring a firm to hold more capital would make it more likely to remain solvent in times of stress. "There was a view that if you had strong enough capital requirements, so that bank solvency was reasonably well assured, then there would be no liquidity problems at all," says Charles Goodhart, an economist and banking scholar at the London School of Economics who wrote a 2011 book on the history of the Basel Committee. "A bank that was solvent could presumably always raise funds in wholesale markets. So the members of the Basel Committee thought that the capital requirements were in some ways a substitute for the need for liquidity."

This thinking seemed sound for a time. Over the next several decades, banks and other financial firms came to rely almost entirely on liquidity obtained from the market rather than on their own holdings of liquid assets, says Goodhart, and there were no major liquidity crises — until 2007-2008. That crisis revealed the danger of relying too heavily on outside funding sources to provide liquidity. Just as the pyramid of bank reserves collapsed during panics in the 19th century, short-term funding markets dried up in 2007-2008 as soon as creditors began questioning the solvency of the firms they were lending to. The Basel capital requirements were intended to prevent those questions from arising in the first place, but during the financial crisis, they turned out not to be the ironclad guarantee that regulators had envisioned.

"Many of the banks, indeed perhaps most of the banks, that failed were more than Basel II compliant," says Goodhart. "When there was a sufficient concern about the solvency of banks, the wholesale money market simply dried up. So funding liquidity collapsed just at the time that people were desperate to get liquidity." With few liquid assets of their own, financial firms turned en masse to the lender of last resort — the Fed — inviting the risk of moral hazard that regulators had hoped to avoid.

Everything Old is New Again
If the banking panics of the 19th and early 20th centuries revealed the pitfalls of relying solely on liquidity requirements to prevent crises, and if the financial crisis of 2007-2008 raised concerns about relying too heavily on a lender of last resort, could the two tools be combined into something greater than either alone? That was what policymakers hoped. Stephen Cecchetti of the Brandeis International Business School was the chief economist at the Bank for International Settlements in Basel, Switzerland, and worked on numerous aspects of the financial regulatory reform, including Basel III. He says that "there was a lot of conversation about what the role of the central bank should be with the LCR. Could we construct the LCR in such a way that the central bank is really a lender of last resort and not a lender of first resort?"

The challenge is finding the right balance. Having liquidity requirements that are too high comes with a cost. "If you make banks hold all cash, then they can't actually make loans," says Carlson. "Moreover, you don't really want banks to be self-insuring against the really big systemic shocks. At some point, the lender of last resort needs to step in and expand the pool of liquid assets."

Unfortunately, economic theory does not provide a lot of guidance for how to balance these two tensions. With liquidity requirements largely absent from regulatory discussions for decades, few economists had put much thought into what their optimal form might be. Since the release of the LCR, however, a few banking economists have proposed theoretical frameworks for thinking about the issue.
A 2016 working paper by Douglas Diamond and Anil Kashyap of the University of Chicago found that the optimal solution is a rule that "induces a bank to hold excess liquidity but allows access to it during a run." Under their framework, a lender of last resort would lend against liquid assets in a crisis and ensure that banks complied with their liquidity requirements by imposing a penalty for noncompliance on bank management. Diamond and Kashyap note that there is actually a precedent for this type of arrangement: the original Federal Reserve Act. Banks had reserve requirements, and those that violated the requirements were prohibited from paying dividends. The penalty ensured that bank managers would comply with the rules during normal times, but it was not so severe that it would deter banks from using their reserves during a crisis.

Liquidity requirements like the LCR can also aid a central bank by giving it "time to consider the best and most appropriate line of response during a crisis," says Goodhart. This may help minimize the moral hazard attached to the lender of last resort by providing more time to assess the solvency of individual firms. "I think of it as the 'Be Kind to Central Banks Ratio,'" says Goodhart.

At the same time, it's unclear whether these new liquidity requirements will exhibit some of the same shortcomings as the old ones. Will banks actually use their liquid assets during a crisis if it means violating their LCR? Will financial firms that are not subject to the new rules attempt to free ride on those that are, introducing hidden liquidity strains into the financial system? It will likely take another crisis to know for sure. EF

Readings
Carlson, Mark. "Lessons from the Historical Use of Reserve Requirements in the United States to Promote Bank Liquidity." International Journal of Central Banking, January 2015, vol. 11, no. 1, pp. 191-224.
Cecchetti, Stephen G. "The Road to Financial Stability: Capital Regulation, Liquidity Regulation, and Resolution." International Journal of Central Banking, June 2015, vol. 11, no. 3, pp. 127-139.
Diamond, Douglas W., and Anil K. Kashyap. "Liquidity Requirements, Liquidity Choice and Financial Stability." National Bureau of Economic Research Working Paper No. 22053, March 2016.
Feinman, Joshua N. "Reserve Requirements: History, Current Practice, and Potential Reform." Federal Reserve Bulletin, June 1993, pp. 569-589.
Goodhart, Charles. The Basel Committee on Banking Supervision: A History of the Early Years 1974-1997. New York: Cambridge University Press, 2011.
Tarullo, Daniel K. "Liquidity Regulation." Speech at the Clearing House 2014 Annual Conference, New York, Nov. 20, 2014.

Economic Brief publishes an online essay each month about a current economic issue.
April 2016: The Role of Option Value in College Decisions
May 2016: Do Net Interest Margins and Interest Rates Move Together?
To access the Economic Brief and other research publications, visit www.richmondfed.org/publications/research/
Pictured: Economic Brief EB16-05, May 2016, "Do Net Interest Margins and Interest Rates Move Together?" by Huberto M. Ennis, Helen Fessenden, and John R. Walter. From its summary: "Many market participants assume that, as the Federal Reserve tightens monetary policy, and market rates increase in response, banks will be better off because their net interest margins will also increase. As a way to understand the origins of this expectation, in this Economic Brief we look at the relationship between the federal funds rate and the average net interest margin for U.S. banks since the mid-1980s. We find that the relationship is not as clear-cut as one might suspect."

JARGON ALERT
Market Power
BY RENEE HALTOM

Market power is a firm's ability to raise prices above the marginal cost of production. In competitive markets, such behavior would drive customers to other firms. Thus, market power is characterized by a lack of competition.

Raising prices above the competitive level transfers wealth from consumers to producers — but this is not what primarily concerns economists. Rather, the problem is that it reduces economic efficiency because it results in too little of the good being produced. That is, firms that exercise market power prevent the good from arriving in the hands of individuals who value it as much as or more than it costs to produce it. In its place, society produces relatively more of goods that are valued less, and society is poorer as a result.

The most extreme case of market power is that of a monopoly, a single seller of a good or service. A firm need not be a monopoly to exhibit market power, however. An oligopoly — a market with a small group of sellers — may also be a source of market power. Market power can exhibit itself in ways other than higher prices.
The Justice Department's antitrust case against Microsoft in the late 1990s, for example, argued that the computer giant exercised market power by "bundling" its goods — namely, forcing the installation of its Internet browser on any computer that operated the Windows platform — to enhance the market share of its browser.

Cartels are one possible source of market power, though it is rare that firms can get away with colluding to keep prices high, both because cartels are illegal and because they are difficult to sustain due to the incentive to renege. Even within the OPEC oil cartel, member countries have diverging interests and reneging sometimes occurs. "Natural monopolies," another source of market power, occur when it is profitable for only one or a few firms to produce because of large upfront costs that prevent competitors from entering, as with public utilities. Finally, market power is perhaps most often the result of government policy itself, as with occupational licensing or patents.

The nation's first attempt to limit market power was the Sherman Act of 1890, followed by the 1914 Clayton Act that was more specific about the acts considered to be socially harmful. The latter law includes some types of price discrimination (when firms charge different prices to different consumers), bundling, and mergers that substantially reduce competition. The policymakers supporting these laws had the traditional notion of monopolies in mind but with little economic justification for how and why monopolies might harm social welfare.

The economics subfield of "industrial organization" emerged in part as a way to analyze how real-world markets depart from the assumption of perfect competition. What was previously perceived as harmful monopoly behavior often proved instead to be the result of departures from the assumptions of perfect competition — assumptions such as perfect information, low transactions costs, and low barriers to entry. This work led to a more nuanced understanding about where inefficiencies resulting from market power truly existed.

One thing this work proved was that such instances are not always obvious. Prices that would prevail under perfect competition are not observable. One method, called the Lerner Index, attempts to measure the difference between price and a firm's marginal cost. Marginal costs are difficult to measure, however, as are alternative indicators of market power such as demand elasticities, which measure consumers' responsiveness to changes in price.
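As a quick illustration of the Lerner Index mentioned above, the Python sketch below computes the markup of price over marginal cost as a share of price; the prices and costs are hypothetical.

def lerner_index(price, marginal_cost):
    # Zero means pricing at marginal cost; values closer to one indicate greater market power.
    return (price - marginal_cost) / price

print(lerner_index(10.0, 10.0))  # 0.0 -- competitive pricing
print(lerner_index(10.0, 6.0))   # 0.4 -- a substantial markup over marginal cost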
Moreover, market power doesn't always result in socially destructive behavior. Research in industrial organization has shown that bundling can enable innovation and output by allowing the sale of one good to subsidize production of another — as Microsoft's attorneys argued. And when competitors collaborate, it can lead to innovation, not necessarily collusion. Industry concentration doesn't always lead to higher profits, a symptom of market power, and can yield cost reductions. Overall, the influence of the economics profession — along with the increasing complexity of industry generally — has been to increase the extent to which antitrust cases focus on actual losses in social welfare rather than the mere existence of market power itself.

Assuming that socially destructive market power has occurred, it is not always straightforward to address it by, for example, capping prices. Economist Jean Tirole of the Toulouse School of Economics won the 2014 Nobel Prize in economics in part for his theoretical work on this question. In the 1980s, he and the late Jean-Jacques Laffont showed that antitrust policymakers can set optimal prices through a scheme that allows the firm to choose its own pricing solution. But perhaps most importantly, Tirole's work emphasized the importance of adapting the regulatory response to the industry or market in question — proving that there is no one-size-fits-all method for evaluating or addressing market power. EF

RESEARCH SPOTLIGHT
Procuring Innovation
BY DAVID A. PRICE

"Does the Technological Content of Government Demand Matter for Private R&D? Evidence from U.S. States." Viktor Slavtchev and Simon Wiederhold. American Economic Journal: Macroeconomics, April 2016, vol. 8, no. 2, pp. 45-84.

In late 1958, a startup company called Fairchild Semiconductor in Palo Alto, Calif., had a serious problem. It had contracted to produce transistors for the Minuteman missile program, which required transistors that were hundreds of times more reliable than the state of the art. But Fairchild's devices were randomly failing. Testing revealed that a force as gentle as a pencil tap could dislodge specks of metal that would cause an electrical short. The frantic efforts of Fairchild's engineers to solve the problem led to an invention early the next year: the planar process, the first commercially practical process for making integrated circuits.

Thus, although the integrated circuit wasn't the product of a federal research lab or a research grant, the federal government indirectly had a hand in it through its procurement spending. The electronics technologies Fairchild created as a subcontractor for the missile program became the foundation of the semiconductor chips that are ubiquitous today.

The case isn't an isolated one: Economic research has indicated that public procurement spending can induce research and development spending and stimulate innovation — adding to research and development, not just redirecting activity that would have taken place anyway. For example, a 2012 paper by economist Mirko Draca, now of the University of Warwick, found that the procurement spending of the Reagan administration's military buildup significantly boosted both patenting (which is often used as a measure of innovation) and research and development activity.

But what kind of public procurement has the greatest effect on research and development? Intuition — and history — might suggest that the answer is spending on high-technology goods and services. A recent article in American Economic Journal: Macroeconomics by two German economists, Viktor Slavtchev of the Halle Institute for Economic Research and Simon Wiederhold of the Ifo Institute, finds empirical support for this idea.

Slavtchev and Wiederhold created a dataset of company-sponsored private research and development expenditures in the United States at the state level for 1999-2009 as well as the "technological intensity" of federal procurement spending in each state during the same period. They included all federal prime contracts valued at more than $2,500. For information on research and development spending, they relied on the U.S. Survey of Industrial R&D, a National Science Foundation survey. To determine the technological intensity of procurement contracts, they exploited the fact that procurement information from the U.S. General Services Administration procurement database includes the industry classification of each contract in the form of NAICS (North American Industry Classification System) codes.

The authors test a theoretical model in which the technology intensity of public procurement has a positive relationship with private research and development. They test this model using regressions that evaluate relationships between the amount of company-funded research and development spending in a state, on one hand, and a number of variables they theorize to be relevant, on the other. In their main regression, these variables include the technology intensity of federal procurement within the state in the previous year (roughly speaking, the federal government's spending on high-tech industries in the state as a share of all its procurement in the state), the total amount of federal procurement in the state in the previous year, and the state's population the previous year.
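To make the setup concrete, here is a heavily simplified Python sketch of a regression of this general form, run on synthetic data; the variable names, scales, and coefficients are invented for illustration and are not taken from the Slavtchev-Wiederhold paper.

import numpy as np

rng = np.random.default_rng(0)
n = 200                                    # synthetic state-year observations
tech_share = rng.uniform(0, 1, n)          # share of federal procurement going to high-tech industries
total_procurement = rng.uniform(1, 50, n)  # total federal procurement, $ billions
population = rng.uniform(1, 40, n)         # state population, millions

# A made-up "true" relationship plus noise, purely for illustration.
private_rd = 0.5 + 2.0 * tech_share + 0.05 * total_procurement + 0.1 * population \
             + rng.normal(0, 0.5, n)

X = np.column_stack([np.ones(n), tech_share, total_procurement, population])
coefs, *_ = np.linalg.lstsq(X, private_rd, rcond=None)
print(coefs.round(2))  # estimated constant and coefficients on the three regressors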
Slavtchev and Wiederhold find that company-funded research and development has a positive and statistically significant relationship with the technology intensity of federal procurement. In particular, the researchers estimate that "each dollar that the government takes away from low-tech industries to spend it in high-tech industries relates to an increase in private R&D of about 21¢."

On the basis of further analysis, they conclude that the relationship is causal: Shifts between low-tech and high-tech in the government's shopping basket brought about the changes in research and development spending.

The authors acknowledge, however, that it is unclear whether a strategy of increasing private research and development through high-tech public procurement would be more efficient than other policies, such as direct subsidies or favorable tax treatment. Moreover, they note that to the extent the government skews its spending in favor of high-tech products and services for the sake of stimulating research and development, rather than looking only at its own needs in deciding what to buy, the government's cost-efficiency in providing public services would be hurt. In addition, they point out that there is a question of which industries such a strategy should target — and the government has a mixed record in picking winners. Consequently, they indicate, federal spending as a tool for promoting innovation could push research and development resources in the wrong direction. EF

DEALING WITH DISASTERS
From hurricanes to asteroids, how should we determine what steps to take to avert catastrophe?
BY TIM SABLIK

When Hurricane Hugo struck Charleston, S.C., in September 1989, it became the first natural disaster in the United States to cause more than $1 billion in insured losses. Today, after adjusting for inflation, it doesn't even make the top 10 costliest U.S. disasters, eight of which have occurred since 2000 alone. Indeed, disaster costs have been trending up worldwide over the last three decades (see chart). This may partly be explained by growth in coastal areas, which are at greater risk of damage from recurring natural disasters like severe storms and flooding.
Development of these areas is not necessarily a bad thing, as Stéphane Hallegatte, senior economist in the World Bank's Climate Change Group, explained in a 2011 paper. Coastal cities are popular tourist destinations and are natural hubs for industry and trade thanks to their access to waterways. As a result, greater development in those areas is to be expected as a country's GDP increases, despite the risks. "The challenge is not to reduce risk-taking at all costs," says Hallegatte. "It's about good risk management."

But are households, cities, countries, or the world as a whole doing enough to manage disaster risks? Through an economics lens, deciding the right level of spending on disaster risks seems straightforward: Just compare the marginal costs of disaster mitigation to the marginal benefits to determine which measures are worth undertaking. While this is true in theory, the uncertainties surrounding disasters make such calculations anything but simple. And in the wake of such uncertainty, coordinating a response locally — let alone globally — can be a monumental challenge.

An Ounce of Prevention
Should individuals or communities take steps to prepare for possible disasters or wait until after disaster strikes to respond? Investment in prevention or mitigation can be particularly attractive for areas where disasters are statistically somewhat predictable over the long term, especially areas exposed to repeated disaster risks from natural phenomena. Indeed, the bulk of disaster-related damage worldwide is caused by recurring weather events, like hurricanes or tornadoes. In many cases, preventing or blunting disaster — for example, building levees in New Orleans to prevent flooding or designing buildings and bridges in San Francisco to withstand earthquakes — can be much more cost effective than picking up the pieces after the fact. The Federal Emergency Management Agency (FEMA) estimates that every $1 spent on mitigation saves $4 in disaster relief spending.
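The cost-benefit logic described above can be put in simple expected-value terms. In the Python sketch below, every figure is hypothetical, with the payoff chosen to echo the roughly 4-to-1 FEMA estimate.

annual_probability = 0.05          # chance of a damaging event in a given year (hypothetical)
loss_without_mitigation = 400.0    # relief and repair costs if the event hits, $ millions (hypothetical)
loss_with_mitigation = 100.0       # residual costs if the event hits after mitigation (hypothetical)
annualized_mitigation_cost = 3.75  # cost of the mitigation project per year, $ millions (hypothetical)

expected_benefit = annual_probability * (loss_without_mitigation - loss_with_mitigation)
print(f"Expected annual savings: ${expected_benefit:.2f} million")
print(f"Savings per dollar of mitigation: {expected_benefit / annualized_mitigation_cost:.1f}")
print("Worth undertaking" if expected_benefit > annualized_mitigation_cost else "Not worth undertaking")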
Despite such attractive cost savings, federal spending in the United States leans heavily toward the latter. In 2014, FEMA spent $25 million on its pre-disaster mitigation fund, compared to over $6 billion spent on its disaster relief program. The 2016 budget proposes increasing funding for mitigation to $200 million, but that is less than the anticipated increase for the relief fund.

This allocation of resources may be questionable economics, but it seems to be consistent with the desires of the electorate. A 2009 article in the American Political Science Review by Andrew Healy of Loyola Marymount University and Neil Malhotra of Stanford University found that voters were much more likely to reward politicians who responded by offering relief after a disaster than those who invested in preventative measures in the first place.

The fact that people are reluctant to take precautions to avert costs that may occur in the future could partly reflect cognitive biases. Psychologist Daniel Kahneman shared the 2002 Nobel Prize in economics for his research on how people make decisions. When confronted with uncertain future events like a disaster, people tend to rely on their own experiences or heuristics rather than actual probabilities. This is true of preventative measures as well as taking steps to insure against bad outcomes. An experiment conducted by Howard Kunreuther, co-director of the Wharton Risk Management and Decision Processes Center at the University of Pennsylvania, Christian Schade of Humboldt University of Berlin, and Philipp Koellinger of Erasmus University Rotterdam found that individuals purchased disaster insurance based on their own subjective level of worry, even when the probability of disaster was clearly stated.

Chart: Total Economic Damages Caused by Natural Disasters Worldwide, 1960-2014, in billions of 2014 dollars. SOURCE: D. Guha-Sapir, R. Below, Ph. Hoyois, EM-DAT: The International Disaster Database, Centre for Research on the Epidemiology of Disasters, Université Catholique de Louvain, Brussels, Belgium.

Kunreuther says that many people view disaster insurance as an expensive investment with uncertain payoffs. In some cases, governments have subsidized disaster insurance, in part to make it more palatable. The National Flood Insurance Program (NFIP) provides insurance to homeowners living in floodplains at below actuarial rates. But economists have argued that this subsidy masks the true flood risks of those areas, leading to more development than would otherwise occur and actually increasing flood-related damages.

"There's a real trade-off," says Carolyn Kousky, a fellow at Resources for the Future, a nonpartisan think tank devoted to natural resource and environmental issues. "If you want people to buy, then you don't want it to be too expensive. But if you're not pricing it at a risk-based level, then it's not going to be a fiscally sound program." Indeed, the NFIP was forced to borrow roughly $18 billion from the U.S. Treasury to cover claims from Hurricane Katrina.

Insurers have looked for ways to make disaster insurance more affordable while also encouraging individuals to reduce their exposure to risk. For example, FEMA offers discounts on flood insurance for homeowners who elevate their homes above expected flood levels. But the core problem seems to be that, for better or worse, most people simply do not worry too much about disaster risks. Kousky found that even the spike in demand for insurance that usually follows disasters is largely driven by a requirement that individuals purchase insurance to receive federal disaster aid rather than a sudden feeling of vulnerability.

Even disaster experts are not immune to this mentality. During a recent blizzard that struck Washington, D.C., Kousky's family lost power at their house and she was forced to borrow a neighbor's generator. "And I thought, I study disasters for a living! Why haven't I gotten my family a generator?" says Kousky. "But it's just a classic example of how human behavior works. When it's a sunny day and there are other things to do, you don't think about it."

Coordinating Global Action
Convincing individuals to take steps to prepare for a disaster when the costs and timing are fairly well understood can be hard enough. Adding more uncertainty and more people to the equation only makes it that much more difficult.
The Challenge of Estimating Disaster Costs

Predicting future events is fraught with uncertainty. This is particularly true in the case of rare disasters like climate change, where there is little prior experience to draw from. This chart depicts estimates of the economic damages from global warming taken from different studies. The solid line represents the best fit for these estimates, or the most likely outcome given available data. The shaded region is a range of possible scenarios based on these estimates. For more extreme warming scenarios, it becomes much more difficult to estimate the likely effects. That uncertainty is depicted by the widening shaded region.

[Chart: cost to society (percent change in income) plotted against global warming (°C). SOURCE: Richard S. J. Tol, “Economic Impacts of Climate Change,” University of Sussex Working Paper Series No. 75-2015.]

Disasters like climate change, asteroid strikes, or pandemics of new infectious diseases have occurred rarely in human history, making it hard to estimate the benefits of action versus the costs of inaction. Climate change, for example, is characterized by deep uncertainties. Last December at a climate change summit in Paris, 195 nations pledged to take measures to limit overall warming to less than 2 degrees Celsius. Many scientists argue that crossing that threshold would result in a great deal of harm, but “it’s also a threshold in terms of how much we know,” says Hallegatte. “We have been through 0.8 degrees of climate change in the last century. So we have experience, in a way, for limited climate change. But when you go beyond 2 degrees, you get into a very different climate, and the uncertainty increases a lot.”

For levels of warming below 2 degrees Celsius, some economists estimate that global warming would actually have net positive effects, due in part to the benefits of longer growing seasons in some parts of the world. But beyond that point, estimates diverge wildly, with models forecasting anywhere from “moderate” losses due to more frequent flooding in coastal regions, more severe weather phenomena, and greater prevalence of tropical diseases, to more extreme events, like a shift in the Gulf Stream that warms Western Europe (see chart).

Avoiding catastrophes like the latter scenario means coordinating preventative steps on a global level. Such mitigation is a “public good,” which means it is impossible to exclude people from enjoying its benefits and their use of it does not diminish its availability to others. This means every participant will have an incentive to contribute less and “free ride” on the contributions of others. The “correct” action from the perspective of society as a whole might be for everyone to contribute to preventing a disaster, but if you suspect others may contribute enough on their own to avert the worst-case scenario, you have less incentive to act. “If I know everyone else has contributed, I’m probably going to be tempted to free ride if doing so is only going to increase the probability of disaster by a tiny bit,” says Scott Barrett, an economist at Columbia University who studies international cooperation to prevent disasters.
Governments can sometimes address this free-rider problem at a local level by collecting taxes to pay for disaster defenses. But Barrett notes that international institutions have historically had a much more difficult time doing the same thing on a global level. The Paris Agreement and the Kyoto Protocol that preceded it both relied on voluntary action from participants to reduce greenhouse gas emissions. And that opens the door for free riding. There are some exceptions. For example, Barrett says that the Montreal Protocol agreement to ban the use of ozone-depleting chemicals was a success partly because it identified a specific, easily attainable goal (the costs of shifting away from those chemicals were relatively low). The agreement also reduced uncertainty regarding participation by threatening trade sanctions against countries that failed to take action. “Our ability to avert disaster depends very heavily on the characteristics of the disaster itself and how they relate to our institutions,” says Barrett. One solution for dealing with the uncertainties of something like climate change, he says, is to focus global efforts on achieving a single goal, like adopting a specific technology that will reduce emissions, rather than attempting to gain cooperation on a set of nebulous long-term policies.  Choosing a Global Response Getting countries to agree to address global disasters is one thing; choosing the right course of action is another. This is especially important if a disaster-related measure at the national level makes a global response less likely. In the case of infectious diseases, for example, countries often stockpile vaccines or treatments for their residents to receive in the case of an outbreak. While this allows individual countries to mitigate damages to their citizens, it could be more efficient from a global perspective for those same countries to instead form a shared stockpile of medicines to treat outbreaks at their source. The National Academy of Medicine recommends such a plan in a 2016 book, blaming the haphazard nature of the international response to the 2014 Ebola outbreak in Africa for “economic costs that were far greater than they could have been.” A preventative approach to global disasters may often seem like the most efficient response in hindsight, but it is not always so clear beforehand. Prevention of some global threats,  like climate change, may demand serious sacrifices or lifestyle changes. Curbing worldwide greenhouse gas emissions, perhaps indefinitely, would entail long-running productivity costs. In developed nations, that has implications for the wealth of both current citizens as well as future generations, possibly making them poorer in return for uncertain benefits. Future generations have also historically been wealthier than their parents, suggesting that they might be in a better position to afford costly mitigation efforts — provided that there is still enough time for them to act. In developing nations, forgoing cheap fossil fuels may inhibit their ability to industrialize and pull themselves out of poverty. An alternative approach could be for countries to make more short-run investments to prepare for eventual climate change. This might include measures like building  levees to protect against rising sea levels or developing new agricultural methods to cope with higher temperatures. Developing nations are more exposed to these damages, as their economies tend to be more reliant on agriculture. 
But Nobel Prize-winning economist Thomas Schelling has argued that instead of focusing entirely on prevention, developed nations could devote resources to helping boost the economies of their less-developed neighbors, making them more resilient to climate change-related disasters. “One way to make people less vulnerable to disasters is to make them richer,” says Hallegatte.

As with regularly recurring disasters, determining the most efficient measures for rare or theorized disasters that might occur on a global scale is largely a cost-benefit exercise. But the infrequency of these types of disasters makes the calculation even more difficult.

Asteroid Defense and Types of Public Goods

In 1908, an asteroid roughly 60 meters in diameter exploded over Siberia with a force a thousand times more powerful than the nuclear bomb dropped on Hiroshima. Fortunately, the event occurred over a largely uninhabited forest; had it happened above a major city, the losses would have been catastrophic. While intercepting deadly asteroids seems like something from a movie, the idea is not confined to the realm of science fiction. The National Aeronautics and Space Administration (NASA) has successfully landed a spacecraft on an asteroid and used another to intercept and collide with a comet. These missions open the possibility of developing spacecraft designed specifically to deflect asteroids. Thanks to the great distances involved, diverting an object in space by just a small amount would generally be enough to prevent impact — provided the intervention occurs far enough in advance.

Both the United States and the United Kingdom have made some efforts at tracking “near Earth objects” (NEOs) that could pose a threat. But to date, scientists have discovered only a fraction of the asteroids in our solar system. As recently as 2013, astronomers were caught by surprise when an asteroid roughly 20 meters in diameter exploded as it entered the atmosphere over Russia, damaging thousands of buildings in six cities and injuring as many as 1,500 people. “People tend to think about the really big asteroids that would destroy everything, like in the movies,” says Scott Barrett, an economist at Columbia University. “But the much bigger risk is the medium-size asteroids because they’re more common.”

Like other types of disaster defense, protection against asteroids is a public good. Indeed, George Mason University economists Tyler Cowen and Alex Tabarrok devoted an episode of their popular online economics program, Marginal Revolution University, to asteroids as a case study in why markets tend to undersupply public goods. In the early 1980s, economist Jack Hirshleifer at the University of California, Los Angeles proposed categories for public goods. One type is “summation” goods, which depend on the collective effort of all participants to succeed. An example would be reducing greenhouse gases in the atmosphere: Action taken by one country to cut emissions would not be sufficient if other countries continue to pollute. This is the classic public good, and economic theory predicts that it will be underprovided by voluntary participants due to the presence of free riding. In contrast, what Hirshleifer calls a “best-shot” good can be successfully provided by one party acting alone. Asteroid defense is an example of this; only one successful interception is necessary to protect everyone. In theory, this could make the provision of such a good more likely.

Wealthy nations have the most to lose economically from an asteroid strike and are in a better economic position to fund defensive measures unilaterally. Other factors certainly play a role in such decisions, but developed nations such as the United States, the United Kingdom, and the broader European Union have been the most active in funding efforts to track and defend against NEOs. On the other hand, free-riding problems could be even more pronounced with best-shot goods, as Hirshleifer found in experiments conducted with Glenn Harrison of Georgia State University. But in a bit of good news, Hirshleifer and Harrison also found that individuals contributed more to all classes of public goods than simple theory would have predicted.

— Tim Sablik

Economists “discount” the expected costs of disasters that could occur in the distant future to compare them in real terms with the costs of response measures undertaken today. If the costs of taking action today are less than the expected cost of a future disaster (taking into account the probability of its occurrence), then taking action is economically preferable. Of course, such calculations are highly sensitive to the chosen discount rate. Lower rates will make future benefits seem larger in present value, making costly responses today more attractive. For the very long time horizons involved in phenomena like climate change, even small changes in the discount rate can result in very different recommendations. Traditionally, economists have used the rate of return on an alternative investment, like bonds or private capital, as a discount rate. But in the case of climate change, economists have proposed using discount rates ranging from as low as about 1 percent to nearly 5 percent.

Because of this uncertainty, trying to choose one optimal response may not be the best approach. In the case of climate change, Hallegatte and his colleagues at the World Bank have argued that developed nations can help developing countries grow their economies in a way that makes them resilient to climate change while also helping reduce global emissions. By using more efficient, greener technologies from the start, developing nations can “leapfrog” over older means of industrialization in much the same way that many of them skipped landlines and went straight to cellphones. “These countries have a fantastic opportunity today to build things right in the first place and avoid the type of difficult retrofits that we’re considering in developed countries at the moment,” he says.

Robert Lempert, director of the Frederick S. Pardee Center for Longer Range Global Policy and the Future Human Condition at RAND Corporation, a policy think tank, has also advocated flexibility. He and his colleagues at RAND developed a model for disaster response that flips the typical approach on its head. Rather than start from an intractable problem and attempt to determine the best solution, their model tests different solutions under a variety of possible scenarios to find the one that performs the best across a wide range of possible futures.
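One reason no single answer commands agreement is the discounting arithmetic described above. Here is a minimal sketch, assuming a single lump-sum damage a century from now; the damage figure, horizon, and rates are illustrative, not estimates from the article or from any climate model.

```python
# A minimal sketch of how sensitive a present-value comparison is to the
# discount rate. The damage, horizon, and rates are illustrative assumptions.

def present_value(future_cost, rate, years):
    """Discount a cost incurred `years` from now back to today's dollars."""
    return future_cost / (1 + rate) ** years

damage = 100e12   # assume $100 trillion of climate-related damage
horizon = 100     # assume it is incurred 100 years from now

for rate in (0.01, 0.03, 0.05):
    pv = present_value(damage, rate, horizon)
    print(f"discount rate {rate:.0%}: present value ${pv / 1e12:.1f} trillion")

# At 1 percent the present value is roughly $37 trillion; at 5 percent it is
# under $1 trillion. The same future damage can justify very different levels
# of spending today depending on the chosen rate.
```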
“It becomes easy to get hung up on not knowing the shape or timing of potential disasters and getting locked into a discussion over these uncertainties as opposed to focusing on  the actions that one can take to make the system more robust, more resilient, and tuning it to do the best job possible of handling a wide range of even extreme disasters,” says Lempert.  Preparing for (Possible) Doomsday Just how much should we worry about really extreme disasters? The extinction-level asteroid (see sidebar), the climate change so severe it cripples world food production, or the new infectious disease that becomes a worldwide pandemic? These events might seem to belong more in the realm of summer blockbusters than serious policy discussion, but some, like Harvard University economist Martin Weitzman, argue they are not as rare as many people assume. Disasters in general suffer from what economists call a “fat-tail” problem. In a normal statistical distribution, a classic bell curve, divergences from the mean in either direction are both increasingly rare and do not differ too drastically from the average. This is not true of fat-tail distributions. While extreme events are still rarer than the average, they can deviate from that average by much larger amounts, meaning that the next event could be orders of magnitude worse than the record holder up to that point. In extreme cases, there is essentially no limit to how bad the next disaster could be. Under such conditions, Weitzman says traditional cost-benefit analysis breaks down. It could be correct to spend any amount of resources on prevention if doing so means averting a true catastrophe. That doesn’t necessarily provide a useful framework for making decisions, though. Weitzman allows that such large uncertainties may make it impossible to obtain agreement on an optimal solution before the risks become more apparent — at which point it may be too late to implement those solutions. With climate change, for example, cutting carbon emissions is not an effective plan to reduce global temperatures once they have already risen significantly. Given the reluctance to devote significant resources to avert theoretical future catastrophes, accepting suboptimal responses after the fact may be the best we can hope for, Weitzman has written. “We tend to be unwilling to take strong steps to avert a crisis, but then after the crisis occurs we are more willing to do what we should have done all along,” says Barrett. In the case of global threats, “you need to convince the whole world to do what it wouldn’t want to do normally. And that is unprecedented.” EF  Readings Barrett, Scott, and Astrid Dannenberg. “Negotiating to Avoid ‘Gradual’ versus ‘Dangerous’ Climate Change: An Experimental Test of Two Prisoners’ Dilemmas.” CESifo Working Paper No. 4573, January 2014. Hallegatte, Stéphane. “How Economic Growth and Rational Decisions Can Make Disaster Losses Grow Faster Than Wealth.” World Bank Policy Research Working Paper No. 5617, March 2011.  14  E CO N F O C U S | F O U RT H Q U A RT E R | 2 0 1 5  Harrison, Glenn W., and Jack Hirshleifer. “An Experimental Evaluation of Weakest Link/Best Shot Models of Public Goods.” Journal of Political Economy, February 1989, vol. 97, no. 1, pp. 201-225. Kousky, Carolyn. “Informing Climate Adaptation: A Review of the Economic Costs of Natural Disasters.” Energy Economics, November 2014, vol. 46, pp. 576-592. Schade, Christian, Howard Kunreuther, and Philipp Koellinger. 
“Protecting Against Low-Probability Disasters: The Role of Worry.” Journal of Behavioral Decision Making, December 2012, vol. 25, no. 5, pp. 534-543.

GETTING UNSTUCK

Washington, D.C., is notorious for congestion. Can smarter pricing provide a way out of clogged highways, packed parking, and overburdened mass transit?

BY HELEN FESSENDEN

For millennia, philosophers have wrestled with the question, “What is time?” For economists, finding the answer is a bit easier: Time is whatever people are willing to pay for it, whether it’s a hotel or flight during peak season, an Uber cab on a busy Friday or Saturday night, or express package delivery.

In Washington, D.C., however, the challenge of valuing time has become an acute problem that affects everyone: traffic chaos. In the last decade, metro D.C. has ranked close to or at the top of national congestion surveys. According to the most recent annual study by the Texas A&M Transportation Institute and INRIX, Inc., for example, the District continues to beat Los Angeles, San Francisco, and New York as the national leader in gridlock. The report calculated that the average commuter in the D.C. region who drives during peak times frittered away 82 hours, or almost three and a half days, in 2014 due solely to congestion.

Many residents assume that increased gridlock is a price to pay for several positive trends in the last two decades, namely, strong population and job growth. In the greater D.C. region, the population surged from 4.2 million in 1990 to 6 million in 2014, while total employment jumped from 2.9 million to 4.1 million. Helped by falling crime rates in the city and, until recent years, robust government spending and plentiful federal jobs, the local economy also held up far better than most cities during the recession.

In principle, the region’s extensive network of mass transit options could help absorb some of these stresses. The D.C. Metrorail system is the second-busiest in the nation. The area is also served by regional rail and local and commuter bus options. Around 700,000 riders use Metrorail daily, while another 700,000 use bus or regional rail. But transit ridership is actually falling, amid widespread woes with Metro service, reliability, and safety. And the aggregate rise in congestion suggests that the transit capacity that has been built out hasn’t been enough to handle rising demand and evolving commuting patterns, including for those residents in farther reaches of D.C.’s suburbs. Economists have long argued that putting a price on congestion is the way to produce more efficient outcomes. Washington, D.C., can provide a textbook example of both the challenges and potential solutions.

[Chart: D.C. Traffic Congestion: A Comparison. Despite a slight improvement since 2010, the average Washington-area commuter still loses more hours per year to traffic jams than commuters in the next three most congested very large urban areas, defined as those with more than 3 million in population. The chart plots hours per year lost to congestion, 1995-2014, for Washington, D.C., Los Angeles, San Francisco, and New York City. SOURCE: Texas A&M Transportation Institute and INRIX, Inc. Annual Mobility Scorecard, 2015]

Free Riders

To economists, one basic reason for the congestion crisis is a market failure. Any road, as long as it’s un-tolled, presents a classic problem of externalities: All drivers can access it without fully bearing the additional costs that arise when that particular road gets crowded. Each added driver imposes externalities on others by adding to congestion that slows traffic and cuts into productive working hours. In addition to externalities imposed on other drivers, there are other costs imposed on society via higher emissions that hurt the environment. (By some estimates, driving accounts for a third of carbon emissions from energy use.)

In short, the market failure occurs because drivers are underpaying for that good by not fully internalizing the social costs of their decisions. For their part, planners could meet higher demand for roadways with extra supply by building more lanes, but those solutions take money (from the taxpayer) and require years to execute — and more importantly, additional lanes generally don’t ease congestion in the long run because they don’t correct the market failure. Finally, there is the issue of parking, which suffers from a similar set of issues: A driver who searches for an open spot produces externalities while cruising around (more emissions and more traffic).

To address these inefficiencies, then, many economists and planners focus on the demand side — namely, establishing a pricing system that requires people to internalize the costs they impose on others when they commute. This way, a scarce resource is allocated more efficiently to those who value it the most. In both the United States and abroad, experiments in demand management have been underway for decades, but advances in technology, such as smartphones and GPS, now give people far more information to use in making transportation decisions. And these innovations are taking root in the Washington metro region, as are efforts to overhaul mass transit so that it’s more responsive and efficient as an alternative.

Name Your Price

The origins of demand management go back about a century, in the work of economists Arthur Pigou and Frank Knight. Pigou formalized the idea of externalities and proposed tolls as a solution for restoring efficiency on a road suffering from congestion externalities, such as wasted time and productivity, wear and tear on roads, and more accidents. Knight built on this idea but argued for private road tolling as a way to force drivers to pay the marginal cost that they impose on others. If private firms owned these roads, he argued, a proper application of property rights would set toll pricing efficiently. In the 1950s and 1960s, their work influenced a new generation of transportation economists, including William Vickrey, who promoted congestion pricing for public transit and, later, for roads. In contrast to Knight, he saw a government role in setting the toll and argued that efficient pricing should, among other things, reflect the trip’s impact on all other traffic from start to finish. Tolling, in other words, makes the driver pay a price closer to the social cost of road maintenance, plus externalities such as emissions and congestion affecting others.

Congestion arises not just from tangible factors such as population growth, city size, or even density, but also from the failure to manage demand across existing capacity. Generally speaking, any given mode of transportation isn’t being used to full capacity all the time, whether it’s by highways, buses, or bike paths.
Even in the case of roads, Federal Highway Administration research shows that more than half of rush-hour drivers are not commuters, but people with some discretion as to when and how to travel. The same research concludes that you don’t need to remove many of those noncommuting drivers to make a difference — diverting just 5 percent of vehicles from a clogged roadway can substantially improve traffic.

More Lanes, More Problems?

For decades, the most popular solution to congestion was building additional lanes or roads. The problem is that creating additional road capacity doesn’t reduce traffic in the long run, because it simply encourages more people to take the roads rather than seek alternatives to road commuting, a dilemma known as “the fundamental law of road congestion.” A study by economists Gilles Duranton at the University of Pennsylvania and Matthew Turner at Brown University estimated that a 10 percent expansion of interstate lanes causes, over time, a roughly equal percentage increase in the vehicle-kilometers traveled, and that any congestion-reduction benefit gained by a new lane tends to disappear after 10 years. In addition, expanding lanes is expensive, between $10 million and $15 million per mile in urban areas. Still, the approach remains politically appealing, including in the D.C. region. As a case in point, Virginia lawmakers recently struck a deal in which I-66, one of the busiest highways in the area, will get one more lane inside the Beltway, possibly costing up to $140 million, as part of a mix of enhancements intended to better regulate traffic.

This is where demand management comes in. One way to shape demand is to give incentives for drivers to carpool, in exchange for faster speeds. Across the country, many states have established high-occupancy vehicle (HOV) lanes to discourage single-occupancy driving and take more vehicles off the road. In HOV lanes, only vehicles with multiple passengers, such as carpools, vanpools, and buses, are allowed access during peak times, while all other traffic is confined to general-purpose lanes. HOV lanes are now widespread, but they pose new problems. Catching cheaters can be difficult, for example. But the biggest challenge is that HOV lanes are often underutilized while the general-purpose lanes remain congested. One reason: HOV rules affect only a small subset of drivers — those who are willing or able to carpool. A far greater share of the population lives alone, has a commute that doesn’t lend itself to sharing, or simply prefers driving alone.

Another solution is tolling, popular with economists but widely hated by drivers. In some international cases, such as London, Singapore, and Stockholm, an anti-congestion “cordon” toll applies to all drivers heading into those cities during peak times. This solution has little political backing in the United States, however. Meanwhile, interstates have certain restrictions on using federal public money to set up new lanes that are “pure” tolls. At the same time, cash-strapped states are keen to find revenue for infrastructure maintenance and improvements. So policymakers are taking a new approach: using variable pricing for designated lanes on high-demand roadways. These are most commonly known as “high occupancy tolling” or HOT lanes. In some cases, they are also termed “express lanes.”

[Photo: Early morning traffic on I-95 near Washington, D.C., splits off into toll and free lanes.]
PHOTOGRAPHY: TRANSURBAN  Some Like It HOT Under this approach, a lane is designated as an HOV/toll lane, but the toll varies constantly during peak times, depending on how full the road is. HOV drivers may still use the lane without paying, but solo drivers now have a choice to either pay for that lane or stay in the general-purpose lane. Typically, that driver has a few minutes to see the real-time fare and decide which lane to take. Payment and enforcement is handled through transponders (such as an E-ZPass) so that traffic is not held up at toll booths. In effect, a certain amount of congestion in the general lanes is required to incentivize at least some drivers to leave the general-purpose lanes. But in theory, welfare should improve for the entire driving population, because all lanes are better utilized once the HOV/HOT lanes absorb more traffic. Private companies generally manage these schemes but frequently some revenue is set aside for the public, often for improving mass transit. One well-known case is San Diego’s I-15, which saw sharp jumps in bus ridership and carpooling after it adopted HOT lanes as part of a mix of improvements. Proponents note that a core element of this strategy was adding more transit options to help people who don’t have a car — including low-income groups and the nondriving elderly — which in turn raised popular support for the tolling component. The Intercounty Connector in Maryland has used allelectronic, variable tolling since 2011. In Northern Virginia, some of the busiest arteries have converted, or will soon convert, their HOV lanes into HOT lanes. In 2014, the Virginia Department of Transportation (VDOT), along with a private firm, Transurban, transformed the 29-mile barrier separating HOV lanes on I-95 into HOT lanes south of the Beltway, collecting variable tolls during all hours. In 2015, VDOT and Transurban issued a preliminary “snapshot” study showing  that average speeds during peak hours did rise substantially in the un-tolled lanes while staying largely unchanged (i.e., relatively fast) in express lanes. In some stretches, especially farther out from D.C., that speed increase ranged from 57 to 81 percent. VDOT and Transurban are proposing to extend the HOT lanes northward on I-395 inside the Beltway, and VDOT is also moving forward with HOT lanes on I-66. There remains, however, the question of whether tolling is economically fair in light of its distributional effects. The time savings that congestion pricing brings are likely to be worth more to affluent individuals, who tend to have a higher opportunity cost of time in terms of wages. For lower-income individuals, the toll they are forced to pay is more likely to exceed the benefit they receive from reduced congestion. A highway divided into both HOT and general-purpose lanes addresses this by giving drivers the choice between paying with time versus paying with money, although this trade-off may strike some as unfair. These distributional effects can be offset when the revenues are used to fund commuting alternatives, including those that benefit lower-income groups, and this helps gain public support as well. In a 2013 survey, the Metropolitan Washington Council of Governments and National Capital Region Transportation Planning Board found that participants in both upper- and lower-income groups supported congestion pricing by substantial majorities, provided that it offered in return more transportation options that made a difference to their commute.  
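The Pigou-Vickrey logic behind these tolls, charging each driver roughly the delay cost he or she imposes on everyone else, can be sketched with a hypothetical congestion curve. The travel-time function, capacity, and value of time below are textbook-style assumptions for illustration, not parameters used by VDOT, Transurban, or any study cited in this article.

```python
# A minimal sketch of the marginal-cost logic behind congestion tolls.
# The congestion curve and all parameters are illustrative assumptions.

FREE_FLOW_MINUTES = 20.0   # assumed uncongested travel time for the trip
CAPACITY = 4000.0          # assumed vehicles per hour the road handles well
VALUE_OF_TIME = 25.0       # assumed dollars per hour of travel time

def travel_time(flow):
    """Assumed congestion curve: delay grows steeply as flow nears capacity."""
    return FREE_FLOW_MINUTES * (1 + 0.15 * (flow / CAPACITY) ** 4)

def external_cost_per_driver(flow, step=1.0):
    """Extra time an added driver imposes on everyone else, priced in dollars."""
    extra_minutes_each = travel_time(flow + step) - travel_time(flow)
    total_extra_minutes = extra_minutes_each * flow
    return total_extra_minutes / 60.0 * VALUE_OF_TIME

for flow in (2000, 3500, 4500):
    cost = external_cost_per_driver(flow)
    print(f"{flow} vehicles/hour: externality of about ${cost:.2f} per added driver")
```

The point of the sketch is only that the externality is tiny on an empty road and large near capacity, which is why a toll that varies with how full the road is comes closer to efficient pricing than a flat charge.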
The Myth of Free Parking

Once drivers finish their trips, an all-too-common problem in any city is crowded metered street parking. The traditional on-street pricing approach sets a flat hourly rate, payable at all meters all day long. But this price doesn’t adjust to demand at peak times. Drivers then encounter blocks and blocks of full parking, forcing them to spend extra time and fuel looking for a spot.

Economist Don Shoup at the University of California, Los Angeles has spent decades researching the inefficiencies of the parking market — including the high cost of minimum parking requirements — but he is probably best known for his work on street parking. In 2011, San Francisco applied his ideas in a pilot project to set up “performance pricing” zones in its crowded downtown, and similar projects are now underway in numerous other cities — including, later this spring, in D.C. To Shoup, the optimal rate, or “right price,” as he calls it, for on-street parking responds to demand, similar to the approach behind variable tolls. The right price for on-street parking is the lowest price that will leave one or two spaces open on every block, thereby dramatically reducing the amount of time spent cruising, a chief source of urban congestion. “I had always thought parking was an unusual case because meter prices deviated so much from the market prices,” says Shoup. “The government was practically giving away valuable land for free. Why not set the price for on-street parking according to demand, and then use the money for public services?”

Taking a cue from this argument, San Francisco converted its fixed-price system for on-street parking in certain zones into “performance parking,” in which rates varied by the time of day according to demand. The idea was that as demand rose during peak times on popular blocks, and fell during off-peak times on less popular blocks, drivers would factor parking prices into their decisions about where to park and how long to stay. If prices were too high for drivers on some blocks, they could park on lower-priced blocks nearby.

Hitting the Target

In its initial run, the project, dubbed SFpark, equipped its meters with sensors and divided the day into three different price periods, with the option to adjust the rate in 25-cent increments, with a maximum price of $6 an hour. The sensors then gathered data on the occupancy rates on each block, which the city analyzed to see whether and how those rates should be adjusted. Its goal was to set prices to achieve target occupancy — in this case, between 60 percent and 80 percent — at all times. There was no formal model to predict pricing; instead, the city adjusted prices every few months in response to the observed occupancy to find the optimal rates.

The results: In the first two years of the project, the time it took to find a spot fell by 43 percent in the pilot areas, compared with a 13 percent fall on the control blocks. Pilot areas also saw less “circling,” as vehicle miles traveled dropped by 30 percent, compared with 6 percent on the control blocks. Perhaps most surprising was that the experiment didn’t wind up costing drivers more, on net, because demand was more efficiently dispersed. Parking rates went up 31 percent of the time, dropped in another 30 percent of cases, and stayed flat for the remaining 39 percent. The overall average rate actually dropped by 4 percent.
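The procedure described above amounts to a simple feedback rule. Here is a minimal sketch; the 60 to 80 percent target band, the 25-cent step, and the $6 cap come from the article, while the price floor and the clamping are assumptions added for illustration.

```python
# A minimal sketch of an SFpark-style occupancy-based adjustment rule.
# Target band, step size, and cap are from the article; the floor is assumed.

TARGET_LOW, TARGET_HIGH = 0.60, 0.80   # desired share of spaces occupied
STEP = 0.25                            # rates move in 25-cent increments
MIN_RATE, MAX_RATE = 0.25, 6.00        # $6/hour cap; floor is an assumption

def adjust_rate(current_rate, observed_occupancy):
    """Nudge a block's hourly meter rate toward the target occupancy band."""
    if observed_occupancy > TARGET_HIGH:
        current_rate += STEP       # too full: raise the price
    elif observed_occupancy < TARGET_LOW:
        current_rate -= STEP       # too empty: lower the price
    return min(MAX_RATE, max(MIN_RATE, current_rate))

# Example: a crowded block creeps up, a quiet block drifts down.
print(adjust_rate(2.00, 0.92))   # 2.25
print(adjust_rate(2.00, 0.45))   # 1.75
print(adjust_rate(6.00, 0.95))   # 6.00 (already at the cap)
```

Run every few months on observed occupancy, a rule like this needs no demand model at all, which is essentially how the city proceeded.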
18  E CO N F O C U S | F O U RT H Q U A RT E R | 2 0 1 5  SFpark has become the most well-known of these experiments, but other cities, especially in California, have also adopted this approach. And this spring, Washington will join the list as well. The neighborhood of Penn Quarter/ Chinatown will soon launch a pilot project similar to SFpark but with fewer sensors; it will use a broader mix of parking data from spot sampling, parking enforcement data, and cellphone payment data to estimate pricing per block. A driver can use an app to see what the probability of finding a spot would be on any given block, and rates will be adjusted every three months if needed. “Penn Quarter is an ideal environment because we can study the interaction between performance parking and an array of modes — whether Metro, bus, or bike-share,” explains Soumya Dey, director of research and technology transfer at the District Department of Transportation. “And as part of this, we’re also doing a study to see just how much congestion in D.C. is caused by cruising.”  Incentivizing Mass Transit Once people opt to leave their cars, of course, they need mass transit or other alternative modes, such as biking, walking, or car-sharing. And in D.C., where a large plurality of city residents use transit daily and substantial numbers use it to commute from the suburbs, transit is an essential part of daily life. This is one reason why the increasing woes of Metrorail — frequent delays due to deferred maintenance issues, declining reliability, and safety concerns — have dominated headlines. Under a new general manager, the Washington Metropolitan Area Transit Authority (WMATA) is launching an initiative to rebuild ridership and restore reliable service. Following a system-wide safety audit, it is launching a yearlong overhaul addressing deferred maintenance that will require disruptions. For all of these problems, however, the presence of such an extensive transit system opens up a way for economists to look at the challenge of externalities and demand management in reverse. For example, there is no additional cost to adding one more rider to an underutilized, half-empty subway or bus during off-peak hours. Furthermore, transit can produce a positive externality by reducing passengers’ carbon footprint and taking vehicles off the road. By extension, demand management can work the other way by encouraging riders with more flexible schedules to take transit at different times, including at peak-shoulder and off-peak times. This approach, in theory, could not only take potential drivers off the roads, but also spread out transit ridership more evenly. Metrorail has long used a variable pricing system that takes both distance traveled and peak/off-peak times into account. And a few years ago, it temporarily tried a “peak of peak” plan that added an extra pricing tier for the busiest times, both to shape demand and bring in extra revenue. The plan was unpopular and seen as overly complicated, so it was dropped. But now, WMATA is launching a pilot project to see how a discounted, unlimited-access pass will  work in lieu of raising fares, with the chief aim being to increase ridership. In April, WMATA began offering a new product called SelectPass in which a passenger determines the price of his or her typical daily round trip, multiplies it across 18 days, and then pays that amount as the blanket fare for the entire month. 
As long as any given trip, no matter when it’s taken, doesn’t exceed this preset estimate, the cost is covered for the month. (Only if the passenger takes a longer trip is there any additional charge.) The idea is that a passenger taking transit every workday should save at least 20 percent compared to standard fares paid out over the same period, and he or she can adjust daily travel around the benefit of unlimited Metro travel during the day. WMATA is running this pilot project through June and will then assess longer-term strategy, including how to price different tiers of passes. But ideally, in the long term, its proponents say the convenience and cost factors may even grow the ridership population as more people will have an incentive to use Metro “for free,” in effect, with their SelectPasses rather than take their cars. In the numerous European cities that have tried similar strategies on discounted blanket pricing, both aggregate ridership and revenue have risen as a result. “The problem is that Metro does have an all-access rail pass, but it’s priced at the maximum fare, so it’s prohibitively expensive for most riders,” explains Mark Schofield, WMATA’s director of financial planning and analysis. “So this pilot will try to address this cost issue in order to grow ridership.”  Too Much Of A Good Thing? Another demand-management issue for economists is how employers structure commuter benefits. In some major cities, including New York, San Francisco, and D.C., an employee in a firm with 20 or more employees can opt for a pre-tax deduction to cover parking or transit; in addition, many employers offer a mix of benefits such as free or discounted parking and transit subsidies. To see how these options interact, two researchers at Virginia Tech, doctoral candidate Andrea Hamre and associate professor Ralph Buehler, recently analyzed data on a representative sample of more than 4,600 commuters from the urban core and inner suburbs of the D.C. metro region: About 70 percent drove alone, 24 percent used transit, and 6 percent walked or biked. The research question: Is it enough to offer incentives to take alternatives  to driving, or do you need to directly discourage driving itself if you want employees to take mass transit, bike, or walk? The results suggested that free parking overwhelmed all other benefits. For example, if commuters were offered both free parking and transit benefits, the probability that they would still opt to drive alone to work was 83 percent — a higher probability, in fact, than if the employer offered no transit benefits at all (76 percent). In the entire mix of benefit transit options, driving alone won out every time as long as free parking was offered. Conversely, if the employer took away free parking but offered help on transit, the probability of driving alone fell to 23 percent, with the rest choosing transit. As these results and similar findings become better known, some transportation experts are promoting parking “cash out” options for employers to offer employees. Under these schemes, employees who waive their parking benefits get cash back directly. Some groups are working with the D.C. City Council in hopes of having legislation introduced on this proposal later in the year. “Transit benefits seem to be most effective at encouraging mode shift when they are offered in the absence of free parking,” says Hamre. 
“In the United States, we’ve done a good job of steadily increasing benefits for alternatives to driving, but we need to put those benefits for alternatives within the overall context of relative prices across all modes — and this means recognizing how they compare to the cost of car parking, and how commuters may respond when offered benefits for both driving and alternatives.”

Washington’s congestion crisis took years to develop and will likely take years to address. But there are signs of progress in tandem with these new experiments. The National Capital Region’s Transportation Planning Board released a survey in early 2016 showing that the percentage of commuters opting for transit, biking, and telecommuting jumped from 15 percent to 21.4 percent between 2000 and 2014, while the share of those driving alone even dropped slightly, from 67.7 percent in 2000 to 65.1 percent in 2014. The growth of car-sharing, the popularity of expanded bike paths, and the prospect of more express bus routes are likely to change commuting dynamics even more in coming years. “As we look at all these challenges, we see the need to do more pilots, get more experience, and be willing to fail if necessary,” says WMATA’s Schofield. “This is a brave new world.” EF

Readings

“Approaches to Making Federal Highway Spending More Productive.” Congressional Budget Office, February 2016.

“Congestion Pricing: A Primer: Overview.” U.S. Department of Transportation Federal Highway Administration, October 2008.

Duranton, Gilles, and Matthew A. Turner. “The Fundamental Law of Road Congestion: Evidence from US Cities.” American Economic Review, October 2011, vol. 101, no. 6, pp. 2616-2652.

Hamre, Andrea, and Ralph Buehler. “Commuter Mode Choice and Free Car Parking, Public Transportation Benefits, Showers/Lockers, and Bike Parking at Work: Evidence from the Washington, DC Region.” Journal of Public Transportation, 2014, vol. 17, no. 2, pp. 67-91.

“SFpark: Pilot Project Evaluation Summary.” SFMTA Municipal Transportation Authority, June 2014.

GOODBYE, GLOBALIZATION?

Why trade growth has slowed down — and what it might mean for the global economy

BY JESSIE ROMERO

The first container ship sailed from Newark, N.J., to Houston, Texas, in 1956, marking the beginning of a revolution in global shipping and transportation. Thirteen years later, ARPAnet sent its first message from a computer at the University of California, Los Angeles to a computer at Stanford University, sparking the modern Internet. Over the next several decades, further advances in transportation and communications would make the world increasingly interconnected and enable goods to be shipped all over the world. Today, if you’re like most consumers, the shirt you’re wearing is made out of cotton grown in the southern United States, milled into fabric in India or China, and cut and sewn into clothing in Bangladesh.

But after decades of rapid growth, trade suffered its greatest drop in the postwar era during 2008 and 2009, an episode known as the “Great Trade Collapse.” Today, growth rates are still well below the previous trend. The reasons for this sluggishness are unclear: Are there lingering effects from the global financial crisis and recession, or has some fundamental change occurred in the world economy? Either way, the answer has important implications for development — and maybe for world peace.

Why Trade Boomed

For much of the postwar era, world trade grew faster than world GDP.
Between 1950 and 2007, the value of world goods exports increased an average of 11 percent per year, compared to average GDP growth of 3.6 percent (calculated at market exchange rates), according to data from the World Trade Organization (WTO). The value of exports is highly sensitive to changes in prices and exchange rates, so economists also measure exports by volume to account for these changes. The volume of world goods exports also increased more quickly than GDP, averaging 6 percent per year. Goods make up the majority of total world exports. Between 1960 and 2008, according to the World Bank, the world exports-to-GDP ratio increased from 12 percent to 29 percent (see chart). The World Bank’s measure includes both goods and services.

[Chart: World Exports-to-GDP Ratio, in percent, 1960-2013. NOTE: Exports includes goods and services. Shaded areas denote global recessions as defined by the IMF. SOURCE: World Bank World Development Indicators]

Several factors contributed to rapid growth in trade. One was the world’s increasing openness to trade. There was a proliferation of new trade agreements during the 1990s, including the Uruguay round of negotiations under the General Agreement on Tariffs and Trade (the precursor to the WTO) and the North American Free Trade Agreement. By 2001 there were more than 200 regional trade agreements, although not all of them lasted. Another factor was the dissolution of the Soviet Union in 1991, which started a process of economic liberalization in Eastern European countries and allowed them to begin trading with the world.

But perhaps the most crucial entrant into the global economy was China. In 1978, China’s new leader, Deng Xiaoping, announced an “open door policy” to begin opening up China to the world market. Over the next several years, he set up Special Economic Zones to encourage foreign direct investment, laying the groundwork for China to become the world’s factory. In just over a decade, China almost doubled its share of world trade, moving from being 32nd in the world in export volume to 13th. By 2014, China was the world’s largest exporter and second largest importer of goods. Overall, China exports about 12 percent and imports about 10 percent of the world’s goods.

It’s no coincidence that the rise of China in world trade coincided with a rise in “global value chains” (GVCs), in which a country imports intermediate goods to produce goods for export, rather than for domestic consumption. (See “American Made,” Region Focus, Fourth Quarter 2011.) This vertical specialization, as the process is called, accelerated in the 1990s as decreased transportation and communications costs made it feasible and profitable for companies to split the production of their goods across different countries, depending on where a step or component was cheapest. Quite often, that was in China.

The increase in GVCs significantly increased trade, but the way trade is measured might have made that increase appear greater than it was. “Measured trade depends on and is affected by the back and forth movement of these intermediate inputs,” says Aaditya Mattoo of the World Bank. “Since the 1990s was when the great global fragmentation of production took place, that’s why we saw that as a period of dramatically faster trade growth compared to GDP growth.”

Because goods produced via a GVC cross borders multiple times, gross measures of trade include double counting. “Imagine a semiconductor gets made in Malaysia, and then shipped to Taiwan to have some component added, and then shipped to China where it’s added to something else, and then shipped to the United States where it’s finally consumed. That little semiconductor is being counted every time it’s jumping,” says Caroline Freund, a senior fellow at the Peterson Institute for International Economics. Value-added trade, in contrast, counts only the value added in each country. For example, if the semiconductor was worth $50 when it left Malaysia, $100 when it left Taiwan, and then left China embedded in a $300 smartphone, the gross value of trade would be $450. The value added would be just $300: $50 of value added in Malaysia, $50 in Taiwan, and $200 in China. In a 2014 article, Robert C. Johnson of Dartmouth College and Guillermo Noguera of the University of Warwick found that the ratio of value-added trade to gross trade has declined from 85 percent in the early 1970s to about 75 percent today. Put another way, about 25 percent of gross trade could be double counted.

The rise of GVCs also appears to have made trade more responsive to changes in income. Economists refer to this as the income elasticity of trade, that is, the percent change in trade for a 1 percent change in GDP. In a 2002 article, Douglas Irwin of Dartmouth College calculated long-run elasticities for 1870-2000. Between 1870 and 1985, the elasticity fluctuated between about 1 and 1.6, meaning that a 1 percent increase in world GDP was associated with between a 1 percent and 1.6 percent increase in world export volume. Between 1985 and 2000, a period that coincides with the adoption of GVCs, the elasticity increased to 3.39.

Why Trade Busted

The era of rapid trade growth came to a crashing halt in 2008. Between April of that year and May of 2009, total world merchandise trade volumes fell 20 percent, according to the World Trade Monitor published by the CPB Netherlands Bureau of Economic Policy Analysis — the largest decline since the 1930s and the steepest decline in history. (Trade fell by a larger percentage during the Great Depression, but that decline took several years.) The decline in trade was significantly larger than the decline in world industrial production, which fell 12 percent between April 2008 and April 2009 and began to tick back upward in May 2009. World GDP declined about 2 percent in 2009.

[Chart: Growth in World Export Volume and World GDP, annual percent change, 1950-2014. NOTE: Export volume is merchandise exports. GDP is calculated at market exchange rates. Percent change is year-over-year. Shaded areas denote global recessions as defined by the IMF. SOURCE: World Trade Organization, International Trade Statistics 2015]

Trade typically declines by a greater percentage than GDP during a global downturn, according to research by Freund, and then rebounds equally sharply. Trade did rebound significantly in 2010; the volume of world exports increased 14 percent that year, according to the WTO. But unlike in previous periods, trade growth slowed again in 2011, and since then it has barely kept pace with GDP growth (see chart).
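The double counting in Freund’s semiconductor example can be worked through in a few lines of Python, using the article’s illustrative dollar figures. The code structure is only a sketch; it is not how trade statisticians actually compute value-added trade.

```python
# A minimal sketch of gross vs. value-added trade accounting, using the
# article's semiconductor example. Dollar figures come from the article.

# Each stage records the country and the value of the shipment leaving it.
shipments = [
    ("Malaysia", 50),   # semiconductor leaves Malaysia worth $50
    ("Taiwan", 100),    # leaves Taiwan worth $100 after a component is added
    ("China", 300),     # leaves China embedded in a $300 smartphone
]

# Gross trade counts the full value of every border crossing.
gross_trade = sum(value for _, value in shipments)

# Value-added trade counts only the value each country adds at its stage.
value_added = {}
previous = 0
for country, value in shipments:
    value_added[country] = value - previous
    previous = value

print(gross_trade)                 # 450
print(sum(value_added.values()))   # 300
print(value_added)                 # {'Malaysia': 50, 'Taiwan': 50, 'China': 200}
```

The gap between the two totals, $150 here, is the double counting that grows as supply chains lengthen, which is why gross trade figures rose so much faster than value-added trade during the spread of GVCs.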
As of 2013, the most recent year for which the World Bank has data, the exports-to-GDP ratio was stuck at its 2008 level. Why did trade fall so steeply in 2008 and 2009? Largely, it was due to weak demand. About 70 percent of the decline can be explained by changes in demand, according to a 2010 article by Rudolfs Bems of the International Monetary Fund, Dartmouth College’s Johnson, and Kei-Mu Yi of the University of Houston. The drop in demand translated disproportionately to a drop in trade as a result of “composition effects”: During recessions, businesses and consumers tend to cut back more on investment and durable goods, such as new equipment or cars, than they do on consumption goods. But durable goods tend to be much more heavily traded than nondurable goods and also rely more on imported inputs for production. As a result, declines in investment and durable goods purchases can have an outsized effect on trade. E CO N F O C U S | F O U RT H Q U A RT E R | 2 0 1 5  21  Why is Trade Growth Still Slow? Weak demand can explain much of the Great Trade Collapse. But why, after a brief rebound, is trade growth still slow? In part, trade growth might be slow because GDP growth in advanced economies is still relatively slow. Recent research by Patrice Ollivaud and Cyrille Schwellnus, economists at the Organization for Economic Co-operation and Development, found that trade growth since the crisis is close to predicted values based on certain ways of measuring global GDP growth. Weak demand from European countries might be having an especially large effect on measures of global trade growth. Overall, the 19 euro area countries have averaged just 0.8 percent GDP growth between 2010 and 2015, compared with 2.2 percent between 2000 and 2007, according to data from the International Monetary Fund. A fall in European demand has a disproportionate impact on world trade numbers since it reduces both imports from outside the euro area and intra-euro area trade, which is 10 percent of global trade. In Ollivaud and Schwellnus’ analysis, this is because the members of the euro area are treated as separate countries for the purposes of measuring trade, despite the fact that intra-eurozone trade is akin to intra-national trade in that there are no tariffs, the currency is the same, and transportation costs are low. Ollivaud and Schwellnus found that if intra-eurozone trade is excluded, post-crisis global trade intensity (measured as the ratio of import volume to GDP volume) is only slightly below its pre-crisis trend. Weak demand, along with a strong yuan, also has depressed exports from China, and there are signs of longer-term changes in the Chinese economy. “Two dimensions of the Chinese economy have changed,” says the University of Houston’s Kei-Mu Yi. “First, as they’ve become more technologically proficient, they can make a lot of the intermediate inputs themselves, and to the extent they do, their demand for imports would fall. Second, as their economy has gotten bigger, they are selling more domestically rather than exporting.” Just as China’s entry into the global market boosted trade for the world as a whole, a persistent decrease in China’s trade could depress global trade growth.  Have We Reached Peak Trade? Just how much trade elasticity has declined, and when that decline started, is the subject of considerable debate among economists. But some research suggests the process actually started well before the global financial crisis. 
With Cristina Constantinescu and Michele Ruta, also of the World Bank, Mattoo found that the trade elasticity started falling around 2001, to about half of what it was between 1986 and 2000. According to their analysis, this decrease in elasticity explains about half of the trade slowdown in 2012 and 2013. The authors pointed to a slowdown in the adoption of GVCs as one major reason the trade elasticity has decreased. Comparing the elasticity of gross trade to the elasticity of 22  E CO N F O C U S | F O U RT H Q U A RT E R | 2 0 1 5  value-added trade, which has been relatively stable over time, they find the measures have converged since the early 2000s, suggesting a slower pace of vertical specialization. Partly, that’s just mathematical. “When offshoring is new, you end up with this big boost in gross trade as you’re increasing the round-tripping of the parts,” says Freund. “But once global value chains are established, the base is so much bigger that growth is going to look a lot slower.” But it also could reflect that businesses have become slower to adopt GVCs or are pulling back from them altogether. First, the returns might have shrunk, as companies have already adopted GVCs for the products where gains are most likely to be realized. In addition, rising labor costs in developing countries could alter the calculation; hourly manufacturing wages in China, for example, have increased on average 12 percent per year since 2001. Natural disasters such as the Fukushima earthquake also could make managers nervous about having long supply chains. Anecdotally, a number of American companies have been “reshoring” manufacturing to the United States. The Reshoring Initiative, an advocacy group, estimates that about 248,000 jobs that left the United States have returned since 2010. While Constantinescu and her co-authors pinpointed 2000 as the beginning of the decline in the trade elasticity, other research has found that the decline did not occur until the Great Trade Collapse. In this view, the decline is still attributable to a pullback from vertical specialization, but that itself might be for cyclical reasons. Whether vertical specialization — and with it the trade elasticity — will accelerate when and if global demand picks back up remains to be seen. “When you look at what’s been happening in the global economy over the past decade, it’s possible to be a little pessimistic and conclude that the globalization movement since World War II is not just an inevitable force that won’t be stopped,” says Yi. Still, there are factors that could lead to faster trade growth in the future. For example, technology has made it increasingly possible for small and medium-sized enterprises (SMEs) to reach customers around the world. (International organizations generally define a medium-sized enterprise as one with fewer than 250 employees and a small enterprise as one with fewer than 50.) SMEs continue to account for only a small portion of trade relative to their share of businesses in the economy; in the United States, for example, SMEs are more than 99 percent of all businesses, while accounting for only about 15 percent of exports and 10 percent of imports. Policy changes that make it easier for SMEs to participate in international trade, such as raising the threshold above which an importer must pay customs duties, reducing trade compliance costs, or harmonizing postal systems, could help boost trade growth. 
Another potential source of trade growth is trade in services, such as computer programming or accounting. Services trade has grown more quickly than merchandise trade since the 1980s and equaled about 13 percent of world GDP in 2014 — still small relative to services’ 70 percent share of the  world economy. “The scope for liberalization in services is still quite large,” says Mattoo. Reductions in barriers to trade in services, such as the Trade in Services Agreement currently being negotiated by 23 members of the WTO (including the United States), could lead to greater trade growth. Finally, it’s possible that other developing countries could eventually increase their manufacturing base and their participation in world trade. “South Asia, Latin America, and Africa haven’t really participated in the finer and finer international division of labor that has been made possible by global fragmentation,” says Mattoo. “So there is the potential to expand supply chains elsewhere in the world. That could unleash another burst.”  Trade Matters Underlying the debate about whether or not trade growth will accelerate is the question, does the amount of trade matter? “It matters to the extent it improves our standard of living,” says Yi. “What ultimately matters is consumption, how much people are eating, spending, and enjoying life. Trade plays a significant role in increasing consumption. But that doesn’t necessarily require global trade to be growing faster than global GDP.” At the same time, says Yi, “The period when the global economy did really well happened to be the period when globalization increased a lot. There is clearly a link between these two forces, but just how strong is that link?” There is a strong consensus among economists, dating back to Adam Smith, that trade is beneficial because it allows countries to specialize in producing those goods for which they have a comparative advantage. In 1776, Smith wrote, “If a foreign country can supply us with a commodity cheaper than we ourselves can make it, better buy it of them with some part of the produce of our own industry employed in a way in which we have some advantage.” Trade also gives firms access to new markets and can increase productivity via technology spillovers from imports, as well as competitive pressures. Slower trade growth thus could limit an important channel for productivity growth. In addition, research suggests that trade can be an important avenue of economic growth, especially for developing countries. “From that perspective,” says Freund, “trade slowing down bodes ill for the developing countries. We’ve seen a lot of countries that have grown primarily through trade, and if trade is really slowing down it makes it harder to follow  that model.” Between 1981 and 2010, for example, China’s growth pulled nearly 700 million people out of poverty. The presence of GVCs in particular might be important for developing countries, because they allow a country to industrialize without having to develop a diversified manufacturing base from scratch. As Richard Baldwin of the Graduate Institute Geneva described it in a 2011 paper, countries can join a supply chain rather than build an entirely new one. In addition, it has long been conventional wisdom in some branches of political science that trade promotes peace because it increases the opportunity cost of armed conflict. This view underpinned the formation of European Economic Community in the 1950s and continued to motivate European leaders even decades later. 
As Jacques Delors, former president of the European Commission, stated regarding the introduction of the euro, “The argument in favor of the single currency should be based on the desire to live together in peace.” There is some empirical evidence to support this view. For example, between 1950 and 2000, wars occurred only about one-tenth as frequently as between 1820 and 1949. While a variety of political, technological, and economic changes occurred during this period, the decrease could be attributed to the increasing density of international trade networks, according to a 2015 article by Matthew Jackson and Stephen Nei of Stanford University. Using game theory, Jackson and Nei compared alliances based on military incentives alone to alliances augmented by international trade and found that the latter are significantly more stable. The authors also found that the regions with the most armed conflicts, such as central Africa, have relatively few trade ties, which suggests that countries could benefit from more than the development opportunities afforded by trade. Still, trade doesn’t necessarily prevent war. The “first wave” of globalization occurred between 1870 and 1913, and “Many pundits thought economic ties between the European nations were too strong to have a war,” says Yi. “But of course they were wrong.” The many benefits of trade are why the Great Trade Collapse of 2008-2009 — and sluggish trade growth thereafter — attracted so much attention from economists and policymakers. And while economists have largely reached a consensus that the initial collapse was the result of weak demand, there is still considerable debate about why trade growth today remains slow and what it might mean for the future. EF  Readings Bems, Rudolfs, Robert C. Johnson, and Kei-Mu Yi. “Demand Spillovers and the Collapse of Trade in the Global Recession.” IMF Economic Review, December 2010, vol. 58, no. 2, pp. 295-326. Hoekman, Bernard, ed. The Global Trade Slowdown: A New Normal? London: Center for Economic Policy Research Press, 2015. Irwin, Douglas A. “Long-Run Trends in World Trade and Income.” World Trade Review, March 2002, vol. 1, no. 1, pp. 89-100.  Jackson, Matthew, and Stephen Nei. “Networks of Military Alliances, Wars, and International Trade.” Proceedings of the National Academy of Sciences, December 2015, vol. 112, no. 50, pp. 15277-15284. Johnson, Robert C. “Five Facts about Value-Added Exports and Implications for Macroeconomics and Trade Research.” Journal of Economic Perspectives, Spring 2014, vol. 28, no. 2, pp.119-142.  E CO N F O C U S | F O U RT H Q U A RT E R | 2 0 1 5  23  INTERVIEW  Eric Leeper Editor’s Note: This is an abbreviated version of EF’s conversation with Eric Leeper. For the full interview go to our website: www.richmondfed.org/publications  u  EF: You and Jon Faust argued at the 2015 Jackson Hole conference that macroeconomics hasn’t paid enough attention to something you called “disparate confounding dynamics.” Can you explain that view? Leeper: DCDs have always been an important part of actual policy analysis, but they tend not to show up in the formal analyses economists do. Our formal analyses tend to focus on little fluctuations of inflation and the output gap around some long-run steady-state growth path. 
We’re really good at doing that kind of analysis, but that’s probably pretty small potatoes compared to longer-term trends: things like large swings in relative prices, declines in the labor share of income, very low frequency movements in demographics, and I would 24  E CO N F O C U S | F O U RT H Q U A RT E R | 2 0 1 5  throw fiscal policy and a lot of financial imbalances into that category, for example, household debt. Those longer-term trends are what we called “disparate confounding dynamics,” and they are big factors affecting welfare in any economy. The crisis brought all of this stuff to the forefront, and central banks have been paying a lot more attention to these lower frequency phenomena than they had before. But the problem is that they aren’t really incorporated into our models, so it becomes very difficult to say anything precise about them. My view is that central banks have put far too many resources into understanding tiny fluctuations and too few resources into the things that actually matter. EF: Is factoring them into policy analysis, then, necessarily at odds with the idea of following monetary policy rules? Leeper: I think there are some misconceptions about rules. To me, what a rule means is that policy is behaving in some systematic fashion that anchors private sector expectations. That doesn’t mean that policy is following some simple rule, and a simple rule doesn’t necessarily mean that you’re being systematic, because there’s so much else going on in the economy. You might be behaving systematically in response to, say, inflation and an output gap, but nonsystematically in response to all that other stuff. Something like the basic Taylor rule doesn’t really serve as a useful litmus test for what policy is doing in the face of these DCDs, so it’s a little bizarre to me that a lot of central banks routinely calculate what the path of the interest rate would be with a simple Taylor rule as if that’s a useful benchmark. It’s not obvious to me what that’s a benchmark for. Central banks can behave systematically in response to DCDs without having to say, “Here’s our rule.” They can  PHOTOGRAPHY: ERIC RUDD, INDIANA UNIVERSITY  Fiscal policy and monetary policy are distinct government functions. Fiscal policy is the government’s decisions about how to tax and how to spend the proceeds. Monetary policy is often described as the central bank’s actions to influence interest rates and the economy’s supply of money to affect economic conditions. How fiscal and monetary policies interact is a bit murkier. Some aspects of this question have been best answered the hard way, through experience — for example, when central banks print money to finance government spending, it can result in hyperinflation, as most famously experienced in 1920s Germany. Much less well understood are ways in which policymakers might design fiscal and monetary policies to work together to achieve desirable debt and inflation outcomes. Economist Eric Leeper of Indiana University hopes to change that. The area is underdeveloped in part, he says, because the economics profession’s understanding of fiscal policy is alarmingly poor. He also argues that mainstream monetary policy research tends to omit essential components of the economy’s dynamics. As these ideas would suggest, Leeper has been willing to question conventional wisdom when it comes to policy analysis — often with a dose of humor and a passion for spreading ideas to broader audiences. 
Central banks can behave systematically in response to DCDs without having to say, "Here's our rule." They can recognize that, for example, as the population ages, that's going to have certain effects on saving and consumption behavior. Now, whether you can address that in a really formal, quantitative way is an open question. But it's going to have certain effects on real interest rates in the economy that should be brought into the analysis.

During the crisis, it was blatantly obvious that what Jon and I called the NICE [non-inflationary, consistently expansionary] models were of almost no value. While we could jury-rig those models to tell a story, nobody was really persuaded by those stories. Central banks recognized the limitations of those models and brought other considerations in, and that was good. The evidence for that comes from speeches by monetary policymakers, in the Fed and elsewhere, that actually bring these DCDs into the picture. Chair Janet Yellen, for example, has talked about the decline in the labor share of income, and that's a signal that they're thinking about these things.

But a lot of what I hear coming out of the Fed these days, about normalization and so forth, sounds an awful lot like the old New Keynesian way of thinking about things. It's not obvious to me the extent to which the Fed has brought the realities post-crisis into their analysis of how changes in the federal funds rate and interest on reserves affect all interest rates, quantities, and prices in the economy.

EF: What do we know about the extent to which policymakers can deviate from policy rules — defined as you did, meaning systematic policy — without changing beliefs about what the policy rule is?

Leeper: Unfortunately, I don't think we know a lot. I think that's partly because the profession seemed to have responded to the Lucas critique in one of two ways. One was paralysis, which stemmed from the iconoclastic view that, "Oh my God, we can't do policy analysis." That argument was that vector autoregressions (VARs) were of no value for policy analysis because if you change the policy rule, then all the parameters of that estimated model will change and therefore the old parameters are of no value for predicting the effects of that policy. The second reaction was, "OK, now we have all these micro-founded models, which is what Lucas told us we needed, so we can sally forth and fine-tune the way we always wanted to."

A more constructive response to the Lucas critique is to ask exactly the question that you asked: When there are unexpected policy interventions, how can we tell which aspects of the model we should continue to trust? I don't think that kind of analysis has been done very much. Some years ago, Tao Zha and I wrote a paper called "Modest Policy Interventions." We argued that if people really believe that policy can change, then they incorporate that belief into their expectations. We argued that a VAR may be perfectly valid for studying certain kinds of interventions, whereas for other kinds of interventions it wouldn't be. I think we've got to extend that way of thinking to the micro-founded models that everyone claims are "deep."

What Troy Davig and I show in a paper on generalizing the Taylor principle is that if you can move between two kinds of Taylor-type rules, then the nature of the equilibrium changes quite dramatically. Even if you now are under rule A, so long as you put some probability on rule B in the future, those effects are going to spill over through expectations formation into what the current equilibrium looks like. Presumably the data that we observe reflect the beliefs that people have about what future policy rules might look like and the probabilities of them. So from an empirical standpoint, it seems to me that this gives you a better approach to data than just assuming there's one rule and everyone believes it's going to be there forever.

EF: Can you describe the basic concept of "active" versus "passive" fiscal and monetary policies?

Leeper: A general definition of the terms is that an "active" policy authority is free to pursue its objectives and a "passive" authority is constrained by the behavior of the active authority and the price sector. This definition takes on specific meaning depending on the context.

At the most fundamental level, macro policy, by which I mean monetary and fiscal, has two tasks. One is to determine inflation, and the other is to make sure government debt is stable. This isn't an argument that those are the only two things governments do, but if they're not doing those two things, they can't do much else.

There are two different mixes of monetary and fiscal policy that can deliver those two tasks. The first is the way that most of the profession thinks about this: You have a central bank that aggressively targets inflation by raising the nominal interest rate sharply whenever inflation goes up, and then you tell the fiscal authority, "Now it's your job to make sure that any time government debt rises, everyone expects that you're going to raise budget surpluses in the future to finance that debt." That policy mix, which is "active monetary/passive fiscal," will achieve the two goals.

People are still resistant to the idea that there is another way you can achieve exactly those two objectives. The other way flips the assignments. If fiscal policy is active, it sets the surplus largely independent of the state of government debt and the state of inflation — maybe it's trying to do countercyclical policy or fight a war. The price level will end up getting determined through fiscal behavior, and what stabilizes debt is that the central bank lets surprise changes in inflation and bond prices revalue government debt so that the market value of government debt equals the present value of surpluses. It doesn't try to fight inflation.

The primary insight is that the vast majority of government debt that advanced economies issue is nominal. Nominal debt is literally just a claim to more dollars in the future. Real debt — for example, inflation-indexed debt — by contrast, is actually a claim to goods. The government then has to come up with the goods, and the only way for it to do so for sure is by raising taxes. So the original regime — active monetary/passive fiscal — treats debt as real debt and forces fiscal policy to always stabilize it by changing its real backing — primary surpluses — accordingly. The alternative regime, which is passive monetary/active fiscal, recognizes that debt is nominal and that surprise changes in bond prices and in inflation can change the market value of that debt so it's consistent with what people are expecting the real backing of debt will be.

Eric Leeper
➤ Present Positions: Rudy Professor, Indiana University; Research Associate, National Bureau of Economic Research
➤ Selected Past Positions: Senior Economist (1991-1994) and Research Officer (1995), Federal Reserve Bank of Atlanta; Economist, Federal Reserve Board (1987-1991)
➤ Education: B.S. (1980), George Mason University; Ph.D. (1989), University of Minnesota
➤ Selected Publications: "Fiscal Foresight and Information Flows," Econometrica, 2013 (with Todd B. Walker and Shu-Chun Susan Yang); "Generalizing the Taylor Principle," American Economic Review, 2007 (with Troy Davig); "Modest Policy Interventions," Journal of Monetary Economics, 2003 (with Tao Zha); "What Does Monetary Policy Do?" Brookings Papers on Economic Activity, 1996 (with Christopher A. Sims and Tao Zha); "Equilibria Under 'Active' and 'Passive' Monetary and Fiscal Policies," Journal of Monetary Economics, 1991

EF: How does the active/passive framework relate to the "fiscal theory of the price level"? They often get used interchangeably, perhaps incorrectly.

Leeper: All the active/passive framework is saying is that for different values for the parameters of monetary and fiscal policy, the way the price level gets determined is different. In one of them, the active monetary/passive fiscal, things look like they're governed by a quantity theory of money or the whole New Keynesian way of thinking about monetary policy. This other region, where you've got passive monetary/active fiscal, has been dubbed the "fiscal theory of the price level."

I don't like the language of "fiscal theory" or "quantity theory," because it's not as though there has to be only one theory about how the price level gets determined. A broader term that encompasses the two policy mixes would be "the fiscal financing theory of the price level" because ultimately it's how nominal government liabilities get financed that matters for determining the price level.

EF: Central banks generally have mandates to keep inflation low and stable. So it would seem that the central bank would want to be in the active position — for example, to make a credible commitment to stabilizing inflation to force the fiscal policymaker to stabilize debt. Is that not the right way to think about it?

Leeper: People make that argument all the time. It doesn't really hold up very well when you think about the political process. The fact is that Congress can change the Federal Reserve Act, and has, even since the crisis.

I think a more general point is that there is a tendency for economists to want to wall things off. I have a paper where I talk about optimal monetary and fiscal policy, and the first slide is a picture of the Great Wall of China with monetary policy on one side and fiscal policy on the other. That's kind of how our policy institutions have evolved.

The thing is, there's not a lot of theoretical justification for creating these walls. What we're finding more and more is that there's always some role in optimal policy for using surprise inflation to revalue debt and bond prices, so long as there is some maturity to government debt. The mechanism that's at work is the fiscal theory of the price level, that alternative regime of passive monetary/active fiscal.

It's extremely controversial to propose something like that. The basis often used is the political economy concern that really bad monetary outcomes tend to come from having a fiscal authority lean on the central bank to print money. People think there's this slippery slope in that if the Federal Reserve starts to pay attention to debt, then the next thing you know we're going to be the Weimar Republic. And maybe it is a slippery slope once you're in the political realm. But from an academic perspective, if your objective is to arrive at a rule that would be mechanically followed by a central bank, then there's no harm in having fiscal variables enter that rule. That isn't going to lead to a hyperinflation by construction. I think we want to really understand how policies interact, and then we can think about the institutional problem of implementation.

But what has happened by and large in monetary research is it starts with the wall, and so boom, it never goes over to that joint monetary-fiscal world. Central bank models impose priors that don't let the parameters go there, so there's never any horse race about which regime is a better description of the data. The slippery slope is more about following a completely different rule than what the optimal policy is suggesting. And again, the big problem is that independence is fluid. It can go away. If the Fed loses independence, then there is no wall. And then I think you really do have problems.

By the way, the active/passive dichotomy has been useful for my thinking so long as I stay in sufficiently simple models where you get this clear separation between the role of monetary policy and the role of fiscal policy. But once you get into more complicated models, or you start thinking about jointly optimal policy, there is no clear separation. There are elements of both kinds of behavior by both the monetary authority and the fiscal authority.

EF: Have we ever experienced an episode of inflation resulting from a passive monetary/active fiscal phase with no money printing?

Leeper: That also is a hard question, but I think the answer is yes. I've been looking at the recovery from the Great Depression in 1933 when Roosevelt took the United States off the gold standard. Going off the gold standard converted government debt from effectively real debt to nominal debt because the price level under the gold standard was beyond the control of the government. At the same time, the fiscal actions Roosevelt undertook were what nowadays we would call an unbacked fiscal expansion. It was really the first time anybody had said, "Let's increase government spending and not try to balance the budget." Of course, FDR was too smart a politician to actually say that. Instead, he kept the people focused on the need to reflate the economy and get people back to work. He also cleverly created two classes of government expenditures: "regular" and "emergency." He liked to claim he balanced the regular budget, while making clear that the emergency spending was temporary until the economy recovered. This is like a fiscal rule that says the government will run deficits until the price level recovers to some pre-depression level. And the Fed was just keeping the interest rate flat. So it looked a lot like passive monetary/active fiscal. In a paper with Margaret Jacobson and Bruce Preston, we're comparing what happened in the United States, which had a very substantial recovery both in inflation and real activity, to what happened in the United Kingdom, where they went off gold two years before and did not have that huge run up in the price level.
We’re still looking at data, but our conjecture is that they didn’t have the fiscal component that the United States did. What if I turn your question on its head and ask, “What has been going on in the United States for the last seven to eight years?” The federal funds rate has been effectively pegged, we’ve had an explosive increase in reserves, an explosive increase in government debt, and squat has happened to inflation and to expected inflation. Explain that sequence of outcomes in conventional New Keynesian models that do not explicitly include fiscal policy. The conventional model says if you peg the nominal interest rate, you get indeterminacy and you could easily have self-fulfilling inflation or deflation. At a minimum, you’d get volatility in inflation. We didn’t see that. The Fed pegged the interest rate for 20 years, from the 1930s until the 1951 Treasury-Fed Accord, and we didn’t see explosive inflation. So I think there are a lot of anomalies if you try to interpret the data in the conventional active monetary/passive fiscal light.  EF: What do you see as the role for fiscal policy in a situation where monetary policy faces a recession when it is at or near the zero lower bound on nominal interest rates? Leeper: The dominant view seems to be that the only way to get monetary and fiscal policy to work together is to have the central bank print money to buy debt and, therefore, indirectly, to use money to pay for the goods that the government buys. Alternatively, we could think about joint monetary/fiscal plans designed to anchor expectations on desired outcomes. Initially, it seemed that this is what Abenomics [the colloquial name of the policies of Shinzō Abe, Japan’s prime minister] aimed to achieve through its three arrows: monetary expansion, fiscal stimulus, and structural reform. Then the Japanese government capitulated to external pressure and raised the consumption tax in 2014, effectively ending any progress Abenomics made. Now the finance minister, Tarō Asō, is confirming the government’s plan to raise taxes again in 2017. This is a classic example of a government being unwilling to decide if its priority is to get the economy going or to reduce government debt. Think about what this kind of behavior does to fiscal expectations — it sure isn’t anchoring them on expansion. Suppose the government were to announce a fiscal policy of running primary deficits until inflation rises to some threshold, while the central bank continues to avoid raising interest rates sharply in the face of rising inflation. This is the FDR policy of reflation. Once the threshold is achieved, the government could move to running small primary surpluses on average. Theory tells us that this ought to work because it is a way to implement an unbacked fiscal expansion. Of course, one would need to check in a formal model whether this delivers the desired outcomes, but the logic seems to be right. It operates off of a type of fiscal forward guidance because the announcement tells people not to expect the deficits to be offset by subsequent surpluses. Making fiscal policy actions contingent on economic outcomes may seem unusual, but that’s only because fiscal policy is generally so arbitrary. The idea isn’t any different than when the central bank announces it will maintain zero interest rates until some measurable economic outcomes occur, a proposal that several Fed presidents have made in recent years. It probably isn’t politically feasible in any of the austerity-obsessed advanced economies. 
But this obsession, I think, also stems from a misunderstanding about fiscal sustainability. The press and politicians do not seem to appreciate the distinction between the face value and the market value of government debt. Sustainability says that the real market value of outstanding debt must equal the expected discounted present value of primary surpluses. It isn't about the face value of debt. This policy, if it works, would raise expected inflation, which depresses bond prices, and maybe raise current inflation, which depresses the real value of outstanding debt. Measured in the economically relevant way, as the real market value, there would not be a huge run-up in debt. There would be a run-up in nominal debt.

EF: Have the sovereign debt crises of recent years taught us anything about fiscal limits — the point at which financial markets will no longer allow the government to add to its debt burden — that we didn't know previously?

Leeper: One thing the eurozone crisis should've taught us is that one-size-fits-all policies don't make sense. There are these ideas of thresholds for the ratio of debt to GDP, like 90 percent, where you go to hell in a handbasket if you get to 91. Countries can get into trouble at very different levels of debt. Japan is at around 240 percent, if you believe that number, and there's no evidence of any fiscal crisis there.

The idea for fiscal limits that I employ was formalized in the dissertation of a former graduate student of mine, Huixin Bi, who is now at the Kansas City Fed. This approach emphasizes that it's the distance between the level of debt and the fiscal limit that matters for how risky debt is. Because the fiscal limit is a probability distribution and it can shift around a lot — with shocks that are hitting the economy or changes in political party or what have you — you could be thinking you're in pretty good shape and then something happens. That's part of why these crises can come on quickly. But it also works the other way: If you do a certain kind of fiscal reform, that should be pushing the fiscal limit far away and things ought to be safe.

Slovakia has a fiscal council that tried to compute the fiscal limit distribution for their country. They did two things to connect it to their economy. One was they said that productivity shocks have a fat tail — if you get a bad shock, there's a higher probability you'll get another bad shock. The second thing is they geared the expectations about transfers to the population to their demographics. They end up concluding that their country shouldn't go beyond a 40 percent debt-to-GDP ratio, in contrast to the Maastricht Treaty's 60 percent limit for the eurozone. I think that's a good example of the kind of analysis that could be done in a lot of countries. Sure, there are lots of issues with it and you may not buy that number, but at least the thought process is coherent.

For me, what thinking about fiscal limits has done is point to all these things that we need to be thinking about. Some you may be able to quantify, some you may not be able to. But you at least need to be thinking about them.

EF: What are you working on next?

Leeper: I mentioned some historical work that is trying to see if there is a fiscal interpretation to the recovery in 1933 in the United States and contrasting that to the United Kingdom.
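[The sustainability condition Leeper describes above, that the real market value of outstanding debt must equal the expected discounted present value of primary surpluses, can be sketched generically as

$$\frac{Q_t B_t}{P_t} \;=\; E_t \sum_{j=1}^{\infty} m_{t,t+j}\, s_{t+j},$$

where $B_t$ is the face value of nominal debt outstanding, $Q_t$ its average market price, $P_t$ the price level, $s_{t+j}$ future primary surpluses, and $m_{t,t+j}$ a real discount factor. The notation and timing conventions here are a standard textbook sketch, not taken from the interview or from Leeper's papers. Higher expected inflation lowers bond prices $Q_t$, and higher current inflation raises $P_t$, so the left-hand side, the economically relevant measure, can fall even while the face value $B_t$ is rising.]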
I think there’s interesting stuff to be done about the gold standard, a lot of nostalgia about it that is really 28  E CO N F O C U S | F O U RT H Q U A RT E R | 2 0 1 5  misplaced. There are some people who look at the price level in 1823 and in 1870, note they were the same, and conclude the gold standard is therefore price level targeting, but it wasn’t at all. The gold standard wasn’t created for that purpose; it was entirely about international trade. But one aspect that hasn’t been talked about is that there were pretty severe fiscal restrictions associated with the gold standard, and not just that fiscal policy had to be passive and eventually pay off the debt. If the government was short on gold, how was it going to acquire more? It seems like there has to be some sort of tax backing. I’m stunned that there is no canonical modern model of the gold standard that you can turn to. There are two other projects I want to mention. Markus Brunnermeier, who’s at Princeton, and I are both on the Bundesbank Research Council. We’ve proposed the creation of a network to study the interactions among monetary policy, fiscal policy, and financial stability. The idea is to try to bring academics and policymakers together who are using really different methodologies and looking at really different data to try to address some common sets of questions. I keep telling you, “We don’t know the answer to that.” This is designed to identify what the relevant questions are and how we can answer them. The second project is with John Cochrane and Tom Coleman. I don’t really want to call it a project on the fiscal theory, but that’s sort of what it is. I like to think of it as trying to understand how the price level gets determined. We’re trying to bring a disparate group of people together, some known fiscal theorists and some known skeptics of the fiscal theory. We’ve got Tom Sargent, Chris Sims, John Cochrane, Stephen Williamson, Narayana Kocherlakota, and a bunch of other people. Getting young economists and graduate students involved is the key — we want to get young researchers really excited about this. Part of the problem is that we don’t even have the data: You need to have the market value of government debt, the maturity structure of government debt, good measures of the primary surplus, and good measures of real discount rates. You can’t go to FRED and download this stuff. We want to try to build some datasets that would look across countries and time and start to answer some of these questions about which policy regime prevailed. We also want to ask where the holes are in the theory. A huge one is: How is the price level determined in Europe? I don’t have any idea. You’ve got very different inflation processes in all these countries, and what’s determining those? That’s a pretty fundamental question. You might think we would’ve figured that out before creating a monetary union. One objective is to communicate about monetary/fiscal interactions to policymakers and the general public, defined as financial market participants, politicians, etc. We hope that an outgrowth of the project will be essays and monographs that undergraduates and other generally educated folks can understand. EF  THEPROFESSION  Scrambling for Economists: The Ph.D. Job Search BY J E S S I E RO M E RO  “B  ring granola bars or other portable snacks with you in your briefcase and eat between interviews. 
One professor advises that you carry a bottle of water in your bag, in particular because sometimes you will have to take many flights of stairs because of long lines for the elevators. You must make sure to eat and stay hydrated in order to remain responsive, cheerful, and relaxed.” So John Cawley of Cornell University recommends in his guide to the job market, a handbook for young economists who are about to embark on the job interview process that takes place every January at the annual meeting of the Allied Social Sciences Association (ASSA). In the interest of efficiency, the vast majority of first interviews for about 1,200 soon-to-be economics Ph.D.s each year are conducted at the meeting. Employers reserve suites at multiple hotels in the host city, requiring candidates to race from location to location. The process can be stressful, but it isn’t all bad, says Marie Hull, who earned her Ph.D. from Duke University in 2015. “You’re selling to them, but they’re selling to you too, so you get some positive reinforcement,” she says. “And it’s fun to talk about your research.” (She accepted an appointment as an assistant professor at the University of North Carolina at Greensboro.) The job market kicks off in the fall, when both academic and non-academic employers begin posting jobs in earnest. The largest source of job postings is Job Openings for Economists, which the American Economic Association (AEA) started publishing in 1974; since 2002, it has been online-only. Roughly 2,000 new academic jobs and 850 non-academic jobs are posted each year, the majority between September and December, although not all of these are for junior positions. Job hopefuls apply for about 80 jobs on average, although some — including Hull — apply for more than 100. Employers generally start making calls in early December to schedule interviews. After the ASSA meeting, employers invite their top candidates on a “flyout” to the institution, where the prospects give a seminar on their research and meet with their potential colleagues. Most institutions make their offers in February and March, although a few select candidates might receive offers as early as December. From a market design perspective, the job market for young economists is very “thick,” meaning there are many applicants and many employers. But it’s also very congested. Because the marginal cost of sending one more application is low, departments end up receiving hundreds of applications. This is a particular problem for smaller or less prestigious programs because they can’t be sure if applicants are actually interested or if they’re applying as a fallback.  “Take Vanderbilt as an example,” says John Siegfried, an economics professor at the university from 1972 until 2010. “Our general practice was to take our top three candidates and cross them off the list, because we assumed they’d also be getting offers from higher-ranked schools. But what if one of those candidates also played country music as a hobby, and here we are in Nashville? We would skip them when actually they had a unique interest in us.” To help solve this problem, in 2006 the AEA introduced an online signaling mechanism that allows job applicants to send up to two signals of interest to institutions where they’ve applied. 
Sending signals requires considerable strategizing; the AEA advises applicants not to send signals to an employer that would already be fairly confident of their interest (such as a top-ranked academic department) or to an employer that would be likely to interview them anyway. The majority of applicants do send signals, which appears to increase the likelihood of getting an interview by about 7 percent. The signaling mechanism was the brainchild of the AEA’s Ad Hoc Committee on the Job Market, which was chaired by Alvin Roth of Stanford University and included Cawley, Siegfried, and several other economists. The committee also decided to address the issue of the secondary job market: After the primary job market cleared, there were still a number of employers with unfilled positions and candidates without jobs, but they lacked an efficient way to find each other. The committee designed a job market “scramble,” modeled after the medical residency matching system. (Roth received the economics Nobel Prize in 2012 in part for his work to refine residency matching.) The scramble started in 2006. During a week in mid-March, unmatched candidates and employers can register for the scramble; candidates who have accepted an offer aren’t allowed to participate. In 2014, the most recent year for which the AEA has published data, 452 candidates and 61 jobs were registered. Once the scramble goes live in late March, employers can see the unmatched candidates and candidates can see the unfilled positions. To reduce stigma, only registered candidates have access to the site, and they can’t see which other candidates are registered. “If someone knew that their peers could log in and see that they hadn’t found a job yet, they’d be less likely to participate,” says Siegfried. While new Ph.D.s might want a job in the highest-ranked department possible, Cawley advises seeking a job “in which your work is understood and appreciated.” That was the deciding factor for Hull. “At the end of the day, I had to think about where I thought I would fit in and where I could see myself being successful.” EF E CO N F O C U S | F O U RT H Q U A RT E R | 2 0 1 5  29  ECONOMICHISTORY  Conflagration in Baltimore The 1904 disaster was a turning point for U.S. fire prevention  Moments after the ear-splitting explosion, the “fire-proof ” Hurst building was engulfed in flames.  30  E CO N F O C U S | F O U RT H Q U A RT E R | 2 0 1 5  O  n a cold Sunday morning in February 1904, a small ember or spark ignited packing cases in the basement of the “fire-proof” Hurst building in downtown Baltimore. Firefighters arrived quickly and broke down a door, creating a backdraft that whisked superheated air up the building’s unprotected elevator shaft and central staircase. The firemen heard doors slamming shut on the upper floors of the six-story headquarters of John E. Hurst and Company. Then they heard an “ominous rumbling.” The firemen retreated to the street minutes before an ear-splitting explosion blew the roof off the building, showering adjacent structures with flaming debris. As more firefighters rushed to the scene, a hook-and-ladder wagon zoomed past a nearby church, catching the attention of Reverend D’Aubigny, a visitor from France. He was anxious to witness an American conflagration. “That is something I must see,” the reverend said. “We do not have them in Paris.” D’Aubigny, no doubt, was shocked by what he saw. 
The fire raged for 30-plus hours, destroying more than  1,500 buildings on 86 city blocks in the heart of what was then America’s sixth-largest city. Miraculously, the fire killed only four or five people, but it left 35,000 people jobless. Damage estimates reached as high as $100 million — more than $2.6 billion in today’s dollars. In the 19th century and early 20th century, conflagration was a constant threat to American cities, primarily because they had been built more quickly and cheaply than their European counterparts. American fires consumed large amounts of capital each year. One estimate in 1910 put the average annual “fire waste” at $500 per minute in the United States, which would be about $12,340 per minute in today’s dollars. “How absurd it is that we have fires to-day!” wrote Maynard Metcalf in the July 1916 issue of Scientific Monthly. Metcalf, a zoologist at Johns Hopkins University, highlighted Reverend D’Aubigny’s fascination with American conflagrations to demonstrate that U.S. cities were much more vulnerable to massive fires than European cities. “The economic system of fire insurance under private management, so greatly developed, has removed the individual motive for fire prevention,” Metcalf charged. “It is simpler for the individual to gain security against loss by fire by hiring an insurance company to carry his risks than it is for him to prevent loss from fire by building fireproof buildings.” Insurance rates typically did not reward fire-resistant construction in 1904, agrees Marc Schneiberg, an organizational and economic sociologist at Reed College. “So it was not clear who would reap the benefits.” Reformers within the industry had been advocating risk-adjusted rate schedules for years, but many insurance executives failed to see how their companies would benefit from prevention. “As long as they could keep the premium rates and the loss rates in the right proportion, they  PHOTOGRAPHY: J.H. SCHAEFER AND SON (ENOCH PRATT FREE LIBRARY)  BY KARL RHODES  really didn’t care if they had high average losses because they would just raise rates,” Schneiberg says. This attitude infuriated critics who contended that insurance companies made cities more hazardous by not differentiating between safe and unsafe properties, according to Sara Wermiel, a research affiliate of MIT’s Program in Science, Technology, and Society and author of The Fireproof Building: Technology and Public Safety in the Nineteenth-Century American City. But after the Baltimore fire, insurance leaders began to realize that their ability to continually raise rates to pay for conflagrations was declining because of increasing political and competitive pressures. And when the devastating San Francisco earthquake and fire occurred in 1906, the conflagration hazard appeared to be getting much worse. These events “forcibly brought home to insurance engineers that the increasing congestion of values in the larger cities represented a menace both to the public and to the business of fire underwriting,” wrote H.A. Smith, president of the National Fire Insurance Company of Hartford, in the Annals of the American Academy of Political and Social Science in 1927. “Although the business profited because fire is an ever-present possibility in all walks of life, the incineration of material wealth was reaching proportions which threatened economic disaster.”  Baltimore Ablaze After the Hurst building exploded, Baltimore’s fire chief sent an urgent telegram to his counterpart in Washington, D.C: “Big fire here. 
Must have help at once.” Firemen from Washington scrambled onto railroad flatcars for a full-throttle, open-air ride to Baltimore in sub-freezing weather. Cheering crowds welcomed them to Camden Station, but by then, the fire was spreading to the northeast beyond the seven-block area bounded by the streets of Liberty, Lombard, Baltimore, and Hopkins Place. To make matters worse, the D.C. firefighters discovered that the couplings on their hoses did not fit Baltimore’s hydrants. They devised makeshift adapters, but the water pressure in their hoses was severely limited. Firemen arriving in Baltimore from other cities also encountered similar compatibility problems. Initially, Baltimore turned down offers of assistance from other cities, but the fire continued to burn out of control, and at 6 p.m. the mayor sent a desperate dispatch to Philadelphia: “Send all help possible.” Philadelphia responded quickly, as did New York, Wilmington, Del., and 20 smaller mid-Atlantic cities. Dozens of engine companies — assisted by more than 2,000 Maryland National Guardsmen — tried in vain to contain the fire throughout the night. At one point, demolition crews attempted to create fire breaks by blowing up buildings in the fire’s path. One pre-emptive explosion at the Armstrong Shoe factory sent “shoes and boots flying into the night sky,” but the blasts failed to bring down the buildings, according to Peter Charles Hoffer, a history professor at the University  of Georgia and author of Seven Fires: The Urban Infernos that Reshaped America. Instead of stopping the fire, the explosions blew out windows of adjacent buildings, making them more vulnerable to the flames and intense heat, which reached 2,500 degrees in some hot spots. After 10 p.m., the wind shifted and intensified, with gusts exceeding 30 miles per hour. “Had this wind brought with it rain or snow, the fire might have quieted, but the angry current only drove the fire southeast, toward the harbor and the pier warehouses loaded with new sources of fuel,” Hoffer wrote. On Monday, the wind blew the fire south toward the harbor and east toward Jones Falls, a canal that was about 75 feet wide. There were large residential sections east of the falls, so the firemen resolved to stop the conflagration there in what Hoffer called “one of the most remarkable stands in the history of American firefighting.” Hoffer’s personified fire “leaped at targets of opportunity in the lumberyards, malt houses, and dwellings on the east side of the falls. Had it established a beachhead on the east side, all of east Baltimore would have shared the fate of downtown.” But the engine companies held their ground, and by 5 p.m., the fire had essentially run out of fuel. In the days that followed, engineers, architects, and builders converged on Baltimore to study the ruins — especially the remnants of the city’s so-called “fire-proof” buildings. Wermiel credited “a wall of substantial public buildings” for helping to turn the fire south toward the harbor. Metcalf also praised the well-protected O’Neil building, which sustained almost no damage, for helping to turn the fire east toward the falls. The contents of the city’s other “fire-proof” buildings burned “like charcoal in a furnace,” Hoffer wrote, but their superstructures remained intact. While some critics ridiculed these charred skeletons, the visiting architects and engineers concluded that Baltimore’s “fire-proof” buildings performed well considering the intense heat that surrounded them during the inferno. 
“In other words, fire-resistive buildings could help avert conflagration, but not if they stood as islands in a sea of firetraps,” Wermiel concluded. In the months following the Baltimore fire, campaigns for fire-resistant construction and other preventive measures gained momentum. For example, the National Fire Underwriters Board (NFUB) established national building code guidelines in 1905. These guidelines had been in the works for a long time, but the Baltimore and San Francisco fires made cities and states more willing to adopt them, says Dalit Baranoff, an expert in the history of fire insurance and a fellow at the Johns Hopkins Institute for Applied Economics, Global Health, and the Study of Business Enterprise. The NFUB also commissioned a committee of fire experts to assess risks in “conflagration districts” around the country. Many cities made important safety improvements as a result of the committee’s inspections — not just to be safer, but to bring down their insurance rates. E CO N F O C U S | F O U RT H Q U A RT E R | 2 0 1 5  31  As American cities grew “from settlement size to metropolis size, the size of the largest fires grew in proportion,” wrote nuclear-safety consultant William Shields in his Ph.D. dissertation in science and technology studies at Virginia Tech. Vast supplies of wood fueled the problem. “While many of the developed nations of Europe had exhausted forest reserves by the start of the 19th century, in the United States, the almost limitless availability of inexpensive, virgin-forest wood tended to discourage the use of brick, stone, and marble in the construction of dwellings, shops, factories, and warehouses.” Rapid industrialization and urbanization created dense clusters of high-value capital, greatly increasing the demand for fire insurance. And as more and more businesses borrowed money to finance their buildings, equipment, and inventories, fire insurance became indispensable to American commerce because lenders required it. But there were serious flaws in this burgeoning market. Initially, fire insurance companies wrote policies only in their own cities. So when conflagration destroyed New York’s business district in 1835, nearly all of the city’s 26 fire insurers went bankrupt, according to Baranoff. During the next 25 years, fire insurance companies spread their risks geographically — most notably, by expanding westward and working with independent agents who represented multiple companies in local and regional markets. But geographic diversity alone was not enough to protect fire insurers. Low barriers to entry and high levels of competition drove premiums too low for many of them to survive conflagrations. This problem resulted in lots of unpaid claims and worthless policies following huge fires in Chicago and Boston in the early 1870s. Many insurance companies went bankrupt, not only in the stricken cities, but in other areas of the United States as well. After the Civil War, fire insurance companies had tried to address the conflagration hazard by forming a national trust, but “they weren’t able to fix prices on a national level because there were so many local factors involved in setting 32  E CO N F O C U S | F O U RT H Q U A RT E R | 2 0 1 5  rates,” Baranoff says. The companies also experienced many of the classic obstacles to collusion — “free riding, defection, and prisoner’s dilemmas,” Schneiberg wrote, in a 1999 article in Politics & Society. 
The national trust scheme failed, but after the fires in Chicago and Boston, independent agents formed local and regional cartels that were somewhat successful at fixing prices at higher levels. From 1885 to 1910, the cartels stirred up a lot of “anti-compact” legislation in many states, but they began to address the failure of the old system by allowing insurance companies to build up greater reserves. As a result, only a few insurers failed after the massive Baltimore fire, according to Baranoff. Nearly 90 percent of claims got paid, and the city’s economy recovered quickly as money flowed into its burnt district.  Fixing Rates At the time of the Baltimore fire, the fire insurance industry’s business model was simple. Local insurance agents colluded with each other and the companies they represented to fix rates at profitable levels. This collusion allowed insurers to build up enough reserves to survive the next conflagration, and when the smoke cleared, the cartels pushed premiums even higher. But substantial rate hikes following the Baltimore and San Francisco fires met fierce resistance. The long-smoldering political feud between policyholders and insurance interests burst into flames. More states enacted anti-compact laws, and some states resorted to direct government rate setting. Meanwhile, growing competition from factory mutuals — groups of large industrial companies that banded together to self-insure — made it more difficult for insurance cartels to continue raising rates. “Rate wars, conflagrations, and political conflicts generated severe shortages and waves of bankruptcies,” Schneiberg wrote. These problems “served as object lessons or events that increased buyers’ receptivity to arguments for association and enhanced the credibility of insurers’ efforts to reframe price fixing as economically rational.” (Schneiberg  PHOTOGRAPHY: FREDERICK W. MUELLER (LIBRARY OF CONGRESS)  The Conflagration Hazard  This cycloramic (360-degree) photograph shows the smoldering ruins of the Great Baltimore Fire from Hanover Street.  prefers the more neutral term “association” because the cartels had both positive and negative effects on insurance markets.) In particular, “bankers, creditmen, and creditbased trades had strong incentives to join insurers in their search for order.” One turning point in that search for order came in 1910 while a joint committee of the New York State Legislature was investigating the fire insurance industry. Named for its chairman, Edwin Merritt Jr., the Merritt Committee ultimately endorsed the cartels (or associations) and codified three principles that insurance reformers had been advocating for many years — schedule rating, collective bargaining, and fire prevention. For individual policyholders, these interrelated principles linked premiums “to the documented features of a risk.” This approach “promised insureds lower rates (individually) in exchange for their taking steps to reduce hazards,” Schneiberg wrote. Likewise, “insurers began to bargain collectively with civic groups, local officials, and trade boards, offering lower rates (collectively) for specific improvements like passing building codes or razing hazardous stretches of buildings.” “Reduce the fire loss and let the premiums take care of themselves?” Merritt mused, during his questioning of insurance executive Edward Beddall. 
“Yes, sir,” Beddall replied, “the premiums will go down quickly if you reduce losses.” That simple statement became the conciliatory chord that gradually transformed confrontation into cooperation.  When insurance companies began sharing their loss data — via associations — and using actuarial science to make justifiable connections between risks and rates, policyholders and regulators came to realize that cartels (or associations) really could help reduce fire waste. “Legislators, regulators, and consumers began to endorse fire insurance associations in exchange for regulatory oversight,” Schneiberg says. By 1920, more than 20 states had sanctioned associations, and “by 1950, this stance was nearly universal.” During those years, as the rate-setting process became more scientific, transparent, and regulated, property owners started making safety improvements to lower their insurance rates. These preventive efforts — along with the proliferation of electricity, electrical-safety standards, building codes, better firefighting capabilities, and other technological improvements — eliminated the threat of urban conflagration. “The Great Fire of Baltimore was the last of its kind, a citywide fire developing from a single fire source,” Hoffer concluded. “Other cities would burn … but no American city would again allow a single spark to reduce an entire city core to ruins.” EF Except where otherwise noted, accounts of the Baltimore conflagration appearing in this article come from The Great Baltimore Fire by Peter Petersen or Seven Fires by Peter Charles Hoffer.  Readings Beddall, Edward F. “Verbatim Report of His Testimony before the New York Legislative Investigating Committee.” Weekly Underwriter, Jan. 14, 1911. Hoffer, Peter Charles. Seven Fires: The Urban Infernos that Reshaped America, New York: PublicAffairs, 2006. Petersen, Peter B. The Great Baltimore Fire. Baltimore: Maryland Historical Society, 2004.  Schneiberg, Marc. “Political and Institutional Conditions for Governance by Association: Private Order and Price Controls in American Fire Insurance.” Politics & Society, March 1999, vol. 27, no. 1, pp. 67-103. Wermiel, Sara E. The Fireproof Building: Technology and Public Safety in the Nineteenth-Century American City. Baltimore: Johns Hopkins University Press, 2000.  E CO N F O C U S | F O U RT H Q U A RT E R | 2 0 1 5  33  AROUNDTHEFED  Long Hours and High Turnover – For What? BY A A RO N ST E E L M A N  “Allocating Effort and Talent in Professional Labor Markets.” Gadi Barlevy and Derek Neal. Federal Reserve Bank of Chicago Working Paper No. 2016-03, March 2016.  I  t’s commonplace to hear of young employees at law firms, investment banks, and consulting groups working very long hours. Those employees, often termed “associates,” usually work within an “up-or-out” promotion system, meaning that after a set number of years they are either made a “partner,” typically receiving an equity stake in the firm, or they leave for another job, often in a less competitive sector of their profession or in a different profession altogether. Why do firms have such policies for their associates? In a new paper published by the Chicago Fed, Gadi Barlevy and Derek Neal argue that both policies — heavy workloads and up-or-out promotion decisions — serve a common purpose: They help current partners identify new talent that will lead their organizations into the future. 
Partners possess analytical skills required to perform and direct complex work as well as the communication and people skills required to earn and maintain the trust of clients. And because the trust relationship between a partner and a client depends on the partner's ability to reliably provide expert services, each partner can manage only a limited number of clients. Firms grow horizontally by recognizing new partners who can handle such client relationships. This is done by observing how associates perform a large number of tasks over a fixed period and by cycling through new associates when current ones either are promoted or leave. Those who leave reduce their hours significantly in their new jobs, and their wage rates often rise because their skills are desirable in the labor force overall. As mergers have created some very large firms, up-or-out policies have been relaxed at certain organizations, with a limited number of employees retained in nonpartner positions to provide specific services that multiple partners can use.

"Are Millennials with Student Loans Upwardly Mobile?" Stephan Whitaker, Federal Reserve Bank of Cleveland Economic Commentary No. 2015-12, Oct. 1, 2015.

From 2007 to 2015, outstanding student loan debt rose 116 percent and now amounts to $1.19 trillion. Stephan Whitaker of the Cleveland Fed recently analyzed data from the New York Fed/Equifax consumer credit panel to determine how the increase in student loan debt is affecting debtholders' socioeconomic outcomes across a variety of measures. In general, economists expect student loan debt to be correlated with upward mobility because young people with higher education generally are more highly skilled and command higher wages, more than compensating for the debt they have acquired. But some observers have suggested that there may be a critical point at which the debt level becomes too large and upward mobility ceases to be possible. Overall, Whitaker finds that such fears have not proved true. "Millennials" with student debt still are more likely to be upwardly mobile than nonborrowers. But the advantages seem to have declined relative to the previous cohort of student debt holders. In particular, they are less likely to hold a mortgage. Whitaker observes that these trends "may be caused by the debt itself, or they may reflect the relatively weak economic recovery."

"Sentiment of the FOMC: Unscripted." San Cannon, Federal Reserve Bank of Kansas City Economic Review, vol. 100, no. 4, Fourth Quarter 2015, pp. 5-31.

The Federal Open Market Committee (FOMC) releases transcripts of its meetings with a five-year lag. San Cannon of the Kansas City Fed has used text-mining techniques to examine participants' tone and diction over time, from 1977, the first year the FOMC started identifying written records as transcripts, through 2009. As might be expected, when economic growth is above trend, discussions tend to be shorter, contain fewer unique words, and be more positive than when growth is below trend. Overall, Governors tend to make more comments than Reserve Bank presidents or Board staff members, but those comments tend to be shorter, perhaps because they ask a larger number of questions, while Bank presidents, among other things, describe economic conditions in their districts and Board staff members often present prepared comments on specific topics.
The tone of Bank presidents “has been consistently more positive than that of the Governors and staff for most of the period. The staff tone also has been consistently more positive, with smaller variation, than the Governors until recent years,” Cannon notes. In response to a congressional hearing in 1993, the FOMC announced it would start publishing meeting transcripts. Cannon shows that discourse has changed since that decision, offering that participants may have given more carefully worded responses in the 1994-2009 period, knowing that their comments would be made public. In addition, positive economic activity “sparked a less positive tone in FOMC discussions post-publication than pre-publication,” though changes in tone were not uniform across Governors, Bank presidents, and Board staff members. EF  BOOKREVIEW  Inequality: The Long View UNEQUAL GAINS: AMERICAN GROWTH AND INEQUALITY SINCE 1700 BY PETER H. LINDERT AND JEFFREY G. WILLIAMSON PRINCETON: PRINCETON UNIVERSITY PRESS, 2016, 424 PAGES REVIEWED BY HELEN FESSENDEN  I  s rising inequality part and parcel of economic growth? The historical relationship between growth and inequality has long been a thorny question, due in part to the challenge of assembling accurate datasets on incomes and growth over centuries. Now two leading economic historians, Peter Lindert of the University of California, Davis and Jeffrey Williamson, emeritus professor of economics at Harvard University, have done just that to map out the history of American inequality. In Unequal Gains, they offer an ambitious and rigorous attempt to address some long-overlooked questions about U.S. economic development, including whether American inequality has been distinctive compared to other major economies. The authors build on a vast body of work by their peers and predecessors, in some cases challenging findings, in other cases advancing them with richer data. Their biggest innovation is their use of income, rather than expenditures or production, to estimate gross domestic product, on the premise that income gives a more complete account of how earnings were distributed by region, class, gender, and race, as well as providing more accurate readings of inflation. The authors use this framework to assemble “social tables” to compare household income by regions and groups across five benchmark years from 1774 to 1870. Among their findings: U.S. inequality has waxed and waned over the centuries, but its correlation to economic growth has generally been weak. In the past century, U.S. trends coincided with broader global movements — namely, the “great leveling” from World War I to about 1970 and the spike in inequality that followed in nations such as Canada, the United Kingdom, and Australia. But America has also had some exceptional factors affecting inequality, such as slavery and its legacy, as well as the Great Migration from Europe. The social tables provide some startling findings on early American history. In colonial times, America was already a world leader in living standards thanks to abundant natural resources and cheap basic goods. Growth was slow, but the colonial economy was extremely equal. The Revolutionary War, however, upended everything as the economy took a beating from hyperinflation, extreme financial  mismanagement, and the proliferation of interstate tariffs. The 19th century, by contrast, saw both rising growth and rising inequality. As the frontier pushed westward and urbanization and industrialization took root, U.S. 
growth outpaced that of most European economies. Inequality increased as well, but it took different forms in the North and South. In the North, urbanization widened the income gap because high-skilled workers flocked to cities to take advantage of the higher wages. The financial sector took off by mid-century, creating a stratified class of one-percenters. And the initial surges of immigration from Europe, along with a high rate of natural increase, expanded the supply of cheap labor, driving unskilled urban wages down.

In the antebellum South, inequality was more extreme, and growth was slower. Whites who owned property, including slaves, saw increasing returns on property income. But poor whites' income stagnated, and slaves were relatively worse off than in the 18th century. The Civil War and emancipation sharply reversed these disparities: Freed blacks saw their incomes jump from prewar subsistence levels (by about 30 percent) while property-owning whites saw their incomes plummet. The South took a much harder hit from the Civil War, but it also became a more equal economy.

Then, from about 1918 to 1970, the United States saw a remarkable stretch of income convergence driven by market factors, demographics, and policy changes. It began before the New Deal, as immigration slowed and population growth decelerated, tightening labor supply; in addition, the skills premium narrowed as new technologies actually benefited lower-skilled workers. The 1929 crash destroyed much of the new wealth created in the 1920s, leveling incomes further, while New Deal policies reinforced that trend. And broad-based education created a far more skilled workforce. Other industrialized nations also experienced a similar "leveling," especially after World War II.

The global spike in inequality that followed unraveled these gains, but it was especially pronounced in the United States. The financial sector began booming again in the 1980s and 1990s, lifting the highest earners; meanwhile, gains in schooling stalled just as the economy was increasingly rewarding education, hurting the low-skilled.

The authors wisely avoid the usual political minefields with all these questions, but they also are right to note that politics is what makes it so hard to adopt policies that could mitigate inequality today — better and broader education, a more egalitarian inheritance-tax policy, and more sustained financial reforms to tame bubbles and crashes. "The opportunities are there, like hundred dollar bills lying on the sidewalk," they conclude. "Of course, the fact that they are still lying there testifies to the political difficulty of bending over to pick them up." EF

DISTRICTDIGEST

Economic Trends Across the Region

Predicting Economic Activity through Richmond Fed Surveys
BY SONYA RAVINDRANATH WADDELL

Part of the mission of each Federal Reserve Bank is to understand the economy of its district. To this end, the Richmond Fed, responsible for the Fifth Federal Reserve District — which includes the District of Columbia, Maryland, North Carolina, South Carolina, Virginia, and most of West Virginia — conducts surveys and creates indices of economic activity in the manufacturing and service sectors.
These survey indices collect information that is not otherwise available and do so in a more timely fashion than the publicly available regional data. They also collect information on respondents' projections of future economic conditions. These sector surveys complement other sources of information that the Richmond Fed relies on to understand economic activity in the Fifth District, such as data from the Bureau of Economic Analysis and the Bureau of Labor Statistics and anecdotes from business leaders. Recent research sheds light on how the sector surveys perform compared with other economic data.

How the Surveys Work
Both the Richmond Fed manufacturing survey and its service sector survey have been around for more than 20 years. The manufacturing survey began in June 1986 and took its current monthly form in November 1993. The survey asks manufacturing firms questions about shipments of finished products, new order volumes, order backlog volumes, capacity utilization (usage of equipment), lead times of suppliers, number of employees, average work week, wages, inventories of finished goods, and expectations of capital expenditures. The second survey — that of service sector firms — began in 1993 and asks questions regarding revenues, number of employees, average wages, and prices received. For retailers, the survey includes questions on current inventory activity, big ticket sales, and shopper traffic.

The number of survey respondents has varied over time. In 1993, the number of respondents to the combined surveys started at around 250 but then fell to a low of 82 respondents by the end of 2000. The number then stayed between roughly 150 and 200 respondents from 2001 until a large jump in 2011. For the past few years, the number of respondents has hovered around 200 businesses.

There have also been changes to the surveys over time, such as the addition, change, or clarification of questions. For example, wage information was only collected from manufacturing firms starting in 1997. As another example, in 2005 the service sector questionnaire was revised so that retail/wholesale participants received a separate questionnaire that more closely aligned with their business. The questionnaire for other service sector participants was scrubbed of all retail references. To the extent possible, these changes have been mindful of maintaining enough consistency in the questions over time to allow meaningful comparisons across periods.

Once the survey data are collected, indices are created out of the responses (see charts). For each question, respondents are asked about a change in activity: increase, decrease, or no change. Results are reported as diffusion indices that are calculated by subtracting the share of respondents who said that activity decreased from the share who said that activity increased. For example, say 120 contacts respond to the question about employment activity and 78 (65 percent) indicate that employment increased, 24 (20 percent) report that employment decreased, and 18 indicate no change in employment. In this case, the diffusion index for this question would be 65 minus 20, or an index reading of 45.

In addition, both the service sector survey and the manufacturing survey report both current activity and the level of activity anticipated by respondents at their establishments during the next six months (compared with the current month). If the diffusion index is positive, then that is generally interpreted as an expansion in activity, while negative values are interpreted as a contraction.

[Chart: Richmond Fed Employment Diffusion Indices, Seasonally Adjusted. Series: Manufacturing; Retail and Non-Retail Services. NOTE: Data start at November 1993. SOURCE: Fifth District Survey of Manufacturing Activity and Fifth District Survey of Service Sector Activity, Federal Reserve Bank of Richmond]

[Chart: Richmond Fed Average Wage Diffusion Indices, Seasonally Adjusted. Series: Manufacturing; Retail and Non-Retail Services. NOTE: Data start at November 1993. SOURCE: Fifth District Survey of Manufacturing Activity and Fifth District Survey of Service Sector Activity, Federal Reserve Bank of Richmond]

The Richmond Fed is not unique in developing measures of service and manufacturing sector activity; some other Reserve Banks and other institutions also do so, as discussed in a March 2014 Richmond Fed Economic Brief by David Price and Aileen Watson. The methodology the Richmond Fed uses to create the diffusion indices is consistent with that used by other Federal Reserve Banks in their sector surveys as well as that of many other surveys, such as the Institute for Supply Management Index and the Michigan Survey of Consumers Index of Consumer Sentiment.
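To make the arithmetic concrete, here is a minimal Python sketch of the diffusion index calculation just described; the inputs mirror the hypothetical 120-respondent example in the text, and the function name is ours, not anything the Richmond Fed publishes.

```python
def diffusion_index(increase, decrease, no_change):
    """Diffusion index: share of respondents reporting an increase minus the
    share reporting a decrease, expressed in percentage points."""
    total = increase + decrease + no_change
    return 100.0 * (increase - decrease) / total

# Hypothetical month matching the example in the text: 120 respondents,
# 78 reporting higher employment, 24 lower, and 18 unchanged.
reading = diffusion_index(increase=78, decrease=24, no_change=18)
print(round(reading))  # 45; positive readings suggest expansion, negative ones contraction
```

Dividing by the total number of respondents keeps the reading between -100 and 100; a value of zero means increases and decreases were reported in equal measure.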
The Explanatory Power of the Surveys
Most of the survey indices are unique in the information that they provide. For example, there is no source of data on manufacturing new orders or service firm revenues at the state level, which makes it impossible to aggregate that information to the District level. Therefore, it is generally difficult to evaluate the accuracy of the survey indices for new orders or service firm revenues. To the extent that data are available, they are not as frequent and often lag considerably; for example, the annual Census Bureau data on state-level manufacturing shipments and capital expenditures for the year 2014 were not released until December 2015. Thus, it could be valuable to be able to use the sector surveys as leading indicators of these measures of economic activity. But how reliable are the surveys for this purpose?

There are two indicators for which externally provided data exist in a monthly or quarterly series: employment and wages by state and industry. The Quarterly Census of Employment and Wages (QCEW) provides considerable information on employment by state (and sub-state) and industry across the United States. (See "State Labor Markets: What Can Data Tell (or Not Tell) Us?" Econ Focus, First Quarter 2015.)
We can use the QCEW data to understand how our survey measures perform. But to do that, we need to be able to better interpret what the survey index is measuring and how that corresponds to the measures of employment and wages provided through the QCEW.

As already discussed, the Richmond Fed diffusion index for employment in the manufacturing and service sectors captures the share of respondents who said employment increased compared with the previous month minus the share of respondents who said that employment decreased (see chart above). An analogous measure can be developed using the QCEW data, which are derived from the quarterly tax reports submitted to state workforce agencies by employers subject to state unemployment insurance laws. The QCEW data represent about 97 percent of all wage and salary civilian employment in the country and are available down to the county level by industry as granular as the six-digit NAICS code. (An exception is that the data are suppressed if the number of establishments in the county/industry or state/industry combination is small enough to potentially compromise the confidentiality of the reporting firms.)

This aggregate employment growth measured by the QCEW data masks underlying details. For example, a moderately high aggregate employment growth in a particular area may result from a few sectors growing rapidly with other sectors growing more slowly or declining — or it may result from every sector growing at a moderately high rate. In other words, the aggregate growth rate can be interpreted as (approximately) arising from changes in an intensive margin (the difference between the intensity with which expanding sectors grew and with which contracting sectors declined) and changes in an extensive margin (the difference between the fraction of sectors that expanded and the fraction of sectors that contracted).

Since the Richmond Fed diffusion indices give the share of sectors whose employment increased (taking the respondent firms as representing the sector) versus the share of sectors whose employment fell, they are, in effect, providing the extensive margin. In other words, diffusion indices, appropriately scaled, capture the contribution of the extensive margin to changes in the aggregate series of interest, as discussed in a 2015 working paper by Santiago Pinto, Pierre-Daniel Sarte, and Robert Sharp.

To compare the employment or wage diffusion indices to the QCEW aggregate employment data, we need to mathematically decompose the aggregate employment change into an intensive and extensive margin. Doing so shows that variations in employment growth are greatly influenced by changes in the extensive margin (see chart below). One exception was during the Great Recession, when changes in both the intensive and extensive margins seemed to play an equal role. Since 2009, the expansion in aggregate employment in the Fifth District has relied heavily on the extensive margin — that is, on an increase in the share of sectors that increased employment.

[Chart: Decomposition of Employment Growth Rate, annualized month-over-month percent change, 1990-2014. Series: Intensive Margin; Extensive Margin; Aggregate Growth Rate. SOURCE: BLS/Federal Reserve Bank of Richmond]
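The sketch below illustrates the kind of split described above: an extensive piece driven by how many sectors are expanding versus contracting, and an intensive piece driven by how strongly they are expanding or contracting. It is a simplified, equal-weighted illustration rather than the exact decomposition used in the research cited, and the sample growth rates are invented.

```python
import numpy as np

def decompose_growth(sector_growth):
    """Split an equal-weighted aggregate growth rate into extensive- and
    intensive-margin contributions (illustration only; not the exact
    decomposition used in the research cited in the article)."""
    g = np.asarray(sector_growth, dtype=float)
    f_up = np.mean(g > 0)                                # fraction of sectors expanding
    f_down = np.mean(g < 0)                              # fraction of sectors contracting
    mean_up = g[g > 0].mean() if f_up > 0 else 0.0       # average growth among expanders
    mean_down = -g[g < 0].mean() if f_down > 0 else 0.0  # average decline, in absolute value

    aggregate = g.mean()
    extensive = (f_up - f_down) * 0.5 * (mean_up + mean_down)  # who is growing vs. shrinking
    intensive = (f_up + f_down) * 0.5 * (mean_up - mean_down)  # how strongly they grow or shrink
    return aggregate, extensive, intensive

# Invented month-over-month growth rates (percent) for a handful of sectors.
agg, ext, inten = decompose_growth([1.2, 0.4, -0.3, 0.8, -1.0, 0.2])
print(round(agg, 4), round(ext + inten, 4))  # the two margins sum back to the aggregate
```

Under this split, a diffusion index tracks the first term; when that term accounts for most of the variation in aggregate growth, as the article finds for Fifth District employment, the index is a reasonable proxy for the aggregate series.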
The importance of the extensive margin outlined above suggests that diffusion indices may serve as a close indication of aggregate growth. Can we use the actual employment data to validate the Richmond Fed sector indices as early proxies for direct measures of employment changes? In particular, if we use this extensive margin to develop a "synthetic" diffusion index from the employment data, how do the Richmond Fed sector indices correlate with the synthetic diffusion index?

As it turns out, the survey-based diffusion index produced in real time by the Richmond Fed lines up remarkably well with the synthetic diffusion index produced from the later employment data (see chart below). The correlation for the entire time period is 0.68, and from June 2002 through December 2014 it is 0.77. This is a particularly useful finding since the surveys conducted by the Richmond Fed are much timelier than the aggregate employment data. The Richmond surveys are available almost in real time: The survey period ends on the third Wednesday of every month, and the survey results are released on the fourth Tuesday of every month. On the other hand, the QCEW data are released with a six-month lag, so that the data through the end of 2015, for example, will not be available until June 2016.

This analysis shows that given the role of the extensive margin in understanding aggregate employment growth, the real-time diffusion index developed by the Richmond Fed is a reasonable measure of employment activity and can be relied on to understand employment changes in the Fifth District in a timely fashion.

[Chart: Fifth District Survey Index vs. Synthetic Diffusion Index, 1990-2014. NOTE: Fifth District Survey Index data begins at the end of 1993. The Fifth District Survey Index is a diffusion index calculated from the survey responses. The synthetic diffusion index is calculated from the BLS employment data after decomposing growth into an intensive and extensive margin. SOURCE: BLS/Federal Reserve Bank of Richmond]
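The validation step amounts to building the employment-based "synthetic" index and correlating it with the published survey index over the months where both exist. A minimal sketch under assumed inputs (the file names, column layout, and monthly alignment are placeholders, not the actual Richmond Fed or QCEW files):

```python
import pandas as pd

def synthetic_diffusion_index(sector_growth: pd.DataFrame) -> pd.Series:
    """Extensive-margin (synthetic) diffusion index: for each month (row),
    the share of sectors (columns) with rising employment minus the share
    with falling employment, in percentage points."""
    rising = (sector_growth > 0).mean(axis=1)
    falling = (sector_growth < 0).mean(axis=1)
    return 100.0 * (rising - falling)

# Hypothetical inputs: monthly QCEW-style sector growth rates and the published
# survey employment index, both indexed by month (file names are placeholders).
qcew_growth = pd.read_csv("qcew_sector_growth.csv", index_col=0, parse_dates=True)
survey_index = pd.read_csv("survey_employment_index.csv", index_col=0, parse_dates=True).squeeze()

synthetic = synthetic_diffusion_index(qcew_growth)
overlap = pd.concat([survey_index, synthetic], axis=1, keys=["survey", "synthetic"]).dropna()
print(overlap["survey"].corr(overlap["synthetic"]))  # the article reports 0.68 over the full sample
```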
Wages and Other Measures
The reliability of a diffusion index in understanding "true" economic activity hinges on the relative contribution of the extensive margin of activity to overall growth. For employment, that contribution was significant. For other measures of activity, however, it might not be as significant. For example, another indicator that is available both in aggregate growth terms and through the Richmond Fed survey questionnaire is a measure of wage changes.

When we follow the same exercise for wages as for employment, however, we find that the wage index fails to effectively track aggregate wage growth in the District. This is because changes in wages over time are driven to a greater extent by the intensive margin — the percent change in wages in sectors whose wages are changing in a given month — rather than the extensive margin, or the number of sectors whose wages are either increasing or decreasing in a given month. Therefore, even if a survey-based index were to exactly mimic its "true" synthetic counterpart constructed with data observed ex post, it may perform poorly in tracking the aggregate series of interest. As a result, the validity of each individual survey question's index in predicting overall economic activity in that area will vary — an important factor to consider if the measure is to be used as either a leading indicator or a sole indicator of economic activity.

What's Next: State-Level Indices?
Although the survey-based diffusion index for the Fifth District aids in understanding economic activity at the District level, it is even more useful to understand economic activity at the state level, especially given the role that state boundaries play in economic activity and policymaking. In light of the dearth of state-level data, the manufacturing and service sector surveys have the potential to serve as a useful source of information that is not otherwise available at the state level (such as manufacturing new orders, retail shopper traffic, and projections of future activity) in a timely fashion.

How reliable are the survey-based diffusion indices of employment as state-level indicators for the Fifth Federal Reserve District states? And how well do the Fifth District indices perform in capturing economic activity at the state level in each Fifth District state?

The level of representation of states in the Fifth District survey and in the QCEW data is generally consistent with economic activity in that state's manufacturing or service sector. The bulk of economic activity in our measures is occurring in North Carolina and Virginia and then, to a slightly lesser extent, Maryland and South Carolina (see table). The contributions of the District of Columbia and West Virginia are considerably lower.

[Table: Representation of Fifth District States and Industries in Employment Data and in Richmond Fed Surveys. Columns: number of industries (and Fifth District share) in QCEW data in authors' analysis; share of respondents to Richmond Fed survey in 2014 Q4 (representative quarter), manufacturing sector survey and service sector survey.]
State                   Industries in QCEW data   Manufacturing Survey   Service Sector Survey
District of Columbia    30 (3.4%)                 0%                     5%
Maryland                157 (18%)                 12%                    27%
North Carolina          207 (24%)                 37%                    31%
South Carolina          163 (19%)                 15%                    10%
Virginia                190 (22%)                 30%                    18%
West Virginia           121 (14%)                 6%                     9%
Fifth District Total    868 (100%)                100%                   100%
SOURCE: BLS and Federal Reserve Bank of Richmond

The number of responses at the state level is not enough to support state-level diffusion indices. We might be able to rely on the Fifth District survey index to accurately track the performance of each individual state in the District, however. To better understand this performance, we decomposed state employment growth in the QCEW data into extensive and intensive margins and then constructed state-level employment diffusion indices. The analysis shows notable differences in the relative importance of the intensive and extensive margin across states. The extensive margin explains the bulk of variations in state employment growth in North Carolina, South Carolina, and Virginia, but it seems less relevant to employment growth in the District of Columbia, Maryland, and West Virginia.

To understand how the Fifth District index might help us understand state economic activity, then, we developed synthetic indices for each state and compared them to the Fifth District synthetic index, as well as each state index to each other. These correlations differ notably across states (see table). Again, the diffusion indices for Virginia and North Carolina most closely follow the performance of the Fifth District index, with a correlation of approximately 0.9. The correlation for South Carolina and Maryland was also quite high (0.85 and 0.82, respectively). The correlations between the state and Fifth District indices are much lower for the District of Columbia and West Virginia.

Correlation Matrix: State and Fifth District Diffusion Indices (Extensive Margin)
      5E     DC     MD     NC     SC     VA     WV
5E    1.00
DC    0.45   1.00
MD    0.82   0.42   1.00
NC    0.90   0.29   0.62   1.00
SC    0.85   0.32   0.59   0.76   1.00
VA    0.91   0.39   0.74   0.74   0.71   1.00
WV    0.54   0.19   0.38   0.41   0.28   0.47   1.00
SOURCE: Federal Reserve Bank of Richmond
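Extending the comparison across states is mostly bookkeeping: given a synthetic extensive-margin index for each state plus the Fifth District aggregate as columns of one monthly DataFrame (again, the file and column names are hypothetical), the correlation matrix in the table above comes from a single pandas call.

```python
import pandas as pd

# Hypothetical monthly panel: one column per synthetic state index plus the
# Fifth District aggregate ("5E"); the file name and column labels are placeholders.
indices = pd.read_csv("synthetic_state_indices.csv", index_col=0, parse_dates=True)

# Pairwise correlations of the extensive-margin indices, as in the table above.
print(indices[["5E", "DC", "MD", "NC", "SC", "VA", "WV"]].corr().round(2))
```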
In the same way that we should be careful in creating diffusion indices for indicators where the extensive margin does not explain the bulk of the change in aggregate growth, we should be careful in relying too heavily on diffusion indices for regions or states where the extensive margin does not account for most of the change in the economic indicator.

Conclusion
The Richmond Fed relies on diffusion indices developed through its surveys of manufacturing and service sector growth to understand economic activity in the Fifth Federal Reserve District. These indices are reliable only to the extent that change in the series of interest is driven by changes in the extensive margin; to the extent that changes are the result of a few firms or sectors driving aggregate growth, the indices will be less effective at measuring "true" change. Research indicates that for a series such as employment and for particular states, the survey-developed diffusion index is a good measure of economic activity in a much timelier fashion than other available data. Further analysis is required to better understand the efficacy of other survey measures or of these survey measures for different areas of the Fifth District. EF

For a more in-depth discussion of this topic, see Santiago Pinto, Sonya Ravindranath Waddell, and Pierre-Daniel Sarte, "Monitoring Economic Activity in Real Time Using Diffusion Indices: Evidence from the Fifth District," Federal Reserve Bank of Richmond Economic Quarterly (forthcoming).

State Data, Q2:15

                                       DC       MD       NC       SC       VA       WV
Nonfarm Employment (000s)              766.5    2,654.5  4,228.5  1,994.2  3,832.2  765.5
  Q/Q Percent Change                   0.3      0.5      0.6      0.4      0.4      -0.5
  Y/Y Percent Change                   2.0      1.4      2.4      2.4      1.2      -0.6
Manufacturing Employment (000s)        1.1      103.8    460.2    235.6    232.9    47.7
  Q/Q Percent Change                   6.5      0.0      0.3      0.6      0.2      -0.7
  Y/Y Percent Change                   10.0     0.4      2.9      2.4      0.3      -0.6
Professional/Business Services
Employment (000s)                      161.8    428.8    585.1    260.3    693.1    66.7
  Q/Q Percent Change                   0.8      0.6      1.0      2.7      0.8      -1.4
  Y/Y Percent Change                   3.3      1.3      2.8      2.4      1.9      0.4
Government Employment (000s)           237.9    503.3    720.8    359.7    711.9    152.0
  Q/Q Percent Change                   0.2      0.0      0.2      0.2      0.2      -0.1
  Y/Y Percent Change                   1.5      0.0      0.8      1.2      0.2      -0.7
Civilian Labor Force (000s)            387.5    3,144.1  4,751.3  2,250.4  4,225.2  784.7
  Q/Q Percent Change                   0.5      0.2      0.6      0.1      -0.1     0.4
  Y/Y Percent Change                   3.2      0.8      1.8      2.3      -0.6     -0.7
Unemployment Rate (%)                  7.0      5.2      5.8      6.1      4.5      7.1
  Q1:15                                7.3      5.4      5.7      6.5      4.8      6.7
  Q2:14                                7.8      5.8      6.4      6.2      5.3      6.7
Real Personal Income ($Bil)            43.7     307.3    371.6    169.4    397.9    62.4
  Q/Q Percent Change                   0.7      0.7      0.8      1.0      1.0      0.4
  Y/Y Percent Change                   4.0      3.9      4.8      4.7      3.9      2.1
Building Permits                       1,682    4,885    14,205   8,889    8,672    892
  Q/Q Percent Change                   119.0    54.1     21.2     30.5     33.1     53.5
  Y/Y Percent Change                   780.6    24.4     14.6     32.8     12.2     73.5
House Price Index (1980=100)           727.4    435.2    324.3    331.2    421.9    232.7
  Q/Q Percent Change                   1.8      1.2      1.5      1.9      1.1      3.7
  Y/Y Percent Change                   5.3      3.1      4.8      5.7      3.0      4.2

NOTES:
1) FRB-Richmond survey indexes are diffusion indexes representing the percentage of responding firms reporting increase minus the percentage reporting decrease. The manufacturing composite index is a weighted average of the shipments, new orders, and employment indexes.
2) Building permits and house prices are not seasonally adjusted; all other series are seasonally adjusted.
3) Manufacturing employment for DC is not seasonally adjusted.

SOURCES:
Real Personal Income: Bureau of Economic Analysis/Haver Analytics
Unemployment Rate: LAUS Program, Bureau of Labor Statistics, U.S. Department of Labor/Haver Analytics
Employment: CES Survey, Bureau of Labor Statistics, U.S. Department of Labor/Haver Analytics
Building Permits: U.S. Census Bureau/Haver Analytics
House Prices: Federal Housing Finance Agency/Haver Analytics

For more information, contact Michael Stanley at (804) 697-8437 or e-mail michael.stanley@rich.frb.org

[Charts, Second Quarter 2004 - Second Quarter 2015: Nonfarm Employment (change from prior year); Unemployment Rate; Real Personal Income (change from prior year); Nonfarm Employment, Major Metro Areas (change from prior year); Unemployment Rate, Major Metro Areas; Building Permits (change from prior year); FRB-Richmond Services Revenues Index; FRB-Richmond Manufacturing Composite Index; House Prices (change from prior year). Fifth District and United States series, with Charlotte, Baltimore, and Washington shown in the metro-area panels]

Metropolitan Area Data, Q2:15

                               Washington, DC   Baltimore, MD   Hagerstown-Martinsburg, MD-WV
Nonfarm Employment (000s)      2,591.8          1,370.6         105.2
  Q/Q Percent Change           2.2              2.6             2.3
  Y/Y Percent Change           1.9              1.4             1.7
Unemployment Rate (%)          4.6              5.6             5.6
  Q1:15                        4.7              5.7             5.7
  Q2:14                        5.1              6.2             6.2
Building Permits               7,173            2,412           309
  Q/Q Percent Change           47.6             84.8            53.0
  Y/Y Percent Change           34.3             39.7            49.3

                               Asheville, NC    Charlotte, NC   Durham, NC
Nonfarm Employment (000s)      182.8            1,105.6         296.1
  Q/Q Percent Change           3.0              1.9             0.7
  Y/Y Percent Change           3.0              3.8             2.2
Unemployment Rate (%)          4.5              5.5             4.9
  Q1:15                        4.2              5.3             4.6
  Q2:14                        4.9              6.2             5.0
Building Permits               627              4,996           758
  Q/Q Percent Change           68.1             19.8            -27.4
  Y/Y Percent Change           57.9             39.1            -7.1

                               Greensboro-High Point, NC   Raleigh, NC   Wilmington, NC
Nonfarm Employment (000s)      356.2            579.9           120.4
  Q/Q Percent Change           1.7              2.3             3.9
  Y/Y Percent Change           2.2              3.8             3.9
Unemployment Rate (%)          5.8              4.7             5.4
  Q1:15                        5.5              4.5             5.1
  Q2:14                        6.7              5.0             6.2
Building Permits               565              3,579           324
  Q/Q Percent Change           34.8             20.2            -16.5
  Y/Y Percent Change           1.6              21.4            -48.5
                               Winston-Salem, NC   Charleston, SC   Columbia, SC
Nonfarm Employment (000s)      258.2               334.3            383.3
  Q/Q Percent Change           1.5                 2.7              1.1
  Y/Y Percent Change           2.0                 3.1              2.6
Unemployment Rate (%)          5.4                 5.7              6.1
  Q1:15                        5.1                 5.7              6.0
  Q2:14                        6.1                 5.3              5.6
Building Permits               538                 1,730            1,426
  Q/Q Percent Change           13.3                39.2             48.4
  Y/Y Percent Change           -1.8                34.9             38.7

                               Greenville, SC      Richmond, VA     Roanoke, VA
Nonfarm Employment (000s)      400.4               648.1            161.0
  Q/Q Percent Change           1.9                 2.1              1.4
  Y/Y Percent Change           3.0                 2.0              0.1
Unemployment Rate (%)          5.9                 5.2              4.9
  Q1:15                        5.7                 5.0              4.7
  Q2:14                        5.5                 5.6              5.3
Building Permits               1,544               1,380            N/A
  Q/Q Percent Change           -2.3                43.6             N/A
  Y/Y Percent Change           6.0                 -3.4             N/A

                               Virginia Beach-Norfolk, VA   Charleston, WV   Huntington, WV
Nonfarm Employment (000s)      768.3               123.9            141.1
  Q/Q Percent Change           2.5                 1.0              1.5
  Y/Y Percent Change           0.7                 -1.5             -0.9
Unemployment Rate (%)          5.4                 6.7              6.4
  Q1:15                        5.2                 6.4              6.2
  Q2:14                        5.8                 6.5              6.7
Building Permits               1,841               70               81
  Q/Q Percent Change           53.9                -12.5            161.3
  Y/Y Percent Change           15.4                1,300.0          118.9

NOTE: Nonfarm employment and building permits are not seasonally adjusted. Unemployment rates are seasonally adjusted.

For more information, contact Michael Stanley at (804) 697-8437 or e-mail michael.stanley@rich.frb.org

OPINION

The Importance of Researcher Independence
BY KARTIK ATHREYA

Writers, apparently, are often asked, "Where do you get your ideas?" Economists are rarely asked this question, in my experience. But I would like to say a little bit about it anyway to convey something of how we work within the Richmond Fed's research department. Our economists — generally, there are a couple of dozen of us in the department — conduct original research in a variety of areas, including macroeconomics, monetary policy, banking, payments systems, and labor markets. In addition, within our team, there is a group of regional economists who study economic trends in our district and carry out research on regional economic issues.

Our economists get their research ideas from many places, but with the exception of some work that we do specifically to support President Lacker in considering issues before the Federal Open Market Committee, they are largely free to follow their own instincts in setting the course of their research. I'll share a couple of examples to illustrate how it works.

The first is what we call the Non-Employment Index, or NEI, a measure of the health of the labor market that we began posting on our website late last year. Unlike the standard unemployment rate, the NEI takes into account individuals who are considered to be out of the labor force as well as the likelihood that unemployed workers will return to work. It had its origins several years ago when some of our economists were looking at the standard measures of the labor market's performance and noted that those numbers were pointing in opposite directions: The unemployment rate had started to improve, while the labor force participation rate was still in decline.

Motivated in part by this striking pattern, Marianna Kudlyak, one of our staff economists at that time, was working with Fabian Lange of McGill University on how the duration of a worker's lack of employment affected his or her chance of finding a job — and how this information could be the basis of a measure of individuals' connections to the labor market. Marianna drafted a memo about this work in early 2014 for President Lacker and other economists participating in his preparatory discussions ahead of the Federal Open Market Committee meeting in March.
She showed it to a colleague here, Andreas Hornstein, who was enthusiastic and suggested that her work could be the basis of an index of the labor market as a whole, one that would give policymakers a better view of the labor market for some purposes than either the unemployment rate or the labor force participation rate. The three of them — Marianna, Fabian, and Andreas — then collaborated on an article for our scholarly economics journal, Economic Quarterly, setting out the methodology of the new index that became the NEI. Today, the Hornstein-Kudlyak-Lange Non-Employment Index, as it's more formally known, is part of the economic data disseminated through the St. Louis Fed's FRED database alongside more traditional labor-force metrics.

Another example of intellectual entrepreneurship here is the work by my colleague Nicholas "Nico" Trachter, whose interests include looking at how retailers set and change their prices. This is an area of obvious interest to the Fed, given our mandate to control the price changes associated with inflation. In 2013, as a professor in Rome, Italy, Nico learned about a large database of retail prices. Roughly at the same time, he met Guido Menzio, a University of Pennsylvania economist known for his work on the effects of market imperfections on macroeconomic behavior, at a lecture. They've been co-authors since then on several major pieces of work that have shed light on "price dispersion," that is, differences in prices among sellers of the same item. This research is part of a strand that is now influencing economists' view of the power of pricing behavior in driving aggregate economic activity.

What these two lines of research have in common is that they came from the bottom up. That's typical here, and there are a few reasons why we operate that way. The first is that the self-directed environment that we've had for a long time helps us compete in the marketplace for economists with top-tier research universities, where such an environment is standard. The second is that we think our economists are the best judges of where they can add value to current economic knowledge. Moreover, beyond the fact that policing research agendas is a good way to drive out the creative and productive, there isn't any real need for us to do so: We make sure to hire economists interested in the kinds of questions that are important at a Federal Reserve Bank. And finally, to ensure quality, we use the standards set by the economics profession at large, through the thresholds for publication at high-quality journals. This is a test that the Bank's research economists are expected to meet, and importantly, it's how we know that the policy advice we get is coming from the right people.

So while we believe in researcher independence, economists in our department have high expectations to meet. What we expect of each other is that we're delivering topflight work to assist President Lacker in formulating his policy positions and that the rest of our research — the research we share with the world through journal articles, working papers, and our Economic Brief series — is meeting the tough standards of our peers in the economics profession. EF

Kartik Athreya is executive vice president and director of research at the Federal Reserve Bank of Richmond.

NEXTISSUE

What Happened to Wages?
Wage growth has been sluggish in recent years despite a steadily strengthening job market — a fact that has puzzled economists.
Potential explanations have included hidden slack in the labor market, the lingering effects of employers’ inability to cut wages during the recession, and the changing mix of skills needed from workers. An equally pressing question is the extent to which the Fed’s policies have any power to improve wage prospects for workers.  The Missing New Banks  Since 2010, there has been an unprecedented collapse in the number of newly formed banks. Is this collapse being driven by the costs to comply with new financial regulations, by economic or technological changes that have hurt the profitability of small banks relative to large ones, or by some other factors? Since community banks have traditionally been major lenders to small businesses, economists are exploring whether this trend could have implications for the broader economy as well.  The Nuclear Option  This year, the Tennessee Valley Authority plans to connect a new nuclear reactor to the power grid, something that hasn’t happened in the United States for 30 years. Four more reactors are under construction in the Southeast. What are the economic forces driving multibillion-dollar decisions to build — or not to build — nuclear power plants?  Federal Reserve In recent years, a number of central banks have adopted negative interest rates, including the European Central Bank and the Bank of Japan. Economists have long assumed that nominal interest rates could not go below zero because depositors would simply choose to hold cash instead. But recent experience suggests that negative rates are possible — at least to a point. How do they work, and what do they mean for monetary policy?  The Profession Much has been written about disagreements among economists. But on many important policy issues, there is a pretty strong professional consensus — a consensus that often does not extend to the public. On which issues do economists and non-economists most strongly disagree and why?  Interview Erik Hurst of the University of Chicago on how looking exclusively at aggregate national data can mask important macroeconomic developments, why employment levels have been low following the Great Recession, and the factors determining entrepreneurship and self-employment.  Visit us online: www.richmondfed.org •	To view each issue’s articles and Web-exclusive content •	 To view related Web links of 	 additional readings and 	 references •	 To subscribe to our magazine •	To request an email alert of our online issue postings  Federal Reserve Bank of Richmond P.O. Box 27622 Richmond, VA 23261  Change Service Requested  To subscribe or make subscription changes, please email us at research.publications@rich.frb.org or call 800-322-0565.  Richmond Fed Research 2015  Working Papers Series  Economists at the Federal Reserve Bank of Richmond conduct research on a wide variety of economic issues. Before that research makes its way into academic journals or our own publications, it is often posted on the Bank’s website as part of the Working Papers series. December 2015, No. 15-16  What is the Monetary Standard?  September 2015, No. 15-10  May 2015, No. 15-05  Robert L. Hetzel  Approximating Time Varying Structural Models With Time Invariant Structures  November 2015, No. 15-15  Fabio Canova, Filippo Ferroni, and Christian Matthes  Did the Financial Reforms of the Early 1990s Fail? A Comparison of Bank Failures and FDIC Losses in the 1986-92 and 2007-13 Periods  August 2015, No. 15-09  Eliana Balla, Edward S. Prescott, and John R. 
Walter  On the Distribution of College Dropouts: Wealth and Uninsurable Idiosyncratic Risk Ali K. Ozdagli and Nicholas Trachter  Learning About Consumer Uncertainty from Qualitative Surveys: As Uncertain As Ever  November 2015, No. 15-14  Santiago Pinto, Pierre-Daniel G. Sarte, and Robert Sharp  Andra C. Ghent and Marianna Kudlyak  June 2015, No. 15-08  Intergenerational Linkages in Household Credit  May 2015, No. 15-04R  Should Greece Remain in the Eurozone? (Revised July 2015) Robert L. Hetzel  Innovation, Deregulation, and the Life Cycle of a Financial Service Industry  March 2015, No. 15-03  Fumiko Hayashi, Bin Grace Li, and Zhu Wang  Discussion on “Scarcity of Safe Assets, Inflation, and the Policy Trap” by Andolfatto and Williamson  Pooyan Amir-Ahmadi, Christian Matthes, and Mu-Chun Wang  June 2015, No. 15-07  Huberto M. Ennis  September 2015, No. 15-12R  Kartik B. Athreya, Felicia Ionescu, and Urvi Neelakantan  Fixed Prices and Regulatory Discretion as Triggers for Contingent Capital Conversion: An Experimental Examination  Borys Grochulski and Yuzhe Zhang  June 2015, No. 15-06  Douglas Davis and Edward S. Prescott  November 2015, No. 15-13  Measurement Errors and Monetary Policy: Then and Now  Optimal Liquidity Regulation With Shadow Banking (Revised March 2016) September 2015, No. 15-11  Tales of Transition Paths: Policy Uncertainty and Random Walks Christian Matthes and Josef Hollmayr  Stock Market Investment: The Role of Human Capital  Optimal Banking Contracts and Financial Fragility Huberto M. Ennis and Todd Keister  February 2015, No. 15-02  January 2015, No. 15-01  Equilibrium Price Dispersion Across and Within Stores Guido Menzio and Nicholas Trachter  To access the Working Papers and other Fed resources visit: http://www.richmondfed.org