ECON FOCUS
FEDERAL RESERVE BANK OF RICHMOND
VOLUME 22, NUMBER 1, FIRST QUARTER 2017

The Missing Boomerang Buyers: Does it matter whether those who lost their homes during the crisis come back to the housing market? | Self-Driving Trucks and Jobs | The Future of Small Colleges | Interview with Janet Currie

COVER STORY
The Missing Boomerang Buyers — Does it matter whether people who lost their homes during the foreclosure crisis come back to the housing market?

FEATURES
Robots for the Long Haul — There are 1.8 million heavy truck and tractor-trailer drivers in the United States. Will self-driving trucks soon mean the end of many of those jobs?
Too Small To Succeed? — The hard facts of education economics are putting some small colleges at risk

DEPARTMENTS
Message from the Interim President / A Focus on Public Engagement
Upfront / Regional News at a Glance
Federal Reserve / The Fed's "Tequila Crisis"
Jargon Alert / Business Cycles
Research Spotlight / How Does Finance Fuel Growth?
Policy Update / Fighting Fund Runs
Interview / Janet Currie
Economic History / Lead Paint: A Level of Concern
Book Review / An Extraordinary Time: The End of the Postwar Boom and the Return of the Ordinary Economy
District Digest / Business Dynamics in the United States and the Fifth District
Opinion / Publicly Provided Data and the Fed

DIRECTOR OF RESEARCH: Kartik Athreya
EDITORIAL ADVISER: Aaron Steelman
EDITOR: Renee Haltom
SENIOR EDITOR: David A. Price
MANAGING EDITOR/DESIGN LEAD: Kathy Constant
STAFF WRITERS: Helen Fessenden, Jessie Romero, Tim Sablik
EDITORIAL ASSOCIATE: Lisa Kenney
CONTRIBUTORS: R. Andrew Bauer, Michael Stanley
DESIGN: Janin/Cliff Design, Inc.

Econ Focus is the economics magazine of the Federal Reserve Bank of Richmond. It covers economic issues affecting the Fifth Federal Reserve District and the nation and is published on a quarterly basis by the Bank's Research Department. The Fifth District consists of the District of Columbia, Maryland, North Carolina, South Carolina, Virginia, and most of West Virginia.

Published quarterly by the Federal Reserve Bank of Richmond, P.O. Box 27622, Richmond, VA 23261. www.richmondfed.org, www.twitter.com/RichFedResearch. Subscriptions and additional copies: available free of charge through our website at www.richmondfed.org/publications or by calling Research Publications at (800) 322-0565. Reprints: Text may be reprinted with the disclaimer in italics below. Permission from the editor is required before reprinting photos, charts, and tables. Credit Econ Focus and send the editor a copy of the publication in which the reprinted material appears.

The views expressed in Econ Focus are those of the contributors and not necessarily those of the Federal Reserve Bank of Richmond or the Federal Reserve System.

ISSN 2327-0241 (Print), ISSN 2327-025x (Online)

MESSAGE FROM THE INTERIM PRESIDENT

A Focus on Public Engagement

Since this magazine's inception, the presidents of the Richmond Fed have used this page to share their thoughts about current economic issues and to explain some of the inner workings of the Fed and monetary policy. For those of you who don't know me, I've worked in the Federal Reserve System for more than three decades and joined the Richmond Fed in 2013 as first vice president. I'm currently serving as interim president while our Board of Directors continues its search for the Bank's new leader.
Whoever the next president of the Richmond Fed is, I'm certain he or she will share my appreciation for the connection we have with the people who live and work in our district. Engagement with you is vital to our mission as a regional Reserve Bank for several reasons. First, the information and insights we gather from across our region provide important context for considering national monetary policy. They also help inform our own research and community development initiatives and ensure that we're focusing on relevant issues. Recently, for example, we've devoted a great deal of effort to studying workforce development and the factors that contribute to persistent poverty with the goal of sharing our findings with local leaders.

But it's not a one-way street. We also want to inform you about the economic issues that affect you at work and at home. Whether you're a policymaker, a business owner, or an interested citizen, you should have access to timely, unbiased information about regional and national economic trends. We also want to be transparent about the Fed's operations and policymaking, not only because transparency can make monetary policy more effective, but also because we are accountable to you, the public.

The Richmond Fed shares information in a variety of ways, from organizing or participating in conferences, to making presentations to local groups, to publishing original economic research. But this magazine is unique in its breadth and depth. In it, we have the opportunity to share some of the most innovative and interesting economic research currently underway, both within and outside of the Fed. We also are able to explore economic history, to ask and answer questions about monetary policy, and to dive deeply into issues of regional significance.

Perhaps my favorite aspect of the magazine is that it shows how economics applies to a diverse — and sometimes surprising — range of topics. In recent years, we've published articles on farmland preservation, cybersecurity, and mass migration, to name just a few. We've also discussed the Fifth District's labor markets following the Great Recession and the role of education in making the Carolinas less vulnerable to economic disruptions. With every issue, I learn something new about the economic forces that shape our communities and our economy, and I hope you do as well. EF

MARK L. MULLINIX
INTERIM PRESIDENT AND CHIEF OPERATING OFFICER
FEDERAL RESERVE BANK OF RICHMOND

UPFRONT

Regional News at a Glance

BY LISA KENNEY

MARYLAND — After a record-setting 25 million passengers passed through BWI Airport in 2016, the Board of Public Works in early February approved a $60 million construction contract for the expansion of the international terminal. The contract was awarded to Baltimore-based Whiting-Turner Contracting Co. The three-level, 70,000-square-foot extension will add six new gates, additional restrooms, and space for additional baggage operations, among other new features and modernizations. Total project costs could top $100 million, including the construction contract, and the new gates are expected to be open to travelers in summer 2018.

NORTH CAROLINA — In early February, North Carolina's first commercial-scale wind farm became fully operational, with 104 wind turbines generating enough energy to power the equivalent of 61,000 homes a year. The Amazon Wind Farm U.S.
East covers 22,000 acres near Elizabeth City, with the energy powering Amazon Web Services' data centers. The wind farm has a permanent crew of 17 technicians, and landowner rents and taxes will put more than $1 million into the local economy annually.

SOUTH CAROLINA — Spartanburg-based BMW Manufacturing announced in late January that it will give $300,000 to fund three years of a STEM education program in four Cherokee County middle schools. The program will begin in the fall and will be offered by Project Lead the Way, a nationwide nonprofit already operating in 164 other South Carolina schools. The University of South Carolina College of Engineering and Computing will provide training for Cherokee County teachers to implement the curriculum.

VIRGINIA — Nestlé USA, a subsidiary of the world's largest packaged-food company, will soon occupy the tallest building in Northern Virginia. The company will move its U.S. headquarters to Rosslyn this summer, bringing with it 750 jobs. Virginia lured Nestlé away from its current California location with $10 million in grant funds, in addition to $6 million in incentives from Arlington County. Nestlé will spend almost $40 million to take over 40 percent of a high-rise building that has sat empty since its construction was completed in 2013.

WASHINGTON, D.C. — After a more than 10-year struggle to leave RFK Stadium, D.C. United will soon have a new home in Buzzard Point. The D.C. Zoning Commission approved the Major League Soccer club's plans for a new soccer venue on Feb. 16; a groundbreaking ceremony was held on Feb. 27. The District will cover $150 million in land and infrastructure costs, while United will spend $200 million for the 20,000-seat stadium. Audi has purchased naming rights in what has been reported as a multiyear, multimillion-dollar deal. The first game at Audi Field is slated for June 2018.

WEST VIRGINIA — In February, EQT Corp. announced it won an auction for 53,400 acres in the Marcellus region. The acreage was previously held by Stone Energy, which filed for Chapter 11 bankruptcy protection in December 2016. The $527 million acquisition includes drilling rights on about 44,000 acres in the Utica Shale, as well as 174 Marcellus wells and 20 miles of gathering pipeline. The acreage spans Wetzel, Marshall, Tyler, and Marion counties and currently produces about 80 million cubic feet of natural gas per day.

FEDERAL RESERVE

The Fed's Tequila Crisis

A financial crisis in Mexico in the mid-1990s sparked a debate about the Fed's role in international markets and its independence

BY TIM SABLIK

On the day before New Year's Eve in 1994, the Federal Open Market Committee (FOMC) held an emergency conference call. The topic was the rapidly deteriorating financial situation in Mexico. The value of the Mexican peso had fallen sharply, and billions of dollars in foreign investment and credit had fled the country. It was unclear whether Mexico would be able to roll over or service its short-term debt that was rapidly coming due. There was a concern that if Mexico defaulted, it would spread panic throughout Latin America, as had happened during Mexico's last debt crisis in 1982. Some also feared spillover into the United States, given its new trade ties with Mexico. The North American Free Trade Agreement (NAFTA) had gone into effect in January. Still, Fed Chairman Alan Greenspan was initially somewhat optimistic.
Mexico had made meaningful economic reforms since the 1980s. "The weak underlying economic structure that prevailed in 1982 when the Mexican economy last fell into a swoon clearly is not there," Greenspan said on the Dec. 30 call. "We are obviously dealing with a highly psychological issue and a very significant amount of international financial volatility."

But as the new year unfolded, it quickly became apparent that the storm was not passing and that Mexico would not be able to weather it alone. The Fed was thrust into a debate over how the United States should respond, raising long-standing questions about its involvement in foreign operations and its independence from the Treasury.

[Photo: Treasury Secretary Robert Rubin, left, and Federal Reserve Board Chairman Alan Greenspan testify before the Senate Foreign Relations Committee in January 1995 in regards to the Mexican crisis.]

Setting the Stage

In many ways, the run-up to what would later be dubbed Mexico's "tequila crisis" looked similar to its last boom and bust. During the 1970s, oil price spikes stemming from the OPEC embargo boosted revenues from Mexico's state-owned oil industry. Near-zero real rates on short-term loans due to rising global inflation made it attractive for the Mexican government to use its new revenue to take on greater debt. For their part, creditors in the United States were eager to lend. Low real rates at home made the yields from investing in developing countries like Mexico attractive.

Things began to unravel quickly in the early 1980s. The Fed under Chairman Paul Volcker began aggressively raising its policy rate to combat inflation, which raised the cost of Mexico's debt as U.S. banks also increased rates on loans. Higher rates at home also made the relatively riskier investments in Latin America less attractive to American investors, and Mexico's access to funding dried up. By August 1982, Mexico's finance minister told officials in the United States and at the International Monetary Fund (IMF) that the country could no longer manage payments on its $80 billion debt. This prompted a crisis throughout Latin America, cutting off Mexico and other countries from international finance markets. The Fed organized bridge loans from central banks around the world that helped Mexico avoid a default, but they were not enough to reduce the principal on the debts. Mexico and other countries were forced to make deep cuts, leading to a "lost decade" of stagnant or negative economic growth.

The crisis prompted major changes in Mexico. President Miguel de la Madrid undertook widespread industry deregulation and privatization and substantially lowered tariffs to open the country to trade. His successor, Carlos Salinas de Gortari, continued this trend. His administration participated in the trade negotiations with the United States that culminated in NAFTA and worked with then-U.S. Treasury Secretary Nicholas Brady to renegotiate Mexico's outstanding debt in 1989-1990. This allowed Mexico to regain access to international credit markets at the same time that it opened its financial markets to foreign investment and began privatizing its banking sector. By 1992, most of Mexico's commercial banks had been privatized. This led to a large expansion in consumer credit. Once again, foreign credit flowed into the Mexican government and Mexican firms as well. Just as in the 1970s, U.S.
investors were searching for yield due to low interest rates at home following the 1990-1991 recession. Net foreign direct investment in Mexico doubled from roughly $2 billion to more than $4 billion a year.

This Time is Different?

In hindsight, there were signs of another crisis brewing. As it had in the early 1980s, Mexico was running a substantial current account deficit by the early 1990s. From 1988 to 1992, Mexico's current account deficit grew tenfold from $2.4 billion to $24.4 billion. Large current account deficits financed by borrowing often spelled trouble for developing nations; creditors might begin to doubt the country's ability to repay them and decide to pull funding out, sparking a rapid devaluation of the currency.

But there was a feeling in the air that Mexico was no longer a developing country. The financial officials in de la Madrid's and Salinas' administrations overseeing Mexico's market-oriented reforms had been educated in top U.S. economics programs and were well-respected by their counterparts in the United States and Europe. Mexico was welcomed into the Organisation for Economic Co-operation and Development (OECD) in May 1994, the first new member since New Zealand in 1973. Mexico, it seemed, had "arrived."

Thus, initial signs of unrest in 1994 did little to break investors' confidence at first. On Jan. 1, the same day that NAFTA went into effect, a rebel group seized control of several towns in the state of Chiapas in a standoff that lasted nearly two weeks. Violence and kidnappings intensified throughout the year. In March, the leading presidential candidate, Luis Donaldo Colosio-Murrieta (who was also a member of de la Madrid's and Salinas' party), was assassinated. And in September, the secretary-general of the ruling party was also killed.

Mexico had a history of financial turbulence during election years. The Bank of Mexico did not gain its independence until 1993 and came under political pressure to keep interest rates low during elections. This led to recurring bouts of inflation. It attempted to curtail this inflation by managing the peso's exchange rate, but it would inevitably be forced to let the currency devalue. In 1991, the Bank of Mexico established another managed exchange rate regime for the peso. Its value fluctuated freely but only within a narrow range of rates pegged to the dollar. The Bank of Mexico needed enough reserves on hand in order to credibly defend the peso's floor and ceiling.

As the political unrest in Mexico intensified in 1994, investors began to reconsider their bets on the country's future. At the same time, the Fed initiated the first of six interest rate hikes that year in February, marching the fed funds rate up from 3 percent to 5.5 percent. As in the 1980s, higher rates at home reduced the attractiveness of riskier investments in developing markets.

The real tipping point came in December 1994 after newly elected President Ernesto Zedillo Ponce de León took office. Zedillo replaced Finance Minister Pedro Aspe, who had served under Salinas and who was respected by foreign investors. More than $800 million in investments poured out of the country as investors feared that Zedillo's administration would renege on the reforms of his predecessors. And to bookend the year of turmoil as it began, a second rebel uprising in Chiapas occurred on Dec. 19. Under this mounting pressure, the Bank of Mexico could no longer credibly defend its peso peg. It attempted to devalue the peso slightly on Dec. 20. The move sparked additional panic from investors, and another $4.6 billion left the country in two days. The Bank of Mexico was forced to abandon the peg entirely, allowing the peso to devalue sharply from 3.5 pesos per dollar to 5.75 pesos per dollar.

This devaluation threatened to spark a major debt crisis. Throughout the year, the Mexican government had issued short-term debt that guaranteed repayment in dollars (bonds known as tesobonos). The sharp devaluation of the peso relative to the dollar increased the burden of these tesobonos. With markets panicking, it was unlikely that Mexico would be able to secure new loans to roll over its short-term debt before it came due.
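The arithmetic of that extra burden is straightforward. The stylized calculation below is our own illustration: the principal is a hypothetical figure, while the exchange rates are the ones cited above.

```python
# Stylized tesobono arithmetic: dollar-indexed debt grows heavier, in peso
# terms, when the peso falls. The $100 million principal is hypothetical;
# the exchange rates are the ones cited in the text.
principal_usd = 100_000_000

pesos_needed_before = principal_usd * 3.5    # at 3.5 pesos per dollar
pesos_needed_after = principal_usd * 5.75    # at 5.75 pesos per dollar

increase = pesos_needed_after / pesos_needed_before - 1
print(f"Peso cost of repaying the same dollar debt rose {increase:.0%}")  # ~64%
```

In other words, the government's dollar-indexed obligations became roughly two-thirds more expensive in local currency almost overnight, even though the dollar amounts owed never changed.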
The Fed Gets Involved

The Fed had been watching these events with growing concern. On March 22, 1994 — the day before Colosio's assassination, it would turn out — the FOMC held its second meeting of the year, and Mexico was high on the agenda. Fed policymakers discussed a proposal to temporarily increase the Fed's swap line with the Bank of Mexico from $700 million to $3 billion. Mexico had had a standing swap line with the Fed since 1967, but with NAFTA in place, Mexico had requested an increase in its line, an increase that it suggested would befit its now-closer ties to the United States.

The Fed's swap lines were originally established in 1962 during the Bretton Woods monetary system to supplement efforts by the Treasury's Exchange Stabilization Fund (ESF) to maintain the dollar's fixed value to gold. The Fed used swap lines to exchange dollars for foreign currency with a foreign central bank, agreeing to repurchase them at a future date at the same exchange rate. This protected foreign central banks from exchange rate risk, which would in theory reduce their desire to convert dollars to gold and help defend the dollar-gold peg. The swap lines also allowed foreign central banks to draw on them to supplement their dollar reserves during a crisis.
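A stylized example makes the mechanics concrete (the amounts are hypothetical, and the rates simply echo the peso figures cited earlier). Because the repurchase leg is contracted at the original exchange rate, neither central bank bears the risk of a move in the market rate in the interim:

```python
# Stylized central bank swap (hypothetical amounts; illustrative rates).
# The key feature: the unwind happens at the ORIGINAL rate, so neither
# party bears exchange rate risk even if the market moves sharply.
dollars_drawn = 1_000_000_000     # dollars the foreign central bank draws
rate_at_open = 3.5                # foreign currency units per dollar at opening
rate_at_unwind = 5.75             # market rate later, after a devaluation

currency_posted = dollars_drawn * rate_at_open    # the other leg of the swap

# Contracted unwind: the same currency buys back the same dollars, at 3.5.
dollars_returned = currency_posted / rate_at_open      # exactly 1.0 billion

# Without the fixed repurchase rate, the posted currency would now cover
# far fewer dollars -- precisely the risk the swap structure removes.
dollars_at_market = currency_posted / rate_at_unwind   # ~0.61 billion
print(f"${dollars_returned:,.0f} contracted vs ${dollars_at_market:,.0f} at market")
```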
The Bank of Mexico had done this repeatedly during previous crises, which gave some members of the FOMC pause. "I'm still not satisfied in my own mind as to what is or is not an appropriate use of swap lines per se," Cleveland Fed President Jerry Jordan said at the March 1994 meeting. "When I look at the utilization of our swap lines with Mexico in the past, it's a very troubling pattern." On the other hand, Jordan conceded that if the Fed wanted to continue using the swap lines, then Mexico should be given the same access as any other major trading partner of the United States.

"Mexico wasn't just another emerging market country that was having all these problems anymore, it was our partner in NAFTA," says Michael Bordo of Rutgers University. "Now it was of great strategic importance not to have a huge banking crisis in Mexico that would destabilize the hemisphere."

Following Jordan's objections, then-Richmond Fed President Al Broaddus voiced other concerns. He noted that the swap lines had been set up for a specific purpose that no longer existed. Using them to lend to countries in financial trouble, like Mexico, could be seen as an abuse of the Fed's independence. "It seems clear to me that any loan to Mexico in the current circumstances in essence would be a fiscal action of the U.S. government," Broaddus said at the meeting. "And fiscal actions — expenditures of the government — are supposed to be authorized by Congress."

Additionally, there was a growing consensus among economists in academia and at the Fed that these interventions into foreign exchange markets were ineffective. "I thought that the Fed's foreign exchange market operations undermined the credibility of monetary policy," says Broaddus. The Fed had fought hard throughout the 1980s to build its credibility for pursuing low and stable inflation at home. Intervening in currency markets to prop up another country's currency, particularly if such interventions didn't work, would weaken the credibility of the Fed to achieve its policy goals at home.

But others, such as New York Fed President William McDonough, argued that given the increasing interconnectedness of world markets, the Fed should take a wider view of monetary policy. "I think that one of the functions of the Federal Reserve is to seek monetary stability in a broader framework than just the American economy," he said. "[Mexico] is a country, being on our border, in which serious financial instability would have a very definite possibility of spreading across the border and creating problems in our own markets. So to me it is appropriate to have the swap line used in times of market instability."

The FOMC was pressed into making a decision when Colosio was killed, creating further unrest in financial markets. On a March 24 conference call, the committee voted 8-1 in favor of temporarily increasing the swap line to $3 billion. Broaddus was the lone dissenter, predicting that "ultimately this will do us more harm than good."

The Treasury's Plan

Broaddus' warning was prescient. As the year continued and the crisis in Mexico worsened, the Fed was drawn deeper into the U.S.-led response. The FOMC voted to temporarily increase its swap line to $4.5 billion on Dec. 30, 1994. Again, Broaddus alone dissented.

On Jan. 10, 1995, immediately after he took his oath in the Oval Office, Treasury Secretary Robert Rubin held a meeting with President Bill Clinton and other senior advisers, including the Treasury's top international official, Larry Summers. Rubin and Summers both predicted global catastrophe if Mexico defaulted. They proposed that the United States provide a rescue package of $25 billion — more than 10 times the assistance the U.S. government provided to Mexico in 1982. Ultimately, the proposal was raised to $40 billion to make sure it would calm markets.

Initially, congressional leaders pledged to support the plan. But in the following days, they wavered. Members in both parties questioned putting billions of taxpayer dollars at risk to bail out Mexico and the Wall Street bankers who had made investments there. Congressional opposition to President Clinton was high as well. The Republicans had just won control of the House for the first time in more than 40 years, and many of them were in no hurry to support an unprecedented foreign aid package orchestrated by the Clinton administration as their first action.

As it became clear that Congress would not vote for the plan, Rubin and Summers began looking for alternatives. The IMF was willing to help, but it did not have the resources to support the size of intervention that the Treasury thought necessary to calm markets. To supplement the IMF, they turned to the ESF. The ESF also did not have enough dollars to make the now $20 billion loan that Rubin and Summers envisioned, but it did have substantial foreign currency holdings.
They asked the Fed to engage in a swap with the Treasury, exchanging dollars for foreign currencies that the Treasury would agree to buy back at a later date. Initially, the discussion at the Fed focused on how the Treasury would protect it from any risk should Mexico default on the loan. But at the FOMC's Jan. 31-Feb. 1, 1995, meeting, others joined Broaddus in voicing larger concerns about the Fed's involvement. St. Louis Fed President Thomas Melzer did not agree that the crisis in Mexico represented a "systemic" threat to the United States, and he felt that the Fed was "setting a very bad precedent" by directly funding the Treasury's fiscal operation.

Board Governor Lawrence Lindsey noted that by funding the operation, the Fed was effectively helping the Treasury to subvert the will of Congress. "Our political risk in this is enormous," he said. "A bill that [Congress] opposed was defeated, and now…we are going to go around all the normal processes and pull money out of this little pot people never knew even existed and use that money. Well, (continued on page 20)

JARGON ALERT

Business Cycles

BY RENEE HALTOM

It doesn't take an economics Ph.D. to observe that economies experience times when things are generally good and times when things are generally not so good. Expansions in economic activity — the good times — are typically characterized by more jobs, rising incomes, and greater production across a number of industries. Recessions typically include weaker labor markets and lower readings of a wide array of economic indicators. Economists call these fluctuations "business cycles," and they appear to be inevitable; recessions have occurred every 58 months on average since the end of World War II.

The nonprofit National Bureau of Economic Research (NBER) in Cambridge, Mass., tracks the dates of business cycle peaks and troughs. And while not officially dated by the NBER, expansions are sometimes conceptually divided into periods of "recovery" — the time it takes for an economy to achieve the level of activity it had reached before a recession — and times of expansion beyond that level. Recoveries often, though not always, feature rapid growth as economies bounce back to health. Recessions and expansions alike can only be identified several months after they begin.

Why do business cycles occur? Economists think of the economy as always tending to gravitate toward a long-run trend rate of growth. Simultaneously, shocks are continually coming along that bump economic activity above or below that path for a time. Shocks occur all the time; how do they result in business cycles?

Two mainstream, but opposing, schools of thought dominated early research. Models in the Keynesian tradition held that business cycles arise from shocks to aggregate demand, such as a dive in consumer spending (perhaps spurred by shifts in confidence) or government budget tightening. A key element was that prices and wages do not adjust quickly, resulting in painful spells of unemployment and contractions in production. This implied that policymakers can potentially offset recessions with expansionary fiscal or monetary policy. An alternative framework, in which prices adjust flexibly to changing conditions, suggested that recessions are instead caused by fundamental changes in the economy's ability to produce, such as an oil supply shock or a particularly bad harvest. This "real business cycle" framework suggested that recessions, while painful for affected individuals, are necessary responses to shocks without an obvious role for policymakers to play.
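Both schools share the skeleton described above: a steady trend plus persistent shocks that push activity off the path for a time. A few lines of simulation, with made-up parameter values of our own, show how that skeleton alone produces recession-like stretches:

```python
# Our illustration of the trend-plus-shocks view of fluctuations:
# log output = steady trend + a persistent cyclical deviation ("gap").
# All parameter values are made up for the sketch.
import numpy as np

rng = np.random.default_rng(42)
quarters = 200
trend_growth = 0.005      # 0.5% trend growth per quarter
persistence = 0.9         # shocks fade only gradually, creating "cycles"

gap = 0.0
log_output = []
for t in range(quarters):
    gap = persistence * gap + rng.normal(0.0, 0.01)  # a new shock each quarter
    log_output.append(trend_growth * t + gap)

# Stretches of falling output look like recessions, even though the
# long-run trend never changes.
growth = np.diff(log_output)
print(f"{(growth < 0).mean():.0%} of quarters show falling output")
```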
Each approach had its drawbacks. Keynesian models had a limited role for disruptions to supply — which characterized the vast majority of business cycles throughout history. And the real business cycle prediction that monetary policy had no effect on the real economy seemed demonstrably untrue.

Complicating research is that recessions differ dramatically in severity and length, ranging from the three-year, seven-month recession at the start of the Great Depression to the six-month recession of 1980. During the Great Moderation of the mid-1980s through the 2000s, recessions were milder, shorter, and less frequent. Some observers even suggested we had reached the end of business cycles. That proved too optimistic.

The late 1990s saw a synthesis in research that considered different sources of shocks while acknowledging some degree of wage and price stickiness. And since then, research has focused on modeling the frictions in the economy that might make a particular shock more likely to propagate and amplify into an economy-wide downturn. Financial market frictions, in particular, have been a focus since the 2007-2008 financial crisis. If borrowers are collateral-constrained, for example, to what extent might a decline in housing wealth inhibit the ability of a large number of households to borrow and spend, sparking a deep recession?

Financial markets had not always featured prominently in business cycle theory, perhaps because many financial market disturbances — such as the 1987 stock market crash, which had a minimal effect on the economy, and the more recent dot-com bust, which was followed by one of the mildest recessions in modern history — seemed not to affect the overall economy much. The financial crisis differed from these market disturbances in that it took place largely in debt markets. That it was followed by the Great Recession has made many economists rethink the role that debt and deleveraging might play in business cycles.

The expansion following the Great Recession reached 90 months at the end of 2016, one of the longest on record. To some, this raised the question of when the United States might be "due" for another recession. But most economists think that's the wrong question: Though recessions seem to be inevitable, they clearly have no set regularity. In predicting recessions, a good rule of thumb is to worry less about average length of business cycles and more about whether the economy is overheating — and consider that shocks could throw off all predictions. EF

RESEARCH SPOTLIGHT

How Does Finance Fuel Growth?

BY RENEE HALTOM

Economists have long thought financial markets to be beneficial to economic growth. Financial markets allow savings to be put to use, facilitate investment by pooling risk, and help allocate capital to the most lucrative and efficient projects. All of the above foster competition and innovation, which contribute to rising living standards.

Measuring the relative importance of the channels through which finance boosts growth has been harder. One challenge for researchers is that measures of financial development — such as stock market activity or measures of the supplies of money and credit — are both affected by growth and affect growth in turn. That makes the causal effect of finance statistically harder to distinguish.

A recent paper by Clemson University economist Michal Jerzmanowski takes a stab at this question using a natural experiment — that is, when a measure of the topic one is interested in studying (in this case, financial market development) arises fortuitously in a way that overcomes statistical problems like simultaneous causation.

Michal Jerzmanowski. "Finance and Sources of Growth: Evidence from the U.S. States." Journal of Economic Growth, March 2017, vol. 22, no. 1, pp. 97-122.

As a proxy for financial development, Jerzmanowski looks at the dates of steps that U.S. states took toward deregulating their banking systems. This began in the mid-1970s, when states began allowing their institutions to branch within state lines, out-of-state banks to branch within their states, and bank holding companies to consolidate their subsidiaries into branches of a single bank. (Barriers to bank branching were later eliminated nationally with the Riegle-Neal Act of 1994.) States made these moves at different times, allowing researchers to look at whether the timing of these policy shifts was met with a boost in growth.

But is the timing of deregulation truly unrelated to growth and thus valid as the basis of a natural experiment? Previous research suggests so. Local lobbying power — historically in the form of agricultural interests that preferred banks to be small and local, as well as on behalf of smaller banks themselves — has been found to be a much stronger predictor of banking deregulation than overall economic conditions.

Jerzmanowski employs a new dataset to evaluate the specific channels through which finance affects growth, one based on output and stocks of physical and human capital across U.S. states. Physical capital estimates are from various sector censuses while human capital is calculated from state-level school-attainment data. The data span 48 states (Hawaii and Alaska are omitted) from 1970 through 2000.

The results confirm prior work indicating a positive and significant effect of financial deregulation, adding roughly 0.8 percentage points to growth in state output per worker each year. But how? Financial development is found to increase growth of total factor productivity (TFP), a measure of the state of technology, as well as other determinants of the productivity of labor and capital. This, in turn, suggests that "financial development fosters innovation and entry of new firms, which together boost the economy's productivity," Jerzmanowski notes. Deregulation also coincides with the accumulation of physical capital, consistent with the notion that access to credit facilitates investment. He finds no evidence that access to credit affects the rate of human capital development, perhaps due to the large role of the government and nonprofits, as opposed to banks, in funding private educational investment.
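An estimate of that kind comes out of the event-timing design described above: because states deregulated in different years, a regression with state and year fixed effects can compare growth before and after each state's change. The sketch below illustrates the design in Python on synthetic data; the variable names, parameter values, and code are our own illustration, not Jerzmanowski's.

```python
# Illustrative two-way fixed-effects sketch of the staggered-deregulation
# design, run on synthetic data. Nothing here is Jerzmanowski's code or
# data; a 0.8 p.p. effect is planted so the regression can recover it.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for state in range(48):
    dereg_year = int(rng.integers(1975, 1995))   # states move at different times
    for year in range(1970, 2001):
        dereg = int(year >= dereg_year)
        growth = 2.0 + 0.8 * dereg + rng.normal(0.0, 1.5)  # planted 0.8 p.p. effect
        rows.append({"state": state, "year": year, "dereg": dereg, "growth": growth})

panel = pd.DataFrame(rows)
# State effects absorb permanent cross-state differences; year effects
# absorb national shocks. What remains identifies the deregulation effect.
fit = smf.ols("growth ~ dereg + C(state) + C(year)", data=panel).fit()
print(round(fit.params["dereg"], 2))   # should land near 0.8
```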
Contrary to evidence across countries, Jerzmanowski finds little evidence that finance fuels "convergence," the rate at which poorer states catch up to richer states. (Capital accumulation does seem to accelerate in states that start with very low capital stocks, but the evidence for this is weak.) The author suggests this may be because rates of innovation and technology adoption do not stop once economies leave the bottom rung; development furthers these processes for rich economies as well. It could also be due to the fact that there's little convergence left to be had among U.S. states compared to the starker differences in income levels among countries. And finally, traditional commercial banking is not the only place where credit is offered; venture capital and financial markets also play a significant role in more developed economies like the United States.

Finally, Jerzmanowski addresses a common critique of studies on banking deregulation: that financial development boosts growth merely by growing the finance industry itself. He looks at the effect across three sectors: manufacturing, agriculture, and a collection of "other" sectors that includes financial-related sectors. The results show that finance actually has the largest effect on manufacturing, boosting growth by about 2 percentage points per year compared to about 1 percentage point for all sectors. Financial deregulation appears to boost manufacturing through improvements to TFP and, somewhat surprisingly, not the accumulation of physical capital (as elsewhere, finance had no effect on human capital). This is consistent with the long-held notion that financial development and access to credit speed entry, innovation, and all-important creative destruction. EF

The Missing Boomerang Buyers

Does it matter whether those who lost their homes during the crisis come back to the housing market?

By Jessie Romero

In July 2006, the Mortgage Insurance Companies of America, a now-defunct trade group, sent a letter to the Federal Reserve and other bank regulators. "[We] are deeply concerned about the potential contagion effect from poorly underwritten or unsuitable mortgages and home-equity loans," wrote Suzanne Hutchinson, the group's executive vice president. "[T]he most recent market trends show alarming signs of ongoing undue risk-taking that puts both lenders and consumers at risk."

The concerns were well-founded. Around the same time, the seemingly unlimited increase in house prices turned out to have a limit after all. As prices declined and the U.S. economy worsened, a wave of defaults that originated in the subprime mortgage sector eventually spread through the entire housing market. Millions of homes would be lost to foreclosure over the next decade.

A foreclosure is a serious black mark on a consumer's credit report, making mortgages and other types of credit more expensive to obtain. But most negative credit information is erased after seven years, so, in theory, homeowners who experienced a foreclosure during the first few years of the crisis should have the damage to their credit behind them now. As those foreclosures began to clear, many observers speculated that a slew of "boomerang buyers" was poised to return to the housing market.

Those buyers have been slow to materialize, which might seem surprising in light of rising home prices and reports of bidding wars in many areas of the country. Higher prices, however, appear to reflect a relatively low supply of housing rather than a surge in demand. To the extent the housing market contributes to GDP, the absence of boomerang buyers could have implications for near-term economic growth in the United States. So what's hindering their return?

Mortgage Mania

The kinds of loans the potential boomerang buyers took out the first time around might influence their likelihood to return to the housing market.
In general, mortgages are classified according to features of the borrower or features of the loan. With respect to borrowers, loans are either prime or nonprime; the latter category includes both subprime loans and "alt-A" loans. While there is no legal definition of prime or subprime, most lenders use a FICO credit score in the mid-600s as the cutoff. (FICO scores range from 300 to 850.) Alt-A loans are made to borrowers who have higher-than-subprime credit scores but are unable to obtain a prime loan for other reasons, such as a high debt-to-income ratio or an inability (or unwillingness) to document their income.

With respect to loan features, loans are either traditional or nontraditional. In general, a traditional mortgage is any product that does not allow the borrower to defer repaying interest or principal. Nontraditional mortgages include products with negative amortization, interest-only payment options, balloon payments, or little to no down payment, among other characteristics. While not all nontraditional mortgages are nonprime and vice versa, there is significant overlap between the two categories.
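As a rough summary of this two-way taxonomy, the toy function below assigns both labels. It is our own sketch: the 650 cutoff merely stands in for the "mid-600s" rule of thumb, there is no legal definition, and a credit score alone cannot separate alt-A borrowers from prime ones.

```python
# Toy sketch of the two-way mortgage taxonomy described above (our own
# illustration). The 650 cutoff stands in for the "mid-600s" rule of
# thumb; FICO alone cannot distinguish alt-A borrowers from prime ones.
def classify(fico, defers_interest_or_principal):
    if not 300 <= fico <= 850:
        raise ValueError("FICO scores range from 300 to 850")
    borrower = "prime" if fico >= 650 else "nonprime (subprime or alt-A)"
    loan = "nontraditional" if defers_interest_or_principal else "traditional"
    return borrower, loan

# A subprime borrower in an interest-only product: the overlap between
# the two categories that the article notes was common during the boom.
print(classify(620, defers_interest_or_principal=True))
```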
Mortgage lending increased dramatically beginning around 2000; outstanding residential mortgage debt grew from 48 percent of GDP to 75 percent by the end of 2006. As a share of personal income, mortgage debt grew from 56 percent to 91 percent over the same period. Prior to 2000, it took more than two decades for the shares to increase by a similar proportion. (See chart.)

[Chart: U.S. Outstanding Mortgage Debt — mortgage debt outstanding on one-to-four family residences as a share of GDP and as a share of personal income, 1960-2016; shaded areas denote recessions. Source: Bureau of Economic Analysis and Board of Governors/Federal Reserve Economic Data (FRED)]

At the same time debt was increasing, there was a marked shift in the composition of loans. In the late 1990s, between 10 percent and 15 percent of mortgage originations, including both purchase and refinance loans, were nonprime; the share grew to nearly 40 percent by 2006. (Subprime loans made up about three-quarters of nonprime loans in the early 2000s, and the share fell to roughly 60 percent after 2003.) Between 2004 and 2007, the share of nontraditional mortgages nearly tripled, from 12.5 percent of originations to 35.1 percent, according to the industry publication Inside Mortgage Finance. These loans were taken out by borrowers from all demographic groups, but a number of researchers have documented that black and Hispanic borrowers were more likely to receive higher-cost or nontraditional loans, even after controlling for characteristics such as income and credit score.

Anecdotally, much of the rise in mortgage lending was driven by people buying second homes for vacation or retirement or by speculators who intended to renovate and quickly "flip" the homes. But the role of investors is uncertain, in part because they are difficult to identify accurately in the data. Investors might have an incentive to lie about their occupancy status on their mortgage applications in order to receive more favorable terms, and research suggests such misrepresentation was widespread during the housing boom. Studies that rely on self-reported occupancy status thus are likely to understate the number of investors.

In a 2011 paper, Andrew Haughwout, Donghoon Lee, Joseph Tracy, and Wilbert van der Klaauw of the New York Fed identified investors based on the number of first-lien mortgages an individual held. The authors found that in 2000, investors accounted for about 20 percent of the dollar value of purchase loans. By 2006, investors accounted for 35 percent of the value and as much as 45 percent in Arizona, California, Florida, and Nevada (widely referred to as the "sand states"). The authors also found that investors were more likely to take out nonprime and nontraditional mortgages in order to increase their leverage and potentially amplify their returns.

Who Lost Their Homes

The long spiral of mortgage defaults and price declines began in 2006. By early 2012, house prices nationally had fallen nearly 30 percent and as much as 60 percent in the sand states. Between 2007 and 2014, more than 12.8 million homes entered the foreclosure process — roughly 29 percent of all homes with a mortgage. At the peak of foreclosures in 2009, more than 650,000 homes, 1.5 percent of those with a mortgage, entered foreclosure in a single quarter. (See chart.)

[Chart: U.S. Foreclosure Rate — share of existing mortgages on one-to-four family residences entering foreclosure, 2000-2016; shaded areas denote recessions. Source: Mortgage Bankers Association/Haver Analytics]

Because many foreclosure filings during the crisis took months or even years to process, it's difficult to calculate the share that actually resulted in a completed foreclosure (that is, a sale at auction or repossession by the lender). But between 2007 and 2016, there were nearly 7.8 million completed foreclosures, according to data from CoreLogic, a housing analysis group. Other outcomes might have been short sales, deeds in lieu of foreclosure, or loan modifications.

Initially, defaults were concentrated in the nonprime and nontraditional market segments. But as more homeowners became underwater on their mortgages and job losses increased, prime borrowers were affected as well. "The first wave of foreclosures was subprime mortgages blowing up," says Nela Richardson, chief economist for the national real estate brokerage Redfin and a former researcher at Harvard University's Joint Center for Housing Studies. "The second wave was the economic downturn. Borrowers were upside down on their loans and then they lost their jobs — and maybe their health insurance and their kids' college funds. It was a double or triple whammy."

All else equal, subprime borrowers were more than twice as likely to lose their homes to foreclosure or short sale, according to a 2015 paper by Fernando Ferreira and Joseph Gyourko of the University of Pennsylvania. But the authors also found that about twice as many prime borrowers as subprime borrowers wound up experiencing a foreclosure or short sale. That's because prime borrowers still made up the majority of the housing market despite the rise of subprime lending.

Black and Hispanic borrowers were more likely to enter foreclosure than white borrowers. Among borrowers who purchased homes between 2005 and 2008, nearly 8 percent of black and Hispanic borrowers had lost their homes to foreclosure by the end of 2009 versus 4.5 percent of white borrowers, according to a 2010 study by the Center for Responsible Lending, a consumer advocacy group. Blacks and Hispanics also were more likely to be seriously delinquent on their mortgages.
The disparities became smaller, but did not disappear, after the researchers controlled for income levels. In a 2016 article, Ferreira, Patrick Bayer of Duke University, and Stephen Ross of the University of Connecticut also found significant racial and ethnic differences in mortgage outcomes, even between borrowers with similar credit scores and loan characteristics. The source of the disparity could be minorities' greater vulnerability to unemployment during economic downturns combined with the timing of their entry into the housing market.

Intuitively, investors should be more likely to default on their mortgages than owner-occupants, since "there's very little reason not to default on an investment property loan if it's offering a negative return," says Haughwout. "It's one thing to move your family if you're underwater — that's very costly. But it's another thing entirely to let go of a property that's not a good investment."

The evidence on investors' propensity to default during the crisis is mixed, however. On the one hand, Ferreira and Gyourko found that investors were about as likely to experience a foreclosure or short sale as owner-occupants with similar loan types and amounts of leverage. On the other hand, Haughwout and his fellow New York Fed economists found that investors' delinquency rates in the nonprime sector increased more rapidly than owner-occupants' rates, and that by 2008 investors' share of seriously delinquent nonprime mortgage debt exceeded their share of overall mortgage debt. Consistent with Ferreira and Gyourko, they also found that some of the difference between investor and non-investor delinquency rates was related to the fact that investors were more likely to take out loans with a greater initial risk of default, for example, because they were in the sand states or had higher leverage. But about half of the difference remained unexplained, which suggests investors might indeed have taken a more pragmatic approach to default than other homeowners with similar characteristics.

Bouncing Back?

Homeowners who enter foreclosure take a serious hit to their credit. According to Fair Isaac Corp., the FICO score's developers, a borrower with a credit score of 780 usually can expect a drop of between 140 and 160 points; one with a score of 680 can lose 85 to 105 points, assuming there are no other delinquencies. (Short sales, deed surrenders in lieu of foreclosure, and most loan modifications have a smaller but still substantial negative effect.) During the foreclosure crisis, however, borrowers who lost their homes experienced even larger declines — 175 points on average for prime borrowers and 140 points on average for subprime borrowers, according to a 2016 Chicago Fed Letter by Sharada Dharmasankar of the consulting group Willis Towers Watson and Bhashkar Mazumder of the Chicago Fed.

By law, many negative credit events, including foreclosure, are removed from individuals' credit records after seven years. In principle, then, borrowers who experienced a foreclosure in 2007 should have seen their credit scores recover in 2014, with successive waves of borrowers recovering in the years following. In a 2015 report, the foreclosure analytics company RealtyTrac estimated that 7.3 million people would have their credit sufficiently repaired to buy homes over the next eight years.
Other trade groups and analysts also calculated that millions of former homeowners would have the credit to become homeowners again in the coming years. That prompted speculation that a wave of "boomerang buyers" was poised to re-enter — and reignite — the housing market. In the same report, RealtyTrac called these former homeowners "a massive wave of potential pent-up demand."

But history says not all those buyers are likely to come back. According to a 2016 study by CoreLogic, fewer than half of those who lost a home in 2000 or later have purchased new homes, even among those 16 years past a foreclosure. The boomerang rate has been especially low so far for people who lost their homes during the crisis. A little over 30 percent of borrowers who lost their homes in 2000 had purchased another home seven years after the event. But only about 15 percent to 20 percent of borrowers who lost a home between 2006 and 2008 had returned to the housing market after seven years.

Dharmasankar and Mazumder found similar results. Within seven years of a foreclosure that occurred between 2000 and 2006, about 40 percent of prime borrowers and 30 percent of subprime borrowers had purchased another home. But among borrowers who experienced a foreclosure between 2007 and 2010, only 25 percent of prime borrowers and 17 percent of subprime borrowers were homeowners seven years later.

Once Bitten, Twice Shy

A variety of factors could explain why homeowners (both owner-occupants and investors) who experienced foreclosure during the most recent crisis have been slow to return to the housing market. First, foreclosure generally is not an isolated incident; consumers tend to have higher delinquency rates on other forms of credit after a foreclosure than they did before the foreclosure. "It's not very common that all your credit is fine except for the foreclosure," says Haughwout. "And once you've experienced a foreclosure, the interest rate increases on your other debt, and it becomes harder to keep up with. The foreclosure has a deleterious effect for years."

The foreclosure crisis and Great Recession might have been particularly damaging financially. At least through 2011, borrowers who lost their homes between 2007 and 2009 had higher delinquency rates on credit cards and auto loans than borrowers who lost their homes in the early 2000s, a similar length of time after the foreclosure, according to a 2010 paper by Cheryl Cooper and Kenneth Brevoort of the Consumer Financial Protection Bureau and subsequent research by Brevoort. Dharmasankar and Mazumder found that the credit scores of people who went through a foreclosure between 2007 and 2010 have been slower to recover than those who had a foreclosure between 2000 and 2006. Prime borrowers have been especially slow to regain their former scores since they have a higher score to return to.

As of 2016, previously foreclosed homeowners who had not returned to the housing market had significantly higher delinquency rates and lower credit scores than those who had returned, according to research by Michele Raneri of Experian. They also had higher delinquency rates than the U.S. average, which suggests continuing credit problems could be a hindrance for some former homeowners.

Tighter lending standards could also be preventing some people from re-entering the housing market. To the extent some borrowers were able to obtain larger or riskier mortgages during the boom than they would have at other times, today's tighter standards may reflect a prudent amount of risk-taking by lenders.
Still, there might be some creditworthy borrowers who would like to purchase a home but cannot. Although mortgages currently are easier to obtain than they were in the years immediately following the crisis, when lenders drastically curtailed lending, mortgage credit during the first quarter of 2017 was only about one-half as available as it was in 2004, according to the Mortgage Bankers Association's Mortgage Credit Availability Index. In addition, many potential homebuyers perceive that they would be unable to get a loan. According to the New York Fed's 2016 Survey of Consumer Expectations Housing Survey, nearly 70 percent of current renters thought it would be very difficult or somewhat difficult for them to obtain a mortgage. "The market the boomerang buyers bought into the first time around doesn't exist anymore," says Richardson.

Some borrowers who could re-enter the housing market might not want to. Particularly for owner-occupants, research points to deep emotional scars from experiencing a foreclosure, which could affect one's willingness to purchase a home again. Also, many of the people who lost their homes during the crisis were first-time homebuyers, and there is some evidence the crisis altered their views about the prudence and benefits of homeownership, at least in the medium term. As of December 2014, the credit bureau TransUnion estimated that about 1.26 million previously foreclosed consumers had recovered enough financially to meet strict underwriting standards. Of them, only 42 percent had taken out a new mortgage.

Investors might be less sanguine about real estate as an investment strategy. Raneri also found that between 40 percent and 45 percent of investors (including second-home owners) who went through foreclosure between 2001 and 2006 returned to the market. The share for those who experienced a foreclosure between 2007 and 2010 was between 16 percent and 19 percent. (The lower share could reflect in part that 2010 foreclosures had not been erased from credit reports.) The number of people flipping houses is also significantly lower than it was during the boom. In 2005, more than 275,000 investors flipped 340,000 homes, or 8.2 percent of sales, according to ATTOM Data Solutions (which operates RealtyTrac). In 2016, 125,000 investors flipped fewer than 200,000 homes, or 5.7 percent of sales. That's a slight increase from 2015, but overall the number of homes flipped has been relatively flat since 2010.

By some measures, the housing market looks quite strong. In many areas of the country, house prices have rebounded to their 2006 peak and the length of time homes remain on the market has declined. But this in part is the result of low inventory; new housing permits and new home construction starts have increased since 2010 but are low by historical standards. This relative lack of supply could be preventing some former homeowners from boomeranging. "We're in a seller's market," says Richardson. "And there are a lot of cash buyers who are able to make sizeable down payments. That curtails the ability of boomerang buyers to make a successful bid in this market."

Does Homeownership Matter?

The U.S. homeownership rate, defined as the percentage of households who own the home they live in, was 63.6 percent in the first quarter of 2017, compared to the peak of 69.2 percent in 2004.
Since the Census Bureau began keeping track, the lowest recorded value was 62.9 percent in 1965.

At first glance, it might seem that the increase in the homeownership rate during the early 2000s was driven by the expansion of mortgage credit to certain categories of borrowers, and that the decline is the result of these borrowers losing their homes. But the increase in nontraditional and nonprime loans does not seem to have had much effect on the homeownership rate. In part, that's because the increase might have helped people obtain bigger mortgages than they otherwise would have rather than pushing them into homeownership to begin with. And to the extent the expansion of credit did increase the number of homeowners, it still might not have had a large effect since the owners of rental homes or other investment properties aren't counted in the homeownership rate. "After 2004, many new purchases were by speculative investors," says Haughwout. "There was a lot of buying and selling that didn't have anything to do with the homeownership rate."

In large part, the rise in the homeownership rate through 2004 reflected the aging of the U.S. population, since older adults are more likely to own their homes, according to research by Haughwout and fellow New York Fed economists Richard Peach and Joseph Tracy. And much of the decline since then is the result of a secular decline in homeownership for young and middle-aged adults, particularly those aged 25-54, a trend that dates back to the 1980s. (The remainder does seem to be due to people who left the market via foreclosure.) Multiple factors could explain this decline in homeownership, such as declining real incomes for some groups or changes in preferences. Whatever the cause, it suggests that even if many more buyers boomeranged, the homeownership rate would be unlikely to return to its pre-crisis peak.

Does that matter? For someone trying to buy or sell a home, the answer surely is "yes." But for society as a whole, the answer is less clear. Some studies point to large social externalities; homeowners may have stronger incentives to maintain their homes and neighborhoods and invest in their community's civic and social lives. But it's difficult to establish a causal link between homeownership and community engagement. It could be that people who are more likely to plant attractive landscaping or vote for school board members are also more likely to buy homes rather than homeownership inducing those actions. And in some ways, homeownership might actually have negative effects, such as making labor markets less flexible if it is more difficult for people to move for new employment opportunities.

The housing market is a vital part of the U.S. economy. Increases in residential investment, including new homes and remodeling, generate a lot of jobs — not only in construction, but also in real estate, finance, and transportation, to name just a few industries. Moreover, rising home prices create a wealth effect that enables many households to fund consumption. Some economists and policymakers thus pointed to the sluggishness of the housing market after the recession as a factor contributing to slower-than-desired economic growth. If potential boomerang buyers remain on the sidelines and current trends in homeownership continue, it's unlikely that housing activity will return to the levels of the boom years — or that it will make as large a contribution to GDP growth.
But to the extent the economy is in the process of adjusting to a sustainable level of housing activity, that may be an unavoidable cost. EF

Readings

Belsky, Eric S., and Nela Richardson. "Understanding the Boom and Bust in Nonprime Mortgage Lending." Joint Center for Housing Studies of Harvard University, September 2010.

Brevoort, Kenneth P., and Cheryl R. Cooper. "Foreclosure's Wake: The Credit Experiences of Individuals Following Foreclosure." Federal Reserve Board of Governors Finance and Economics Discussion Series, Working Paper No. 2010-59, Nov. 18, 2010.

Dharmasankar, Sharada, and Bhashkar Mazumder. "Have Borrowers Recovered from Foreclosures during the Great Recession?" Federal Reserve Bank of Chicago Fed Letter No. 370, 2016.

Ferreira, Fernando, and Joseph Gyourko. "A New Look at the U.S. Foreclosure Crisis: Panel Data Evidence of Prime and Subprime Borrowers from 1997 to 2012." National Bureau of Economic Research Working Paper No. 21261, June 2015.

Haughwout, Andrew, Richard Peach, and Joseph Tracy. "A Close Look at the Decline of Homeownership." Federal Reserve Bank of New York Liberty Street Economics blog, Feb. 17, 2017.

Haughwout, Andrew, Donghoon Lee, Joseph Tracy, and Wilbert van der Klaauw. "Real Estate Investors, the Leverage Cycle, and the Housing Market Crisis." Federal Reserve Bank of New York Staff Report No. 514, September 2011.

Robots for the Long Haul
There are 1.8 million heavy truck and tractor-trailer drivers in the United States. Will self-driving trucks soon mean the end of many of those jobs?
BY DAVID A. PRICE

In October 2016, a tractor-trailer loaded with about 52,000 cans of beer traveled 120 miles on I-25 from Fort Collins, Colo., to Colorado Springs. That, in itself, was unremarkable. What made the trip historic is that there was no one in the driver's seat: A driver sat in the back of the cab while an automated system did the work. An on-board computer collected information on the truck's surroundings from video cameras, laser-based sensors, and radar, then used it to make decisions about steering, acceleration, and braking.

The beverage run was a demonstration of a self-driving truck system under development by San Francisco, Calif.-based Otto, founded in January 2016 by a team that included engineers involved with Google's self-driving car efforts and with Google Maps. The firm was acquired at the advanced age of eight months by Uber for a reported $680 million. Otto is one of a number of companies, both startups and established manufacturers, working on self-driving trucks; the projects are generally focused on automating long hauls on highways, with human drivers — at least for some time to come — riding along to take the wheel on local streets.

The promise: safer highways, as the systems can't get drowsy and, in theory, won't make mistakes; less fuel consumption, since the autonomous trucks can be programmed to keep to efficient speeds; and, depending on whom you talk to, perhaps lower labor costs — much lower. With the software in control from highway on-ramp to off-ramp, companies say, drivers will be able to take their required rest breaks in the sleeper berths of the cabs, allowing for close to 24/7 utilization of the trucks and fewer truck drivers. That, in turn, means cheaper transportation. But it's a development that may repay close attention by policymakers and labor-market economists.
Long-haul truck driving is among a dwindling number of jobs that pay a middle-class wage without requiring a college degree. According to the Bureau of Labor Statistics (BLS), some 1.8 million people, most of them driving long hauls, earn a living as drivers of heavy trucks and tractor-trailers, with a median income of more than $41,000.

It sounds like a lot of jobs, and it is. A 2015 study by researchers at the Philadelphia Fed, the Cleveland Fed, and the Atlanta Fed ranked the U.S. economy's "opportunity occupations," meaning the occupations paying at least the national median wage (adjusted for local price differences) and available to workers without a bachelor's degree. Looking at the nation's 100 largest metropolitan statistical areas, they found that 27.4 percent of employment was in opportunity occupations in 2014 — and in terms of the number of jobs in opportunity occupations, heavy and tractor-trailer truck driving ranked fifth. (Registered nurse jobs ranked first.) Overall, heavy and tractor-trailer truck driving made up one in eight jobs in opportunity occupations.

Should we be concerned?

An Industry Rolling Out
The impetus for the development of self-driving vehicles, both cars and trucks, came from the U.S. military after the turn of the millennium. The Defense Advanced Research Projects Agency, or DARPA, sponsored a "grand challenge" in 2004, offering a $1 million prize for the autonomous vehicle that was first to complete a course across 142 miles of desert from Barstow, Calif., to Primm, Nev. (In past decades, DARPA had provided seed money for the development of other technologies with military potential, including 3D computer graphics and a precursor of the Internet.) Fifteen vehicles started, but none finished; the most successful vehicle made it only 7.5 miles. Progress came quickly, however: Another challenge the following year saw five vehicles out of 195 entrants finish a 132-mile course in Nevada. And in 2007, a third challenge set in a simulated environment of urban traffic yielded six finishers out of 11 contestants.

A decade later, while self-driving cars may get more of the headlines, self-driving trucks are the sought-after grail of development teams at around a half-dozen companies. In addition to Otto, the company behind the Colorado demonstration, Daimler's Freightliner division is developing and testing a self-driving semi truck, named Inspiration, that is licensed to operate on the roads of Nevada. PACCAR, maker of Kenworth, Peterbilt, and other truck lines, has announced a partnership with chip maker NVIDIA to develop self-driving trucks and has reported testing its first on a closed course. Two other Bay Area startups, Embark and Starsky Robotics, are road-testing self-driving semis. The latter firm plans to station truck drivers in a central location to supervise 10 to 30 trucks each and have them drive the trucks during the local portions of trips by remote control. And large self-driving trucks from Caterpillar and Komatsu are being used at mine sites to haul mining loads. The latest generation of the Komatsu machine is headless — that is, it doesn't have a cab for a driver. Volvo Trucks is testing a self-driving truck in an underground mine in Sweden, where it operates in tunnels more than 4,000 feet below the surface.

Apart from the ones toiling at the mines, the self-driving trucks under development are designed to run autonomously on the highway portion of a long haul because highway driving is easier to automate.

"Highway driving is a lot simpler than driving around San Francisco," says Stefan Seltz-Axmacher, CEO and co-founder of Starsky Robotics. "Humans aren't great at doing repetitive tasks for long periods of time. Robots are really good at sustained boring tasks."

[Photo: A semi truck outfitted for self-driving by Starsky Robotics operates in autonomous mode during a highway trip in February 2017.]

Attractions of Self-Driving Trucks
Behind these efforts is a bet that self-driving trucks will bring major cost savings. One category of potential savings is avoiding accidents; in 2015 alone, according to the National Highway Traffic Safety Administration, accidents involving large trucks killed 4,067 people and injured an estimated 116,000. Of the fatal crashes involving large trucks, 27 percent occurred on an interstate, where self-driving trucks could be expected to make a difference. Beyond the costs associated with lost lives and injuries, trucking companies and their insurers bear costs from vehicle damage, cargo delays, and more.

Still, it's not yet clear how much better self-driving trucks will do than their human counterparts: A Federal Motor Carrier Safety Administration study in 2008 found that in crashes between a truck and a car, the car or its driver was the cause 56 percent of the time, not the truck driver. And in 27 percent of car-truck accidents — whether attributed to the car or the truck — there were brake problems in the truck, a maintenance issue rather than a driver issue. Regardless of the exact amount of improvement, though, developers of the trucks see accident prevention as a major selling point.

Another is fuel savings. The American Transportation Research Institute found in a 2016 report that fuel costs in recent years have made up 30 percent to 40 percent of a motor carrier's operational costs on average, the largest line item — higher, even, than the loan or lease payments on the truck itself. Thus, even a modest 10 percent increase in fuel economy from more energy-efficient driving would translate into a significant payoff.
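To put a rough number on that payoff, consider a back-of-the-envelope calculation. The sketch below assumes a 35 percent fuel share, the midpoint of the range cited above; the figure is illustrative, not one taken from the ATRI report:

```python
# Back-of-the-envelope version of the fuel-savings arithmetic above. The
# 35 percent fuel share is illustrative (the midpoint of the 30-40 percent
# range cited), not a figure from the ATRI report.

fuel_share = 0.35          # fuel as a share of total operating costs
economy_gain = 0.10        # a modest 10 percent improvement in fuel economy

# Better fuel economy cuts fuel spending roughly proportionally, so total
# operating costs fall by the fuel share times the efficiency gain:
cost_reduction = fuel_share * economy_gain
print(f"{cost_reduction:.1%}")     # 3.5% -- off every mile driven, fleet-wide
```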
Then there are the drivers. Few believe that long-haul truck drivers will be replaced entirely; for the time being, and perhaps for a long time, they'll be needed to handle local roads and to deal with things like weigh stations, refueling, breakdowns, tire blowouts, and loading and unloading. But if the developers of self-driving trucks can make the trucks autonomous on the highways, and overcome the regulatory obstacles, the savings in salaries, benefits, and recruiting costs could be high. Morgan Stanley research estimated in 2013 that adoption of self-driving trucks could yield a two-thirds reduction in the number of drivers. Even if the shift leads companies to demand more technical skills in the remaining driver positions, leading to a 50 percent wage increase, Morgan Stanley estimated that the net result is still an elimination of around half of total labor costs, for a savings of roughly $70 billion industrywide. The assumption of a wage increase, moreover, may be generous, since the reduction in drivers' actual driving time during a trip could push wages down.
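The arithmetic behind that estimate can be checked directly. The sketch below uses an illustrative normalization (three drivers earning one unit each), not Morgan Stanley's actual model:

```python
# Checking the labor-cost arithmetic: a two-thirds reduction in drivers plus
# a 50 percent raise for those remaining nets out to about half of the
# original labor bill. (The normalization is illustrative, not Morgan Stanley's.)

drivers = 3.0                      # normalize: three drivers earning 1 unit each
wage = 1.0
cost_before = drivers * wage       # 3.0 units of labor cost

drivers_after = drivers / 3        # a two-thirds reduction leaves one driver
wage_after = wage * 1.5            # a 50 percent wage increase
cost_after = drivers_after * wage_after   # 1.5 units

print(cost_after / cost_before)    # 0.5 -> about half of total labor costs
```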
The American Trucking Associations, a trade association of trucking companies and other truck fleet operators, has expressed skepticism about the technology's potential to displace drivers. "It's important technology," says Bob Costello, the organization's chief economist, "but we just don't think it gets rid of the driver anytime soon or even allows the driver to go back and sleep." In Costello's view, self-driving trucks will make truckers' jobs a bit easier rather than replace them. "Autonomous technology should make the highways safer for all vehicles," he says. "But aircraft have been autonomous in many ways for a long time, and you still have pilots in the cockpit. We think that is very much true for the foreseeable future for trucking."

But some proponents predict that automation will eliminate the need for truckers in the cab during the highway portions of trips sooner rather than later. "I think it's going to happen very rapidly," says Seltz-Axmacher. "The sight of a truck driving autonomously on an interstate will not be extraordinary in five years. It will be within that."

The Demise of White Line Fever?
The onset of self-driving trucks, if they live up to the labor-saving claims, presents a new instance of a question that has periodically confronted economists and policymakers for centuries: What, if anything, should the government do when equipment is displacing — or seems likely to displace — large numbers of workers? For the most part, the consensus answer historically has been: Do nothing to stand in the way of adoption of new labor-saving technology, because the displaced labor will find its way to more productive uses.

Yet some historical concerns about automation seem to have been partly vindicated. Tim Taylor, managing editor of the Journal of Economic Perspectives, has noted that while forecasts of rising unemployment have not come true, forecasts of increasing income inequality have to an extent. Since the 1980s, the U.S. economy has seen a pattern in which high-education, high-wage jobs and low-wage, low-education jobs have grown, while the share of employment in the middle — the routine jobs that have been the most susceptible to automation, such as production workers and clerical workers — has gone down, a trend known as "job polarization" or "hollowing out."

And in the short term, such changes mean painful adjustments for the displaced jobholders, notes Harvard University labor economist Richard Freeman. "If you've been doing truck driving for 10 or 15 years, it's going to be harder for you to make investments in new kinds of skills," he says. "Traditionally, when people get laid off — the evidence is mainly for factory-type people — they take roughly a 20 percent cut in wages to find another job, they're not getting as good a job, and it can take six months to a year. So there is a big cost." Another factor, Freeman says, is that self-driving trucks are just a part of a much larger movement toward robotics and other automation. "One of the things about the current technology is that the other jobs that you might have said people would go to are also being impacted."

Economists and others have put forward a number of proposals to reduce the effects of job loss from technological change, offshoring, and other structural forces.
Beyond state unemployment insurance programs, these have included retraining and a universal basic income (that is, a guaranteed income paid by the government to all citizens regardless of need). In a paper published by the Brookings Institution in 2005, three researchers who were then with Brookings — Lael Brainard (now on the Fed's Board of Governors), Robert Litan, and Nicholas Warren — argued for a federal wage insurance program for all long-tenured workers who are permanently displaced; the workers would receive a wage subsidy for two years after landing a new job.

But there are optimistic scenarios for truck drivers. One is that truck driving jobs might follow the path of bank teller jobs after the introduction of automated teller machines (ATMs). During the period from 1980 to 2010, the number of bank tellers in the United States actually increased slightly even as ATMs proliferated, according to James Bessen of the Boston University School of Law. ATMs reduced the cost of bank branches, but banks did not simply pocket those savings. "Banks responded by opening more branches to compete for greater market share," Bessen wrote in a 2015 article in Finance & Development. "Bank branches in urban areas increased 43 percent. Fewer tellers were required for each branch, but more branches meant that teller jobs did not disappear."

Could the same happen in trucking? Michael Watson, a supply chain consultant and co-author of the 2012 book Supply Chain Network Design, says that self-driving trucks may change the economics of supply chains in ways that could mitigate — but probably not fully offset — the job losses. By reducing the cost of transportation, self-driving trucks might lead manufacturers to build more warehouses so they can give customers faster deliveries. "A large manufacturer may have only two to five warehouses in the United States," Watson says. "One of the reasons is that it's expensive to store inventory in these facilities. And it's expensive to ship products to the warehouses. But if the transportation costs get cheaper with self-driving trucks, I can have a lot of little warehouses around the country and provide better service." That, in turn, creates jobs in local delivery.

Moreover, Watson says, many of the new short-haul jobs would likely be higher-value-added jobs, interacting with customers and collecting intelligence. According to the BLS, today's delivery drivers and driver/sales workers have a lower median income of $28,000, though that could change depending on how the role evolves. "The analogy is companies like Coca-Cola and Pepsi that make deliveries into the grocery store," he suggests. "When the drivers make a delivery, they're stocking the shelves, making sure their shelves look right. They're also gathering competitive information. So when Coke goes in, they're looking at what Pepsi's doing and passing that information back. More companies will be able to do that when the economics of trucking change."

Self-driving trucks, Watson says, will be only the starting point for changes in the industry. "Amazon's not going to just take the reduced transportation costs and call it a day," he contends. "They're going to use this to change service in a whole new way. Other companies will do the same." EF

Readings

Autor, David H. "Why Are There Still So Many Jobs? The History and Future of Workplace Automation." Journal of Economic Perspectives, Summer 2015, vol. 29, no. 3, pp. 3-30.

Bessen, James. "Toil and Technology." Finance & Development, March 2015, pp. 16-19.
Wardrip, Keith, Kyle Fee, Lisa Nelson, and Stuart Andreason. "Identifying Opportunity Occupations in the Nation's Largest Metropolitan Economies." Federal Reserve Banks of Philadelphia, Cleveland, and Atlanta, September 2015.

Too Small To Succeed?
The hard facts of education economics are putting some small colleges at risk
BY HELEN FESSENDEN

In March 2015, the administrators of Sweet Briar College, a bucolic women's college near Lynchburg, Va., needed to make a major announcement. Gathering students in the main auditorium, the school officials dropped a bombshell: The board had voted to close the college due to ongoing financial pressures. They had just one technical glitch — their microphones weren't working. While students were struggling to hear the announcement, the press release had already gone out, so many saw the news on their phones instead.

"It was totally chaotic," recalls Holly Rueger, now a senior. "Hundreds of students began crying, no one knew what was going on, and the press was already gathering outside. We were in shock."

The news spread almost instantly among the school's devoted alumnae. Within a week, a massive fundraising effort had begun, ultimately bringing in almost $22 million over the next two years. That infusion, backed by a legal settlement, helped the college hang on, albeit with a reduced staff and student body. Under new leadership, it's now channeling the fundraising support into a longer-term survival strategy.

Sweet Briar's plight generated media attention due to its storied reputation and the energetic alumnae response. But the episode — coming amid closures or near closures of other small, cash-strapped schools — has contributed to a growing debate among education experts on whether a college can in fact be too small to survive.

[Photo: The Sweet Briar campus and historic marker.]

Market Pressures
The conventional wisdom is that today's students prefer larger schools, especially in more urban settings, because those institutions offer more in the way of amenities, choice of studies, and internships and job opportunities around them. So as demand shifts, small schools will suffer.

And the evidence does point to increasing pressures on small colleges — well after the Great Recession. From the academic years 2010-2011 to 2014-2015, full-time equivalent undergraduate enrollment at four-year institutions (both public and private nonprofit) rose 3.7 percent, from about 7.63 million to 7.91 million. But enrollment at small four-year colleges — those with 1,000 students or fewer — dropped about 15 percent, from about 227,000 to 193,000. According to a 2015 report by Moody's Investors Service, which issues financial ratings for hundreds of colleges and universities, small schools are also experiencing slowing revenue growth. In 2010, about 30 percent of small private colleges (which it defined as running annual operating costs of $100 million or less) had annual revenue growth under 2 percent. By 2014, that share had risen to more than 50 percent. Moody's has also projected an uptick in closures, although historically the closure rate tends to fluctuate — and outright closures are rare. (See chart.)

[Chart: College Closures Over Time — closures per year at private and public four-year institutions, 1990-2014. NOTES: The institutions in the sample are four-year degree-granting schools (classified as institutions of higher learning before 1995-1996). Institutions that merged are not counted as those that closed. Data are through 2014. For a list of recent college closures in the Richmond Fed's district, see the article online at www.richmondfed.org/publications/research/econ_focus/. SOURCES: U.S. Department of Education, National Center for Education Statistics; National Bureau of Economic Research]
The tally of closures in any given year is less than 1 percent of the number of public and private four-year institutions, which is around 2,300.

Experts note that the trend of financial stress is largely confined to private, nonprofit institutions. Public schools, despite budget cuts in recent years, rarely close because they still can count on state and federal support on a relatively predictable schedule. Highly selective private schools also have better financial health, on average, because they tend to reap more endowment income, post higher retention and graduation rates, and generally don't have to worry about revenue dropping off due to enrollment declines. (There is also the matter of for-profit private schools, which have been closing at a much higher rate in recent years, but this is due to legal challenges and federal policy changes.)

The vast majority of small nonprofit private colleges, by contrast, are not highly selective. At the same time, they're extremely tuition-dependent, which leaves them more vulnerable when they suffer a drop in enrollment. A school's tuition dependency ratio is the share of revenue that comes from tuition, as opposed to public funds, investment income, or other sources. According to Moody's, the smallest colleges have an average tuition dependency ratio of 75 percent; a typical private nonprofit college, by contrast, draws between 30 and 40 percent of its revenue from tuition. And women's colleges and historically black colleges and universities (HBCUs) are in an especially tight corner: They face a shrinking pool of prospective students as educational opportunities for these once-excluded groups have expanded broadly.

"The small nonprofit private schools are on the edge of the free market," says Kevin Carey, an education expert with New America, a Washington, D.C., think tank. "They have to figure out a way to survive mainly off of tuition. They don't need to make more money than what is needed to fill classrooms and dorms, but they can't make less."

A Risky Model
The particular risks of size and tuition dependency have dominated the research on what puts an institution at risk. For example, a 2009 working paper by Iowa State University researchers analyzed a sample of 824 private schools from 1975-2005 to find some common vulnerabilities in the 11 percent of the institutions that closed over those three decades. In terms of resources, the biggest risk factors (holding other factors constant) were student body size and endowment per student — in both cases, the smaller the number, the greater the risk. The paper noted that small schools are especially disadvantaged in that they don't enjoy the same economies of scale that larger schools do — for example, by dispersing the burden of a fixed cost upon a bigger student population. Selectivity also played a major role in long-term financial health.
But once other risk factors were accounted for, it didn't matter to a school's stability whether it had a liberal arts focus or a professional one, perhaps because many students who attend nominally liberal arts colleges still pursue professional degrees. Single-sex status also didn't matter once the researchers adjusted for the common risk factors — it was just that many of the women's schools that closed or merged in that sample happened to be small and cash-strapped to start with.

Other researchers have highlighted similar risk factors. A 2013 Vanderbilt University comparative study by then-doctoral students Dawn Lyken-Segosebe and Justin Cole Shepherd took a more recent sample of school closures (2004 to 2013), pointing out that those affected schools, totaling 57, shared features such as small enrollment size, low revenue per capita, and tuition dependency. The researchers noted that tuition dependency poses an especially high risk for schools that face a downturn in enrollment or that have to tackle a major expense like capital improvements, because they lack the buffer of public appropriations or investment income. Noting that a fairly high number of closed institutions (14) had a religious affiliation, the authors suggested that this feature may in fact be a more recent risk factor as well. This finding would contrast with other research suggesting that religious schools are generally financially stronger due to an "enrollment advantage" of more dedicated students. The effects of the Great Recession may have overridden this advantage by making such students more willing to consider cheaper alternatives, according to the authors.

Another common feature that troubled institutions exhibit is a sudden and substantial jump in tuition "discounting." It's become common practice for almost all schools — whether private or public, financially healthy or not — to reduce the tuition sticker price through a mix of financial aid and work-study programs. But if a school suffers from a drop in enrollment and tries to recruit and retain students more aggressively, it will often try to do so by sharply increasing the discount without necessarily finding offsetting funds elsewhere. According to the National Association of College and University Business Officers, the average "discount rate" for undergraduates at private colleges has risen substantially in the past decade, from around 35 percent to almost 43 percent, consistent with the trend of increasing financial strains for certain schools.
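The interaction of discounting and tuition dependency is easier to see with stylized numbers. In the sketch below, the enrollment and sticker price are invented; only the 35 and 43 percent discount rates and the 75 percent dependency ratio come from the figures cited above:

```python
# Stylized sketch of how a rising discount rate squeezes a tuition-dependent
# college. Enrollment and sticker price are invented; only the discount rates
# and the 75 percent dependency ratio come from the figures cited above.

students = 1_000
sticker = 40_000                   # published annual tuition, hypothetical

def net_tuition(discount_rate):
    # Discounting returns part of the sticker price to students as aid.
    return students * sticker * (1 - discount_rate)

before = net_tuition(0.35)         # roughly the decade-ago average discount
after = net_tuition(0.43)          # roughly the recent average discount
print(round(before - after))       # 3200000 -> $3.2 million in lost net tuition

# For a school drawing 75 percent of its revenue from tuition, that loss
# is a meaningful share of the entire budget unless offset elsewhere:
total_revenue = before / 0.75
print(f"{(before - after) / total_revenue:.1%}")   # about 9.2% of revenue
```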
A Lucrative 'Ace'
As they face these challenges, some schools are seeking new and sustainable revenue sources while trying to monetize their "niche" qualities. In the Richmond Fed's district, one of these colleges, Emory & Henry College, checks the boxes on some of the risk factors noted above. It's a small liberal arts college (around 1,000 students, with many from low-income families) and was discounting its tuition at a relatively high rate of about 50 percent to stave off declining enrollment. It also happens to be in an economically hard-hit corner of Appalachia, in rural southwest Virginia. "What we needed," says President Jake Schrum, "was a new ace in the hole."

This ace, his administration decided, would be to build on an idea proposed by his predecessor: establishing new graduate-professional programs in the health sciences for occupational therapy, physical therapy, and physician's assistant training. So in 2016, the school finished a $20 million project to refurbish an empty hospital in Marion, Va., while beginning to admit students for two of the three programs. By next fall, Schrum expects close to 180 students will be enrolled, each paying $30,000 annually in tuition and graduating with sought-after professional degrees. "This region is aging and economically challenged, and there's a desperate need for more medical care," says Schrum. "Our strategy hits the sweet spot of generating income for the school while serving the communities around us."

The administration hopes this new revenue stream will not just help the professional programs but provide some financial support for the programs on the main campus to help retain students through mentoring and keeping tuition affordable for those who need it. "This is the turnaround year," Schrum says. "Next year, we expect to break even."

The school is also devoting other resources toward boosting its retention rates. This strategy is meant to help students launch into professional life, but it's also important for the school's finances by maintaining tuition inflows. As part of this effort, the school closely involves parents to keep students focused on graduation and finding a job. Schrum notes most students — almost two-thirds — are the first in their families to go to college and therefore are more likely to drop out. And roughly 40 percent get federal Pell Grants, which indicates a large share from low-income families. Yet the school's six-year graduation rate (54 percent) is not too far below the national average for private nonprofit schools (64 percent) despite its more vulnerable demographic profile.

Saving Sweet Briar
Sweet Briar, like many other women's colleges, has grappled with declining demand for years. Only around 2 percent of college graduates today attend a single-sex college. From 1960 to 2015, the number of women's colleges in the United States and Canada plummeted from 230 to 47, with many merging with all-male schools or going coed. Despite their small numbers, however, their graduates have been found to outperform and outearn other women when it comes to professional advancement, even when controlling for family income, school selectivity, and other variables. Graduates of women's schools make up 20 percent of all women in Congress and more than 33 percent of female members of Fortune 1000 boards, for example.

As with selective small coed colleges, some well-known women's schools (like the remaining members of the "Seven Sisters" in New England and the mid-Atlantic) flourish in terms of recruitment and finances. But Sweet Briar, a relatively isolated campus, found itself losing students and falling into the same revenue trap as many others. By 2014, undergraduate degree-seeking enrollment had fallen to 561 from 647 in 2008, while the rate of tuition discounting jumped from about 41 percent to 57 percent. It channeled more money into upgrading its facilities, but that failed to boost its numbers. These factors all came together in early 2015 when its board voted for closure — even though the school had a relatively healthy endowment of $85 million at the time.

Galvanized, its alumnae immediately began a "Saving Sweet Briar" campaign that has so far kept the school afloat. In summer 2015, former Bridgewater College President Phillip Stone was brought on as interim president.
After persuading some core faculty to stay on and boosted by the fundraising campaign, the school stayed open with diminished enrollment of around 236 degree-seeking undergraduates and reduced staff. Those numbers rose to 320 students in the fall of 2016, and Stone says he now expects the student body to increase by about 100 each year for the next few years and eventually reach 800. (Stone was succeeded in May 2017 by Meredith Woo, formerly an academic dean of the University of Virginia. He was interviewed for this story while still serving as interim president.)

As part of its turnaround, the school is channeling resources into science, technology, engineering, and math (STEM) majors to market itself as an environment where women can learn to succeed in well-paying, male-dominated fields, says Stone. It is one of only two women's colleges to offer an engineering program, and Google has sent representatives to Sweet Briar in the past few years, including during its Engineering Week this spring. "We're working with more tech firms now that more and more are looking to recruit and promote women," he says. "This will be a very big part of our strategy looking ahead."

As for new and sustained revenue, the school is considering multiple approaches. Stone notes that one strategy is to recruit more foreign students, who are more likely to pay full tuition. Stone's goal is to increase their numbers to around 10 percent to 15 percent of the student body. On the horizon, Stone also envisions new revenue-building master's degree offerings to leverage Sweet Briar's natural setting: conservation and environmental science. These professional degrees, he suggests, may be open to both men and women.

Changing Students, New Missions
Historically black colleges and universities have long been recognized for their outsized role in producing black leaders in law, medicine, engineering, and science. Access and relative economic mobility, especially for lower-income students, have historically been selling points of HBCUs. These schools, which were established as the only alternative for blacks when the vast majority of colleges and universities were all-white, are located predominantly in the South and mid-Atlantic, and a third of all HBCUs are in the Richmond Fed's district. (See "Knowledge=Power," Region Focus, Summer 2004.)

But they, too, have to compete harder than they used to for students and are facing growing financial strains and dropping enrollment share. From 1976 to 2014, the share of black students enrolled at HBCUs dropped from 18 to 8 percent in the wake of educational desegregation and active competition among non-HBCUs to recruit top black applicants. Today, the number of HBCUs with federal accreditation totals around 100, split between public and private, although both often get many different forms of state and federal money.

Both public and private HBCUs also have a distinctive set of risk factors. First, they tend to have a higher share of lower-income students on federal aid, such as Pell Grants, and this source of support is more likely to vary over the years because it's subject to annual congressional appropriations. If the amount of aid falls or tuition rises, many of these students are likely to switch to community colleges. Moreover, a substantial share of HBCUs — about half — are small, with fewer than 2,000 students enrolled.
Finally, retention is a challenge, especially for those who are the first in their families to attend college; among these students, a higher dropout rate feeds into the revenue strains. The combination of all these factors could make the financial dilemma at HBCUs more acute.

"The spiraling cost of education has pushed many students who might otherwise go to HBCUs to community colleges," agrees Johnny Taylor Jr., president and CEO of the Thurgood Marshall College Fund, an organization in Washington, D.C., that supports and represents public HBCUs. "For HBCUs to adapt, they need to make the case to prospective students that they offer an affordable education that leads to a good job."

One course of adaptation for many HBCUs is expanding their student pool with other minority students — notably Latino — as well as those from abroad. Today, about 20 percent of students at HBCUs are non-black. This strategy, however, has sometimes come under criticism for changing the character and mission of HBCUs.

More broadly, Taylor describes the overall climate for HBCUs today as "very challenging." But he also notes some examples of HBCUs that are innovating with new revenue streams and strategies to keep enrollment steady. In North Carolina, for example, Fayetteville State University has expanded its online programs so that the large (and mobile) military-base population around it can take fuller advantage of its offerings, including part-time and professional certification programs.

The Utility of College
Stephen Porter, a professor of education at North Carolina State University and co-author of the Iowa State University study, believes prospective students have been evolving in their views of a college education in a way that has affected small schools in particular, well after the Great Recession. "Students and parents are both much more price sensitive than five or 10 years ago," he says. "This probably has a lot to do with rising tuition at both private and public schools and rising student debt. Even though many private schools discount a lot, they're seen as expensive."

Now more than ever, he notes, "a student's selection of a particular college is shaped by how that decision will lead him or her to a career. If a school has a high nominal price tag but isn't selective, and doesn't have programs and support networks to lead you to a job, then it's at a disadvantage."

These trends can be seen in one of the most comprehensive education surveys in the United States, "The American Freshman," published annually by the Cooperative Institutional Research Program at the Higher Education Research Institute at the University of California, Los Angeles. When high school seniors were asked why they selected their particular college over others, 60 percent in the most recent survey (2015) answered it was because its graduates "get good jobs." That share was up 5 percentage points in just three years and was also the highest ever for that question, which has been asked since the 1960s.

Do these converging trends mean that small schools will eventually become obsolete? Carey, of New America, sees potential for many of these schools to turn around, especially by expanding their digital programs and bringing in a broader array of students who can benefit from them. "A school can keep a small and intimate campus for those who want it and still reach thousands more across the country," he notes.
"But for many of these small institutions, whatever they do, they need to go beyond their traditional model to stay viable." EF

Readings

Eagan, Kevin, Ellen Bara Stolzenberg, Abigail K. Bates, Melissa C. Aragon, Maria Ramirez Suchard, and Cecilia Rios-Aguilar. "The American Freshman: National Norms Fall 2015." Cooperative Institutional Research Program, Higher Education Research Institute at University of California, Los Angeles, February 2016.

Lyken-Segosebe, Dawn, and Justin Cole Shepherd. "Learning from Closed Institutions: Indicators of Risk for Small Private Colleges and Universities." Working paper prepared for the Tennessee Independent Colleges and Universities Association, Vanderbilt University, July 2013.

Porter, Stephen R., and Trina J. Ramirez. "Why Do Colleges Fail? An Analysis of College and University Closings and Mergers, 1975-2005." Iowa State University Working Paper, 2009.

Price, Gregory N., William Spriggs, and Omari H. Swinton. "The Relative Returns to Graduating from a Historically Black College/University: Propensity Score Matching Estimates from the National Survey of Black Americans." Review of Black Political Economy, June 2011, vol. 38, no. 2, pp. 103-130.

The Fed's "Tequila Crisis" (continued from page 5)

"... maybe everyone will forget about it, but I don't think so."

"They will if it works and they won't if it does not work," Chairman Alan Greenspan responded. The FOMC voted in favor of the swap with the Treasury, with Melzer and Lindsey opposing. (Broaddus was not a voting member in 1995, but he too voiced opposition to the arrangement at the meeting.)

A Pyrrhic Success?
The operation accomplished its immediate goals. President Clinton authorized the $20 billion loan from the ESF on Jan. 31, 1995. An additional $17.8 billion from the IMF and $10 billion from the Bank for International Settlements brought the total aid package up to nearly $50 billion. With this assistance, Mexico was able to meet its obligations and avoid default, but it did suffer a severe recession. Eventually, its economy recovered and it repaid its loans in full and ahead of schedule.

Still, the event raised a number of lasting questions. Intervening to prevent the default of companies or countries creates a moral hazard problem; international investors might take larger and larger risks in the future if they believe they are protected from the consequences of failure. The 1995 intervention was more than 10 times the size of the loans made to Mexico in 1982. And just two years later, the international community would fund a $118 billion loan to Thailand, Indonesia, and South Korea to prevent another crisis.

The Mexico intervention also raised serious questions for the Fed. The Treasury ultimately never called on the Fed to swap its foreign currencies with dollars to finance the loan to Mexico, but the event still sparked a discussion about how such operations might affect its credibility and independence. In the late 1990s, the FOMC voted to close nearly all of the Fed's swap lines. The decision was short-lived, however. During the financial crisis of 2007-2008 and the subsequent debt crises in Europe, the Fed revived them to provide foreign central banks with dollar liquidity. Continuing the Richmond Fed tradition, then-Richmond Fed President Jeffrey Lacker dissented against the swap arrangements in 2011, reiterating the argument that they amounted to fiscal policy.
"I think Richmond has done a good job keeping this issue in front of the FOMC for a long time, but I can't say we've completely sold them on it," says Broaddus. "That's still a work in progress. And it may always be." EF

Readings

Bordo, Michael D., Owen F. Humpage, and Anna J. Schwartz. Strained Relations: U.S. Foreign-Exchange Operations and Monetary Policy in the Twentieth Century. Chicago: University of Chicago Press, 2015.

Boughton, James M. "Tequila Hangover: The Mexican Peso Crisis and Its Aftermath." In Tearing Down Walls: The International Monetary Fund 1990-1999. Washington, D.C.: International Monetary Fund, 2012.

Broaddus Jr., J. Alfred, and Marvin Goodfriend. "Foreign Exchange Operations and the Federal Reserve." Federal Reserve Bank of Richmond Economic Quarterly, Winter 1996, vol. 82, no. 1, pp. 1-19.

Musacchio, Aldo. "Mexico's Financial Crisis of 1994-1995." Harvard Business School Working Paper No. 12-101, May 2012.

Rubin, Robert E., and Jacob Weisberg. In an Uncertain World. New York: Random House, 2003.

POLICYUPDATE

Fighting Fund Runs
BY TIM SABLIK

Last October, the Securities and Exchange Commission (SEC) adopted a new rule governing the assets held by open-end mutual funds and exchange-traded funds (ETFs). (Money market funds, another type of mutual fund, are subject to a different SEC rule that took effect last fall.) These funds have become increasingly popular investment choices for households in recent years. According to the SEC, some 44.1 percent of all U.S. households owned shares in open-end funds as of 2015.

Open-end funds allow investors to sell their shares back to the mutual fund — that is, redeem them — at the end of any trading day. (Closed-end funds, by contrast, do not allow investors to sell shares back to the fund after the initial purchase.) ETFs are also considered open-end funds, but their shares are generally traded on a stock exchange rather than bought and sold from the fund directly. Only authorized participants can purchase or redeem shares from an ETF directly, and these participants are typically large financial institutions that deal in large blocks of thousands of shares at a time.

According to the SEC, the new rule is intended to protect investors and address developments in open-end funds that may have increased their liquidity risk. Over the last decade, alternative mutual funds and ETFs have grown considerably: Their total assets jumped nearly a thousand-fold, from $365 million in 2005 to $334 billion in 2014. These funds tend to invest in nontraditional and more illiquid assets, such as global real estate or commodities, while still pledging to redeem shares on demand.

The fact that investors in open-end funds can redeem their shares on demand could pose a problem for some funds. On one hand, a fund needs enough cash on hand — or "liquid" assets that can quickly and easily be converted to cash — to satisfy redemption requests from investors. The Investment Company Act of 1940 requires that funds process redemption requests within seven days, though in practice many funds today pledge to make payments as soon as the next business day. On the other hand, many funds also choose to invest in long-term assets. These types of assets are difficult to liquidate quickly for full value, however, leading to an inherent tension in how funds manage their assets.
Even if a fund holds mostly assets that can be sold relatively easily, like publicly traded stocks or bonds, it may run into trouble if it does not have enough cash on hand to handle redemptions. When a fund's portfolio is sustaining losses, many investors may decide to redeem their shares at the same time. Without enough cash, the fund may need to sell some of the assets from its portfolio to honor the redemption requests. That may require selling less liquid assets at a steep discount, depressing the value of the remaining assets in the fund's portfolio and prompting more investors to redeem their shares. Investors who redeem their shares first suffer no losses until the fund's cash is exhausted, and those who sell soon after the cash is gone suffer smaller losses than those who wait. This encourages all investors to cash out of a fund at the first sign of trouble, making it more likely that a fund's liquid assets are overwhelmed.

Liquidity risk has garnered a lot of attention from financial regulators since the 2007-2008 crisis, and they have adopted rules requiring banks and other financial firms to maintain greater liquidity buffers. (See "Liquidity Requirements and the Lender of Last Resort," Econ Focus, Fourth Quarter 2015.) The new SEC rule for mutual funds and ETFs is very similar to these other post-crisis measures. Funds must classify their assets based on how long it would take to convert them into cash without altering their market value. Each fund must hold some minimum fraction of its net assets in cash or highly liquid investments (convertible into cash within three business days without significant loss of value) and no more than 15 percent of its net assets in illiquid investments (which can't be sold within seven days without significant loss). The 15 percent limit on illiquid assets was previously an informal guideline from the SEC, and the new rule makes it official. Funds must disclose their liquidity positions and plans to their board and the SEC as well as report when they breach their liquid or illiquid asset thresholds.
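In outline, the rule's core tests reduce to two ratios against net assets. The sketch below is a deliberate simplification with hypothetical names and figures; the actual rule's classification buckets, the fund-chosen highly liquid minimum, and the reporting forms are more detailed:

```python
# Sketch of the rule's two threshold tests. Function and variable names are
# hypothetical, and this simplifies the rule: the SEC's actual scheme has
# more classification buckets and detailed reporting requirements.

def check_liquidity(net_assets, highly_liquid, illiquid, hl_minimum=0.10):
    """Flag breaches of the fund's highly liquid minimum and the 15% illiquid cap.

    highly_liquid: assets convertible to cash within three business days
                   without significant loss of value
    illiquid:      assets that can't be sold within seven days without
                   significant loss
    hl_minimum:    the fund's own chosen minimum (the rule does not fix one)
    """
    breaches = []
    if highly_liquid / net_assets < hl_minimum:
        breaches.append("below highly liquid investment minimum")
    if illiquid / net_assets > 0.15:
        breaches.append("above 15 percent illiquid cap")
    return breaches  # any breach must be reported to the board and the SEC

# Hypothetical fund: $500 million in net assets, $40 million highly liquid,
# $90 million illiquid -> breaches both thresholds.
print(check_liquidity(500e6, highly_liquid=40e6, illiquid=90e6))
```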
Empirical evidence supports the assumption that funds holding more illiquid assets are more susceptible to runs by their investors during times of stress. In a 2010 Journal of Financial Economics article, Qi Chen of Duke University, Itay Goldstein of the University of Pennsylvania, and Wei Jiang of Columbia University looked at data on equity mutual funds between 1995 and 2005. They found that funds that were more illiquid were more likely to suffer increased redemptions by noninstitutional investors during a period of stress: The fear of being the last one out drove investors to run for the exits. Interestingly, the authors also found that illiquid funds held by large institutional investors were not as prone to increased redemptions due to bad performance. Still, they suggested that funds investing in illiquid assets might be better off operating as closed-end funds in order to avoid the problem of outflows altogether.

The new SEC rule goes into effect on Dec. 1, 2018, for funds with $1 billion or more in net assets and on June 1, 2019, for smaller funds. EF

INTERVIEW

Janet Currie

Editor's Note: This is an abbreviated version of EF's conversation with Janet Currie. For additional content, go to our website: www.richmondfed.org/publications

Princeton University economist Janet Currie began her career studying collective bargaining and arbitration systems. "But as I got further along in my career and started thinking about what I really wanted to do," she says, "I realized I wanted to work on a question that everyone agrees is important: How can society improve children's well-being? Most of my research since then has been motivated by the factors that affect children." Those factors are extremely varied; her work has looked at issues as diverse as pollution, prescription drugs, and school meal programs. In the process, she has made major contributions to our understanding of the effects of social safety net programs, the links between socioeconomic status and health, and the intergenerational transmission of health and human capital. More recently, Currie has studied the legal and economic forces that govern the health care system, including how those forces might influence access to care for different groups.

Over the course of her career, Currie has gained a reputation for answering longstanding questions in innovative ways, such as using the introduction of E-ZPass highway tolls to study the effects of pollution or comparing data on hurricanes and births to understand the impact of maternal stress. In addition to being the Henry Putnam Professor of Economics and Public Affairs at Princeton, she is the co-director of the university's Center for Health and Wellbeing and chair of the economics department. Currie also co-directs the Program on Children at the National Bureau of Economic Research and is a member of the National Academy of Medicine and of the American Academy of Arts and Sciences. Jessie Romero interviewed her at her office at Princeton in February 2017.

EF: Regardless of the topic, a common element in much of your research is using a novel approach or dataset to study questions where the possibility of reverse causation or omitted variables, for example, has made it difficult for other researchers to tease out cause and effect. Is that intentional?

Currie: I wouldn't say that my intention is to be novel, necessarily. But much of my work has focused on the environmental factors and social programs that affect women and children, and it is often the case that those are the kinds of problems to be overcome in trying to figure out whether something works or not. A classic example is Head Start. Almost all the kids in Head Start are poor, so if you just compare their outcomes to other children's outcomes, they're worse, which might lead you to think the program isn't working. But the question is, what is that counterfactual? Is the program actually helping them to do better than they would have otherwise? I did do some early work on Head Start and found that it closed about one-third of the gap between Head Start kids and other kids. That seems to have been verified in subsequent research.

EF: You mentioned environmental factors, and you've done a lot of research on the effects of pollution. How can economics inform the study of pollution?
Because economics emphasizes both the costs and the benefits of the activity, it can help us think about useful approaches to regulation. One approach is very legalistic: We just forbid people to engage in a certain activity. But that ignores the fact that in some circumstances, there might be some benefit to the activity. A more economic approach would be to try to get people to weigh those costs and benefits themselves, for example by making the polluter pay for part of the costs of the cleanup. Environmental protections can be viewed very much in terms of who has the right to do what. Do I have the right to breathe clean air? Or do you have the right to use the air to produce whatever it is you want? The law is supposed to decide. One way to decide could be based purely on economic grounds, and in some places the cost of giving people clean air is going to be very high and in other places it’s going to be low. It depends on the baseline: If you start fracking in a national park, that has a high cost in terms of degrading the environment. If you start fracking in an area where they’ve been drilling for oil and gas for 100 years, the costs are much lower. A purely economic view might be that your rights should depend on the cost of providing them. But you can also argue that everyone should have the right to clean air; someone might have an absolute right to something even if the short-run costs, at least, are higher than the benefits of giving them that right. EF: Is there a relationship between socioeconomic status and exposure to pollution? Currie: There is a large environmental justice literature arguing that low-income and minority people are more likely to be exposed to a whole range of pollutants, and that turns out to be remarkably true for almost any pollutant I’ve looked at. A lot of that has to do with housing segregation; areas that have a lot of pollution are not very desirable to live in so they cost less, and people who don’t have a lot of money end up living there. It also seems to be the case, at least some of the time, that low-income people exposed to the same level of pollutants as higher-income people suffer more harm, because higher-income people can take measures to protect themselves. Think about air pollution. If I live in a polluted place but I have a relatively high income, maybe I have better-quality windows so I have less air coming in, or I can afford to have air purifiers, or I can afford to run my air conditioner. It could even be the case that lower-income people are more vulnerable to the effects of pollution in the first place. For example, someone who is malnourished is more likely to absorb lead than someone who is not malnourished. So people who are better nourished may be better able physiologically to protect themselves against the effects of pollutants.  EF: You’ve also found that the current and future effects of climate change vary with socioeconomic status, especially if one compares developed and developing countries. Does that mean wealthy Americans don’t need to worry? Currie: Wealthy Americans will likely be impacted less, but that doesn’t mean that they won’t be impacted at all. First, if things like polar bears and coral reefs totally disappear from the world, presumably that represents a loss to us as well as to other people. But we’re also likely to see a higher prevalence of natural disasters, such as the catastrophic rains in California or the fact that many neighborhoods in Florida are effectively sinking. 
We all face a higher probability of extreme weather that could damage our homes or cause other losses. Now, you could say that if you live in Minnesota, a warming climate means your weather is actually going to be much more pleasant. But even if a natural disaster is in a different part of the country, we all pay when the government has to come in and help the people who were affected. And we may all end up paying more for food and for the costs of remediation when we finally realize that climate change and environmental degradation are important problems.

EF: You've also studied how socioeconomic status affects parental investment in children.

Currie: An investment is something where you pay now and get a return later. We end up doing a lot of things for our kids that are not necessarily all that pleasant, such as helping them with their homework or disciplining them. And we do the things that are costly now because we expect some payoff in the future: We want them to graduate from high school, to go to college, to get a good job, to be well-behaved people.

One of the key questions in the area of child and family economics is why parents make the choices they do. There is a tendency to think it's the result of preferences; if one parent chooses to spend a lot of time on education and another parent doesn't, then perhaps those parents just value education differently. But it's important to realize that when we make investment choices, we make them subject to constraints, and different people have different constraints. For example, maybe a single mom doesn't spend as much time doing homework with her children as another mother because she's working 12 hours a day and has a long commute to her job. An interesting question is, if you change people's constraints, to what extent will you change their investment behavior?

In addition to resource constraints, people may face social constraints as well. In some developing countries, women aren't allowed to work or even allowed to go outside the home without an escort. So parents have less incentive to invest in their daughters' educations, because their daughters may not be able to reap the rewards of an education. Now, if you change those constraints, that might also change parents' choices about whether or not it's worthwhile to educate their daughters. Similarly, here in the United States, for many years disabled people were kept out of the public eye and no one expected they would be able to work, which meant there was less incentive to invest in their education. But as those barriers have come down, opportunities have opened up that change people's incentive to invest.

Janet Currie
➤ Present Positions: Henry Putnam Professor of Economics and Public Affairs, Princeton University; Chair, Department of Economics, Princeton University; Co-Director, Center for Health and Wellbeing, Princeton University; Co-Director, National Bureau of Economic Research's Program on Children
➤ Selected Past Positions: Columbia University (2006-2011); University of California, Los Angeles (1988-1991, 1993-2005); Massachusetts Institute of Technology (1991-1993)
➤ Education: Ph.D. (1988), Princeton University; M.A. (1983), University of Toronto; B.A. (1982), University of Toronto
➤ Selected Publications: "Diagnosing Expertise: Human Capital, Decision Making and Performance Among Physicians," Journal of Labor Economics, 2017 (with W. Bentley MacLeod); "Inequality in Mortality Decreased Among the Young While Increasing for Older Adults, 1990-2010," Science, 2016 (with Hannes Schwandt); "Environmental Health Risks and Housing Values: Evidence from 1600 Toxic Plant Openings and Closings," American Economic Review, 2015 (with coauthors); "Is There a Link Between Foreclosure and Health?" American Economic Journal: Economic Policy, 2015 (with Erdal Tekin)
EF: How effective are government assistance programs for children, such as nutrition assistance or medical care?

Currie: Many people have argued that these programs aren't working because the poverty rate in the United States has basically been flat for several decades. But the official poverty rate measures cash income before taxes and transfers, so most of the programs we have in place for poor people are not counted. (See "Drawing the Line," Econ Focus, First Quarter 2013.) We give people food stamps, we give people Medicaid, we give people public housing, we give people the Earned Income Tax Credit, and none of those things are counted in the official poverty measure. Essentially, by definition, none of the important things that we do to alleviate poverty can affect the U.S. poverty measure.

If instead you use an alternative poverty measure that counts such programs, you see that those programs have made a big difference in reducing poverty. The next question to ask is, does that have any impact on other indicators of well-being? And I would say yes. Many of these programs have been very well studied, and there is quite a lot of evidence that they have positive impacts. Over the past 20 years we have seen large declines in child mortality, injury rates, crime, and teen pregnancy, to name just a few domains. And we've seen an increase in the number of young adults who've gotten any college education. There are a lot of indicators showing positive movement, and I think we can attribute that to the investments that we've been making in children.
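Currie's point about the official measure is mechanical, and a small illustration makes it concrete. The sketch below, in Python, compares an official-style measure that counts only pretax cash income with an alternative that also counts in-kind transfers and tax credits; every dollar figure in it is hypothetical rather than an actual program parameter or threshold.

    # Stylized comparison of an official-style poverty measure (pretax cash
    # income only) with an alternative measure that also counts in-kind
    # transfers and tax credits. All dollar figures are hypothetical.

    POVERTY_LINE = 24_000  # hypothetical annual threshold for this family

    family = {
        "cash_income": 21_000,       # counted by the official measure
        "snap_benefits": 3_000,      # food stamps: not counted officially
        "eitc_refund": 2_500,        # Earned Income Tax Credit: not counted
        "housing_subsidy": 1_800,    # public housing benefit: not counted
    }

    official_resources = family["cash_income"]
    alternative_resources = sum(family.values())

    print(f"Official measure:    ${official_resources:,} -> "
          f"{'poor' if official_resources < POVERTY_LINE else 'not poor'}")
    print(f"Alternative measure: ${alternative_resources:,} -> "
          f"{'poor' if alternative_resources < POVERTY_LINE else 'not poor'}")
    # Raising SNAP or the EITC changes the second line but, by construction,
    # can never change the first -- Currie's point about the official rate.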
EF: Many researchers have found that recessions, in particular the Great Recession, have a short-term effect on women's fertility. What did you and Hannes Schwandt find about the long-term effects of recessions on fertility?

Currie: In that paper, we looked at cohorts in the Census over time; a woman who was 10 in 1950 was 20 in 1960 and 30 in 1970, and so on. We also could see how many children the women of different ages had. So we followed each group of women to the point where their fertility would have been completed, and we could see if women who experienced recessions at different ages altered their fertility patterns. Essentially, we followed women across the whole life cycle instead of just making projections based on a point in time.

We knew that you always see a decline in births in a recession. But the unresolved question was, do those births get made up later on, or is there a permanent decline in the number of births? The former is called a tempo effect: I plan to have two kids, and then something causes me to delay my fertility, but I still end up having two kids. There's no change to my completed fertility. For the latter, something could happen that changes my mind about the number of kids I want to have, or my ability to have those kids, and then there is a difference in my completed fertility.

We found that if women experienced a recession in their early 20s, there did seem to be a permanent decline in the number of births. And rather than just having fewer children, these women were less likely to have children at all. (Our data only looked at live births, so we don't know if there was an effect on how many conceptions resulted in termination or miscarriage.) The key factor seemed to be that women who were affected by a recession in their early 20s were less likely to get married; maybe they were looking around for a partner, but then a recession hit and unemployment increased, and none of the potential partners seemed attractive. For women who experienced recessions at other ages, there was a temporary decline in fertility, but the births occurred later.

Distinguishing between tempo effects and a permanent decline is quite important for population projections. It affects planning for schools, forecasting how much money will be coming in to Social Security, or how many people will need to be supported in old age, among other things. If there's a permanent decline, then the population is going to be permanently lower. If it's just a temporary decline, there will be a dip in the population at the time those births are deferred but then a bump up in the population later to make up for it.
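The tempo-versus-permanent distinction can be stated in a few lines of arithmetic. In the stylized Python sketch below, the birth schedules are invented for illustration: one recession scenario defers births from a woman's 20s to her 30s and leaves completed fertility unchanged, while the other removes them outright.

    # Two stylized birth schedules for a cohort of women, by age bracket.
    # Completed fertility is the sum over the whole life cycle -- the
    # quantity Currie and Schwandt measure by following cohorts across
    # censuses rather than projecting from a point in time.

    baseline = {"20s": 1.2, "30s": 0.8, "40s": 0.1}   # births per woman (invented)

    # Tempo effect: births in the 20s are deferred to the 30s and made up.
    tempo = {"20s": 0.9, "30s": 1.1, "40s": 0.1}

    # Permanent decline: the deferred births are never made up.
    permanent = {"20s": 0.9, "30s": 0.8, "40s": 0.1}

    for name, schedule in [("baseline", baseline), ("tempo", tempo),
                           ("permanent", permanent)]:
        print(f"{name:>9}: completed fertility = {sum(schedule.values()):.1f}")

    # A point-in-time projection sees the same dip in both scenarios; only
    # following the cohort to the end reveals whether completed fertility fell.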
EF: The Great Recession is closely linked to the foreclosure crisis that began around 2006. What motivated you to study the effects of foreclosure on health, and what did you find?

Currie: That paper, which I wrote with Erdal Tekin, was part of a broader research agenda on the effects of acute stress. We were looking for events that we thought would be stressful, and foreclosures just leapt out from the newspapers; there were a lot of anecdotal reports about people committing suicide or having heart attacks. To the extent that a really stressful event could affect someone's health, we thought foreclosure would be a good candidate to study.

We found evidence linking increases in foreclosures to an increase in the number of urgent and unscheduled hospital and emergency room visits, at least in part because people appeared to forgo preventive care or to cut back on care for chronic conditions. Of course, it's hard to identify a causal effect of foreclosure, and one thing we looked at was whether we were just picking up the effects of unemployment rather than the effects of foreclosure. But the relationship between foreclosures and hospital visits was strong even at the beginning of the crisis, before unemployment started to increase. Another possibility could be that people with financial problems switch from outpatient providers to emergency rooms, but there was an increase in hospital visits for conditions that would typically require an ER visit in the first place, such as a heart attack or a stroke.

It's also possible that poor health could lead to foreclosure. But the foreclosure crisis was unexpected: Prices were rising, everybody was investing, everybody was buying homes. So it's pretty unlikely that the sudden wave of foreclosures was caused by a sudden wave of health problems among American homeowners.

EF: You've looked at reforms that many states have enacted to the rule of joint and several liability in an effort to curb frivolous or expensive lawsuits. One concern about these reforms is that they will reduce people's incentives to take precautions against harm. Is that what's happened?

Currie: Joint and several liability, or JSL, is essentially the "deep pockets" rule: If multiple parties are found to be liable for the harm caused, the plaintiff can collect damages from one or all of the parties, regardless of how much each one contributed to the harm. So people sue the deep pocket.

A hospital is a good example. When Bentley MacLeod and I first started reading about tort cases related to malpractice during child delivery, one of the things that struck us as bizarre is that they often talked about the nurse: The nurse was sitting in the nurse's station, she didn't come when I called, she didn't call the doctor. We wondered, why are they spending so much time talking about what the nurse did or didn't do? Surely the doctor was the prime mover in deciding treatment? What we eventually realized was, the nurse is the employee of the hospital, whereas doctors are generally working as independent contractors; so if you want to blame the hospital — the deep pocket — you have to tie the nurse to the lawsuit. Most of the time, under JSL, the hospital gets sued and the doctor doesn't. If the hospital pays, legally it can try to recover damages from the doctor, but hospitals hardly ever do that. Essentially, under JSL, the doctors are working in a regime where they're never going to get sued.

JSL reform makes the payment of damages proportional to the contribution to the harm, which makes it more likely the doctor will be sued. And if the doctor is the decisionmaking agent, then in theory that should improve outcomes.

It's similar in the case of accidents. For example, if someone falls because of a loose railing on a stair, they might sue the landlord because the landlord is the deep pocket. But maybe it was the fault of the contractor who installed the railing. Under JSL, the landlord would have to sue the contractor themselves, which gives the contractor less incentive to take precaution than if the contractor could be sued directly. But by making the probability of being sued closer to the probability that you created the harm, JSL reform can improve the incentives of people to take precaution. It looks like that's what has happened; Daniel Carvell, Bentley, and I looked at data on accidental deaths and found that JSL reforms are associated with reductions in the accidental death rate in the United States.
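The mechanics of the two liability rules are simple enough to sketch directly. In the toy Python example below, the damage award, fault shares, and asset figures are all invented; it shows how JSL concentrates payment on the solvent defendant, while a proportionate rule ties each party's exposure to its share of fault.

    # Toy allocation of a damage award under two liability rules.
    # Fault shares and collectible assets are hypothetical.

    DAMAGES = 1_000_000
    defendants = {
        # name: (share_of_fault, collectible_assets)
        "hospital": (0.20, 10_000_000),   # the deep pocket
        "doctor":   (0.80, 200_000),      # independent contractor
    }

    def proportionate(damages, defs):
        """Each defendant owes only its share of fault (post-reform rule)."""
        return {name: min(share * damages, assets)
                for name, (share, assets) in defs.items()}

    def joint_and_several(damages, defs):
        """Plaintiff may collect the full award from any liable defendant;
        in practice the claim lands on the deepest pocket."""
        deep = max(defs, key=lambda n: defs[n][1])
        return {name: (min(damages, assets) if name == deep else 0)
                for name, (share, assets) in defs.items()}

    print("JSL:           ", joint_and_several(DAMAGES, defendants))
    print("Proportionate: ", proportionate(DAMAGES, defendants))
    # Under JSL the doctor pays nothing and so faces little deterrent;
    # proportionate liability ties exposure to the contribution to the harm.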
EF: So the fear of lawsuits appears to make contractors, for example, take more precaution. Does that fear affect doctors' decisionmaking? What other factors influence how they practice?

Currie: In principle, the fear of being sued could impact doctor behavior, as we saw with the JSL example. This is the basis for the idea of "defensive medicine." In fact, though, people are probably too quick to blame fear of lawsuits for doctors' decisions. Most of the time, doctors aren't sued when they make a mistake. When they are, the vast majority of cases are settled out of court, and because doctors have malpractice insurance, it's the insurance company that pays. Doctors' individual premiums aren't experience rated, meaning their premiums aren't affected by lawsuits. I'm sure it's true that doctors don't like to be sued, but both the likelihood of being sued and the cost of being sued seem to be exaggerated as motivators of doctor behavior.

So why do doctors act as they do? One motivator, although maybe not the primary motivator, is that doctors do have an incentive to do more procedures, because the more procedures they do, the more they get paid. If you take your car in for an oil change and the mechanic says you need a new muffler, you might be suspicious. But if you go in for a checkup and the doctor says you need this, that, and the other thing, you will probably be much more trusting. And yet doctors are subject to the same economic forces as mechanics, in the sense that the more things they sell you, the more money they get.

But doctors don't just always do the highest-paying thing. Another factor that seems to be important is training effects. Even within the same hospital, different cohorts of doctors behave differently, which probably reflects what they were trained to view as good or bad. We also see that doctors vary in how responsive they are, meaning how much attention they pay to whether a procedure is appropriate for a particular patient. Doctors also might have more or less experience with various types of patients, which can shape how they behave. We know that experts in general have lots of cognitive biases that might lead them to overweight the possibility of one type of outcome versus another type of outcome, and I think doctors are subject to the same kinds of biases.

Many people are concerned about overtreatment and excessive spending, but the problem is more subtle. Bentley, Jessica Van Parys, and I studied heart attack patients admitted to emergency rooms in Florida. We found large differences in how doctors allocated procedures across patients; some doctors were much less likely to use aggressive treatments with older or sicker patients who might have been deemed less appropriate candidates for the treatment. Young, male doctors who trained at a top-20 medical school were the most likely to treat all patients aggressively, regardless of how appropriate the patient seemed to be. In the case of heart attacks, it appears that all patients have better outcomes with more aggressive treatment, so treating only the "high-appropriateness" patients aggressively harms the "low-appropriateness" patients.

Similarly, many people are concerned that U.S. doctors perform too many C-sections. But actually, in another paper, Bentley and I found that it looks like too many women with low-risk pregnancies receive C-sections, while not enough women with high-risk pregnancies receive C-sections. So the goal shouldn't necessarily be to reduce the total number of C-sections but rather to reallocate them from low-risk to high-risk pregnancies.

EF: In a recent paper with Diane Alexander, you found that publicly insured children are less likely to be admitted to the hospital than privately insured children. Is that cause for concern?

Currie: Not necessarily, because what we found was that most of the kids didn't need to be admitted. For example, many children came into the emergency room with asthma attacks. The doctor would give them the medicine they needed in the ER and then, for well-insured children, admit them. They wouldn't receive any additional treatment, and then they would go home in the next day or two. You might think, no harm done. But it's very expensive, it is disruptive to the child and the family, and there is always the risk of infection or some other injury in the hospital.
So it’s not necessarily a good thing to admit children to the hospital just because their health insurance company will pay for it. EF: What are you working on now? Currie: Recently, I’ve been looking at the effects of lead exposure. Anna Aizer, Peter Simon, Patrick Vivier, and I just had a paper accepted where we looked at the effect of small levels of blood lead on children’s test scores in Rhode Island. Rhode Island is interesting because they have a very comprehensive lead testing program, and it’s possible to link the lead test data to data from the public schools. There were some policy changes that caused differences in lead levels among children, so we were able to see the effects of low levels of lead on academic outcomes. In short, we found that reducing blood lead levels even from very low levels has positive effects on children’s reading scores. I’m working on another paper with Anna Aizer on the relationship between lead and crime, also using Rhode Island data. There, we’re taking advantage of the fact that people who lived close to busy roads before gasoline was deleaded were exposed to a lot of lead, while people who lived farther away from busy roads, or who lived near busy roads after gasoline was deleaded, got less exposure. That’s allowing us to study how lead exposure affects disciplinary problems in the schools and juvenile incarceration. EF: Which economists have had the greatest influence on your work? Currie: I think the people who have the greatest influence are the ones you meet when you’re young. So I would have to give the credit (or the blame) to people such as my thesis advisers, Orley Ashenfelter, David Card, and Angus Deaton. I really liked that in Angus’ Nobel Prize lecture [in 2015], he emphasized the importance of measurement and of learning facts about the world. I was glad to see that process recognized as an important part of economic research. When I went to UCLA, Finis Welch was my senior professor, and he was the kind of person who really made you think. He challenged all my assumptions and that was very good for me. And then I moved to MIT for a time and was fortunate to have Jim Poterba and Hank Farber as mentors. I’m very lucky to have had people who looked out for me, challenged me, and helped me get where I am today. EF  ECONOMICHISTORY A Level of Concern  Lead paint was known to be toxic in the early 1900s, but it wasn’t banned in the United States until 1978 — a delay with grave consequences BY J E S S I E RO M E RO  M  ore than two years after testing first revealed elevated lead levels in the water in Flint, Mich., the city’s residents — the majority of whom are black, and 40 percent of whom live below the federal poverty line — still can’t drink their tap water without a special filter. By most accounts, the crisis began in April 2014, when the city began using highly corrosive water from the Flint River instead of from Lake Huron, part of an effort to reduce a multimillion-dollar budget deficit. But the problem actually dates back to the city’s early days, when the water distribution system was built with lead pipes. Today, Flint is trying to come up with the $80 million that engineers estimate it will cost to replace the city’s pipes. Lead is highly toxic; exposure can cause sterility, miscarriages, joint and muscle pain, and memory loss, among other symptoms. Children are especially susceptible to lead’s effects and can suffer comas, convulsions, or death at high levels of concentration in their blood. 
In recent decades, researchers have linked even low blood lead levels to long-term cognitive and behavioral problems and to health problems later in life. Both the Environmental Protection Agency (EPA) and the Centers for Disease Control (CDC) state that there is no known safe level of lead in a child's blood.

At the same time homes were being built with lead pipes behind the walls, those walls were being covered with lead paint, which would turn out to be another potent source of childhood lead poisoning. More than a dozen countries banned lead paint in the early 1900s, but it wasn't until 1978 that the United States followed suit. Throughout lead paint's history, children of lower socioeconomic status have been at greater risk of poisoning — and are still at greater risk today, nearly 40 years after lead paint was banned.

Living in a Lead World

Lead was one of the first metals used by humans. The element is relatively easy to mine and extract from ore, and it's also highly malleable and resistant to corrosion. This makes lead and its various compounds useful in a variety of applications; the ancient Romans used lead for everything from building aqueducts to sweetening wine.

[Photo: Deteriorating lead paint is a serious health risk for children, who may transfer the dust from hand to mouth or eat the sweet-tasting paint chips.]

In the United States, the increase in lead production and use coincided with the country's industrialization and urbanization in the second half of the 19th century and the early 20th century. "Lead was pulled out of the ground at the very same time we were building large urban areas, putting in huge water systems, and painting homes by the millions," says historian David Rosner, co-director of Columbia University's Center for the History and Ethics of Public Health. By the 1920s, lead was found in everything from makeup to bathtubs to canned goods to gasoline. "A child lives in a lead world," wrote physician John Ruddock in a 1924 article in the Journal of the American Medical Association.

Lead paint became a desirable wall covering in homes. White lead, a powder created by corroding lead with acid, created a bright white paint that was highly opaque and water resistant, and that could be easily tinted other colors. Brightly painted walls were part of a "tremendous reaction against the dark, Victorian-era houses with a lot of wallpaper," says Gerald Markowitz, a historian at John Jay College and the Graduate Center at the City University of New York. And in an era when a flu pandemic had just killed an estimated 675,000 people in the United States, many people perceived painted walls as more hygienic because they could be wiped down; doctors warned against the dust that collected on unpainted walls.

Lead paint manufacturers appealed to the desire for hygiene. "Painted walls are sanitary, cheerful, and bright," stated a 1927 advertisement for Dutch Boy white-lead paint. "Cleanliness depends upon washability and consequent freedom from dirt and other impurities," proclaimed other ads. These "results are best reached by the use of paint made with pure white-lead." Lead paint was advertised as especially appropriate for children's rooms. Parents were advised it would make fingerprints and smudges easy to wipe up.
Dutch Boy, the most popular brand, produced coloring books that depicted children repainting their rooms and furniture with lead paint to conquer "old man gloom" and "make this playroom fairly shine." The rooms might have shone, but they also were poisonous to the teething babies who chewed on lead-painted cribs and windowsills and to the toddlers who put lead-painted toys in their mouths or ate sweet-tasting paint chips that peeled off the walls. Even the dust created by opening a painted window frame could contain enough lead to make a child sick.

Young Minds Damaged

Although lead poisoning among factory workers and painters was well-documented in the late 1800s and early 1900s, physicians in the United States were slower to recognize the prevalence of lead poisoning in children. In part, that's because the symptoms in children can resemble the symptoms of other diseases, and in part because testing was difficult and imprecise; it could take a lab worker two full days to analyze a urine specimen for lead. Laws also restricted testing for lead poisoning to occupational cases.

The advent of an X-ray test around 1930 and wider availability of blood testing after 1940 helped doctors identify more cases of childhood lead poisoning. Between 1925 and 1945, children younger than 5 went from less than 5 percent of all reported lead poisoning deaths to nearly 30 percent. "Physicians have not been looking for lead poisoning with any vigorous search," wrote Dr. Edward Vogt in a 1932 article in the Journal of the American Medical Association. "Now that they are suspecting it, they are finding three or four times as much lead poisoning as they found before."

Doctors and public health officials in Baltimore were at the forefront of efforts to identify childhood lead poisoning. In 1914, Henry Thomas and Kenneth Blackfan of the Johns Hopkins Hospital were the first to publish an account of a child's death from eating lead paint in the United States. (Researchers in Australia had documented childhood lead poisoning from paint as early as 1904.) In 1935, Baltimore's health department started offering free laboratory tests to doctors who suspected their patients had lead poisoning, the first such program in the country. City officials mounted a campaign to inform parents about the hazards of lead paint. One radio broadcast from the mid-1930s warned that in addition to the risk of death, "lead poisoning leaves behind it a trail of eyes dimmed by blindness, legs and arms made useless by paralysis, and minds destroyed even to complete idiocy." Despite the warnings, lead poisoning continued: Between 1931 and 1951, there were 293 recorded cases among Baltimore children, with 83 deaths.

During the 1940s and 1950s, it became clear that the problem was not confined to Baltimore. No national reporting system existed at this time, but there were some limited investigations. In 1952, an internal report of the Lead Industries Association (LIA), a trade group founded in 1928, counted 197 children poisoned by lead, including 40 deaths, in nine cities. A few years later, the New York Times reported on 165 poisonings and 94 deaths in New York, Chicago, Cincinnati, St. Louis, and Baltimore.
These reports identified only the most severe cases of lead poisoning; until the 1960s, children generally weren't diagnosed until their blood lead level exceeded 60 or even 80 micrograms per deciliter (µg/dl), at which point they could be displaying acute symptoms such as convulsions or coma. Doctors also believed that once the acute symptoms were resolved, the danger had passed, assuming the child survived. But in 1943, Randolph Byers, a pediatric neurologist, and Elizabeth Lord, a psychologist, published the first study showing that children who had suffered acute lead poisoning remained intellectually and behaviorally impaired. And over the next few decades, evidence mounted that children could be harmed at levels well below what was generally considered the threshold for poisoning.

Lead Loses its Allure

By the mid-1930s, more than a dozen countries around the world had banned or restricted the use of white-lead interior paint, beginning with France, Belgium, and Austria in 1909. The United States was slower to take action. One factor was the relative weakness of the labor movement in the United States compared to other countries. "The impetus for banning lead in paint came from the labor movement in Europe and Latin America; it was really to protect painters," says Markowitz. "Children were the beneficiaries eventually, but painters were the major force pushing legislation." Another factor might have been the trade group the LIA, which lobbied against lead paint bans and against labeling laws that would have designated lead paint as poisonous.

Still, as concerns about lead paint became more widespread, pigments made from zinc and titanium began to replace lead. In 1951, Baltimore issued the first U.S. ban on the use of lead paint on the interior of any dwelling. Several years later, the LIA, perhaps concerned about the swell of negative publicity and the potential for more stringent regulations, voluntarily worked with the American Standards Association to develop a standard limiting the amount of lead in paint to 1 percent — still enough to be toxic to children. (Historians, public health researchers, and present-day lead industry executives continue to debate how much, and when, the industry knew about the health consequences of lead paint.)

By the 1970s, health authorities had acknowledged that children could be harmed at lower levels of exposure than previously thought. In 1970, the surgeon general recommended that children with blood lead levels above 40 µg/dl should be closely monitored, official recognition that children were at risk even if they weren't acutely symptomatic. The CDC lowered its "blood lead level of concern" to 30 µg/dl in 1975 and to 25 µg/dl in 1985. Six years later, the CDC lowered the level again, to 10 µg/dl.
In 2012, the CDC replaced the "level of concern" with a "reference value" to reflect the belief that there is no known safe level of lead. This value is based on children aged 1 to 5 whose blood lead levels are in the highest 2.5 percent of children — that is, the roughly half a million children with the greatest exposure. Currently, the reference value that triggers continued testing and observation is 5 µg/dl.
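The reference value is, mechanically, a percentile cutoff. A minimal sketch in Python with NumPy, using randomly generated levels as a stand-in for the national survey data (the distribution parameters are invented, not estimates from NHANES):

    import numpy as np

    # Simulated blood lead levels (ug/dl) for young children; the lognormal
    # parameters are invented stand-ins for survey data, not NHANES values.
    rng = np.random.default_rng(0)
    blood_lead = rng.lognormal(mean=0.2, sigma=0.7, size=100_000)

    # The reference value is the 97.5th percentile: the cutoff above which
    # the 2.5 percent of children with the greatest exposure fall.
    reference_value = np.percentile(blood_lead, 97.5)
    print(f"reference value: {reference_value:.1f} ug/dl")

    # Unlike a fixed "level of concern," this cutoff moves down as population
    # exposure falls, which is why it is recomputed from new survey waves.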
The 1970s also saw the first federal legislation on lead paint. The Lead-Based Paint Poisoning Prevention Act, which took effect in 1971, prohibited lead paint in federal housing, on toys, and in cooking utensils. In 1978, all consumer uses of lead paint were effectively banned — although the Department of Housing and Urban Development (HUD) estimated in 2006 that more than 37 million U.S. homes still contain it.

The Basic Problem is Poverty

From the beginning, the poor were especially at risk for lead paint poisoning. "It was always the poorest people living in the most dilapidated housing, where absentee landlords let properties disintegrate, who were the most victimized," says Rosner. The link between poverty and lead paint was strengthened during the post-World War II era, when "white flight" to the suburbs and discriminatory housing practices led to a greater concentration of poor and minority residents in the inner cities. Their homes and apartments tended to be older and poorly maintained, increasing the chance that children were exposed to chipping and peeling paint.

Some lead industry advocates argued that the problem wasn't the paint itself, but rather parents who lacked the knowledge to adequately supervise their children. In a 1957 letter to toxicologist Robert Kehoe, for example, Manfred Bowditch, the LIA's health and safety director, wrote, "Childhood lead poisoning is essentially a problem of slum dwellings and relatively ignorant parents." In another letter, to the former head of the LIA, Bowditch expressed doubt those parents could ever be educated. Kehoe, whose research lab was funded in part by the Ethyl Corporation, a manufacturer of leaded gas additives, argued in a 1960 lecture that poor children living in "unsatisfactory" conditions developed an "appetite" for lead paint that was not found among more affluent children.

Civil rights and community activists used the association with inner cities to pressure the government for increased lead screening and treatment programs, and landlords for improvements to substandard housing. As New York housing activist Paul DuBrul wrote in 1968, "We have already been told by the Health Department that no money can be found for a testing program until the black community begins yelling 'Murder.'"

One group yelling "murder" was the Black Panthers. In publications from the early 1970s, the group railed against the "silent epidemic" of lead paint poisoning; it blamed the housing conditions created by slumlords and the medical profession's inattention to a problem of primarily poor, minority children. To help combat lead poisoning, the Black Panthers added a lead screening program to the free clinics they operated in several cities. They were joined by the Young Lords, a Puerto Rican activist group. In the late 1960s, the group went door to door in East Harlem testing children for lead exposure. When 30 to 40 percent of the children tested positive, the Young Lords held press conferences and staged a sit-in at the New York City Health Department.

In his 2000 book, Brush with Death, historian Christian Warren of Brooklyn College (part of the City University of New York) credited these and other community groups with helping to raise awareness about childhood lead poisoning among doctors, public health officials, and policymakers. "[T]he impetus for change ran from the community to the city and beyond," he wrote.

The CDC began monitoring blood lead levels in the population in 1976, as part of the National Health and Nutrition Examination Survey. The second wave of this survey, conducted between 1976 and 1980, confirmed that black and lower-income children had much higher blood lead levels than white and higher-income children. More than 12 percent of black children between the ages of 6 months and 5 years had blood lead levels above 30 µg/dl, the level of concern at the time, compared with 2 percent of white children. Children from households with an annual salary of less than $6,000 (then the poverty line for a family of four) had an average blood lead level of 20 µg/dl, versus 14.1 µg/dl in children from families with an income greater than $15,000. (Median household income was about $13,000 in 1976.)

Since the 1970s, when lead paint was banned and leaded gasoline began to be phased out, blood lead levels have fallen significantly across all socioeconomic groups. But lower-income children and black children have remained at greater risk. According to the American Healthy Homes Survey, conducted by HUD between 2005 and 2006, 29 percent of families earning less than $30,000 per year had a lead-based paint hazard in their home, versus 18 percent of those with higher incomes.

Because cities and states vary in how they collect and report data on blood lead levels, it's difficult to calculate precisely how lead exposure varies with race and income. But a survey conducted by the CDC between 1999 and 2004 found that the average blood lead level among black children aged 1 to 5 was 2.8 µg/dl, versus 1.7 µg/dl among white children. Black children also were nearly three times more likely to have a blood lead level above 10 µg/dl. Nonwhite children also are less likely to receive follow-up testing after an initial screening test, which might increase the risk of permanent cognitive damage, according to researchers at the University of Michigan.

Weighing the Costs

Lead paint abatement is expensive. In 2000, HUD estimated that it would cost $166 billion over 10 years to inspect and fully abate all the pre-1960 homes at risk of having a lead paint hazard, or about $9,000 per housing unit. Over the years, some cities and the federal government have planned large-scale lead removal programs that were abandoned due to time and cost constraints. At present, HUD offers several grant programs. In 2016, the agency granted nearly $100 million to 38 state and local governments for testing and abatement. The grants covered an estimated 6,000 housing units.

While 37 million U.S. homes contain lead paint, "not all of these houses have children living in them," notes Ludovica Gazze, a postdoctoral scholar at the University of Chicago who has studied the costs and benefits of lead-abatement programs. And not all of these homes pose an immediate hazard, so long as the paint is intact. "So it's probably not efficient or cost-effective to abate all of them."

One solution is to mandate that homes be tested for lead and abated only if children move in, or if a child living in the home is found to have an elevated blood lead level, as 19 states have done. But in a 2017 paper, Gazze found these laws can have unintended consequences. While they do appear to result in lower blood lead levels, it's not necessarily because landlords are abating lead paint; rather, it's because many landlords with older homes discriminate against families with children, leaving them with a smaller selection of housing. Those who don't discriminate pass the costs of abatement on to their tenants in the form of higher rents. Overall, Gazze found that the mandates increased rental costs for families with children by about $400 per year for at least several years, and that lower-income families were disproportionately affected. "Given the distributional consequences," Gazze says, "we should also think about how to focus the mandates to ensure that the costs are not falling on those families that are already disadvantaged."

Whoever bears the costs — landlords, tenants, or taxpayers — "there are potentially large benefits to society from introducing lead reduction regulations," Gazze notes. For example, childhood lead exposure is linked to problems with aggression and impulse control and thus with criminal behavior later in life. Many researchers have identified a strong correlation between the reduction in childhood lead levels that started in the 1970s and the drop in violent crime that began in the mid-1990s. Other research has linked childhood lead exposure to lower test scores, higher medical costs as an adult, and lower lifetime earnings, which leads to lower tax revenue. In another paper, for example, Gazze found that preventing one microgram above 10 µg/dl in a child's blood lead levels increased individual lifetime earnings by $110,000 and tax revenue by more than $16,000 per child. Lower blood lead levels also reduced state expenditures on special education by as much as $111 million per cohort of children.
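Set against HUD's estimate of roughly $9,000 per housing unit for inspection and full abatement, figures like these imply large net benefits whenever an abated unit houses young children. A back-of-envelope Python sketch, in which the number of children per unit and the size of the exposure reduction are illustrative assumptions rather than estimates from Gazze's papers:

    # Back-of-envelope comparison of abatement costs and benefits, using the
    # figures cited in the text. The number of children served per abated
    # unit and the size of the exposure reduction are illustrative guesses.

    COST_PER_UNIT = 9_000            # HUD estimate, inspection plus abatement

    # Gazze: per microgram above 10 ug/dl prevented, per child
    EARNINGS_GAIN_PER_CHILD = 110_000
    TAX_GAIN_PER_CHILD = 16_000

    children_per_unit = 2            # assumption: two children in the unit
    micrograms_prevented = 1         # assumption: abatement prevents 1 ug/dl

    benefit = (children_per_unit * micrograms_prevented
               * (EARNINGS_GAIN_PER_CHILD + TAX_GAIN_PER_CHILD))
    print(f"benefit per abated unit: ${benefit:,}")
    print(f"cost per abated unit:    ${COST_PER_UNIT:,}")
    print(f"net benefit:             ${benefit - COST_PER_UNIT:,}")
    # The calculation collapses if no children live in the unit -- the reason
    # Gazze argues blanket abatement of all 37 million homes is not efficient.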
On Aug. 22, 1913, a 5-year-old boy was admitted to Johns Hopkins Hospital. Five days before he was admitted, he started having neck and face pain, became restless, and vomited repeatedly. He deteriorated rapidly, and "[o]n admission he was comatose," wrote Johns Hopkins doctors Thomas and Blackfan. "His head was retracted, and his arms and legs were extended and spastic… There were recurrent, general convulsions."

A century later, lead poisoning as severe as that experienced by that little boy is rare. "It really was a tremendous public health victory that we got rid of lead in paint and in gasoline," says Markowitz. "But there are still a lot of kids with blood lead levels high enough to cause damage." Whether the benefits of preventing that damage outweigh the costs — and who should pay — is a question policymakers will continue to debate. EF

Readings

Gazze, Ludovica. "The Price and Allocation Effects of Targeted Lead Abatement Mandates." Manuscript, April 16, 2017.

Lin-Fu, Jane S. "Undue Absorption of Lead among Children: A New Look at an Old Problem." New England Journal of Medicine, March 30, 1972, vol. 286, no. 13, pp. 702-710.

Markowitz, Gerald, and David Rosner. Deceit and Denial: The Deadly Politics of Industrial Pollution. Oakland: University of California Press, 2013.

Warren, Christian. Brush with Death: A Social History of Lead Poisoning. Baltimore: Johns Hopkins University Press, 2001.

BOOKREVIEW

Are the Good Times Really Over?

AN EXTRAORDINARY TIME: THE END OF THE POSTWAR BOOM AND THE RETURN OF THE ORDINARY ECONOMY
BY MARC LEVINSON
NEW YORK: BASIC BOOKS, 2016, 326 PAGES

REVIEWED BY AARON STEELMAN

A number of authors have recently made the case that we can expect the U.S. economy to putter along for some time, growing considerably more slowly than during much of the second half of the 20th century. Robert Gordon of Northwestern University has provided perhaps the most rigorous treatment in his The Rise and Fall of American Growth, which traces developments in standards of living since the Civil War.
At the heart of Gordon’s case is that the types of major innovations that have propelled the U.S. economy during times of rapid growth simply are much less common and less likely to materialize in the future. In addition, the United States no longer has a large pool of untapped labor to propel it to new heights. Female labor force participation, for instance, has nearly doubled since 1950 for prime working-age women, standing at roughly 75 percent today. And as the population ages, funding of retirement and other safety net programs will be tested. In short, he paints a pretty glum picture with care and sophistication. Marc Levinson, the former finance and economics editor at the Economist, provides a similar forecast in his An Extraordinary Time but on a scale that is narrower in one way and broader in another. Unlike Gordon, Levinson focuses almost exclusively on the period since the end of World War II, with particular emphasis on the 1970s. Also unlike Gordon, his focus is global, arguing that many of the same trends ­— economic, political, and social — that have prevailed in the United States have “transcended national borders,” hampering growth in other countries. Levinson’s approach can be both entertaining and frustrating. In 15 broadly related chapters, he takes the reader around the world, producing often interesting and readable vignettes laden with useful anecdotes. But he often introduces a theme without developing it fully, before jumping to another, and then returning to it later in the book. This can be jarring and ultimately has the effect of the sum of the book’s parts being greater than the whole. Nevertheless, many of those parts are very good. In Levinson’s account, the key year is 1973. He argues that is when one period — characterized by robust growth widely distributed across the populace — ended and another of tepid growth with gains more concentrated among upper  income people began. In some ways, this is an artificial distinction, as the slowdown in productivity, the decline of the manufacturing sector, and soaring inflation were all gradual processes that cannot be pinpointed so easily, with some starting well before 1973 and some really picking up steam only afterward. But it is notable in the sense that the oil shock did cause significant short-term disruptions and shook the confidence of policymakers and consumers alike. Levinson’s narrative of the events leading up to the oil shock and of its consequences is a high point of the book. So too is his discussion of Japan’s rise from a relatively poor country still hobbled from the war in the late 1940s, to a rich one in the 1980s, to one that has seen anemic growth over the last 20 years. Levinson nicely details the efforts of Japan’s Ministry of International Trade and Industry, better known as MITI, to direct growth, both its seeming successes as well as its failures. The economic changes that came in the 1970s also produced significant political changes, Levinson argues. Slowing economies led people to reconsider some policies that were widely seen to be choking growth, ushering in leaders with more market-oriented rhetoric. This occurred not only in the United Kingdom and the United States, the two most famous examples, but also in countries such as France and Spain, where political change came more slowly and less comprehensively. 
Ultimately, Levinson maintains, these political shifts made little difference, as key long-run economic trends — most importantly, the decline in productivity — have proven largely immune to economic reforms. "Hope that wise, well-considered measures will propel an economy to a higher growth trajectory is eternal, but there are no foolproof recipes," he writes.

What's more, Levinson argues that recent trends are unlikely to change. In particular, there is little reason to expect a significant uptick in productivity that would boost growth. But is there? We recently have seen significant innovations in communications and entertainment that are hard to measure but have certainly improved well-being. Levinson seems to dismiss those too quickly. More generally, as Gordon's colleague at Northwestern, Joel Mokyr, has argued, "There are myriad reasons why the future should bring more technological progress than ever before — perhaps the most important being that technological innovation itself creates questions and problems that need to be fixed through further technological progress. If we rethink how innovation happens, we have every reason to suspect that we ain't seen nothing yet." This may seem Pollyannaish to those who share Levinson's rather bleak outlook, but it's useful to keep in mind as one reads this often engaging and meandering book. EF

DISTRICTDIGEST

Economic Trends Across the Region

Business Dynamics in the United States and the Fifth District

BY R. ANDREW BAUER

The dynamics of firm creation and exit are an important engine of economic growth. Entrepreneurs identify an opportunity, enter the market, and increase competition by offering new goods and services. In the process, they add to the demand for labor, make investments in equipment and software, and contract for services from other businesses. At the same time, some businesses become obsolete either because consumers are no longer interested in their products or services or because their competitors are able to offer a higher-quality product or service or a lower price; in such cases, the firms exit, and the resources they utilized, such as labor, are then freed to be used by more productive firms. Studies have shown a prominent role of business startups in job growth and have found a positive relationship between entry and exit and productivity growth.

Researchers have noted that there has been a slowing in business dynamics in the United States in recent decades. Job creation and job destruction rates have declined since the late 1970s, and net job creation has trended lower as well. Lower business startup activity is one of the factors responsible for this slowdown. The rate at which new firms are created has declined since the late 1970s, and their contribution to employment growth has decreased as well. The Great Recession of 2007-2009 further contributed to this decline; job creation and destruction rates, as well as new business formation, dropped sharply and have remained at levels well below those prior to the recession.

Slowdown in Business Dynamics

While there are noticeable changes during recessions, when new business formation drops and the exit rate of existing firms increases, the general trend over the last four decades is fairly clear: The job creation rate has declined slightly faster than the job destruction rate, resulting in a slowing of net job creation over time. These trends for the United States and the Fifth District are highlighted in the chart below on job creation and destruction rates.

[Chart: Job Creation & Job Destruction Rates. U.S. and Fifth District job creation, job destruction, and net job creation rates as a percent of total employment, 1977-2014. SOURCE: Census Bureau]

The data are from the Census Bureau's Business Dynamics Statistics (BDS) database, which is based on an annual survey of the more than 6 million establishments in the United States. The survey, taken since 1976, captures information on establishment openings and closings; firm startups; job creation and destruction by firm size, age, and industrial sector; and other data related to business dynamics.

When looking at job creation and destruction, a couple of things stand out. First, the overall trend and movements for the United States and the Fifth District are very similar. This is not unexpected. Given the industry composition and diversity of the regional economies, the Fifth District economy is fairly representative of the broader national economy. Second, while job creation, job destruction, and net job creation have all declined since 1977, the job creation rate declined considerably faster than the job destruction rate.
In the late 1970s, the job creation rate averaged 20.9 percent and then declined steadily to 13.4 percent from 2010-2014 — a cumulative decline of 7.5 percentage points. The decline in the job destruction rate was not as pronounced. After averaging 14.8 percent from 1977-1979, the job destruction rate averaged 16.2 percent in the 1980s, 14.8 percent for the next two decades, and 12 percent from 2010-2014 — a much smaller cumulative decline of just 2.8 percentage points from the late 1970s, or 4.1 percentage points from the 1980s. Thus, there has been a decline in the net job creation rate over this period.

Lastly, the severity of the recessions of the early 1980s and of the Great Recession is readily apparent from the sharp decline in job creation and the notable increase in job destruction during those periods. A major difference between the two is that the job destruction rate returned to its pre-recession level following the 1980s recession but not following the Great Recession. Instead, both the job destruction rate and the job creation rate returned to levels below where they were prior to the recession — reflecting the moderate growth and less dynamic economy during the recovery. Since both rates dropped, however, the net job creation rate returned to above 2 percent from 2011 to 2014 (2.2 percent average), close to the average for the 2000s expansion.
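For readers who want the definitions behind these figures, the toy Python sketch below computes creation, destruction, and net rates from hypothetical establishment-level employment changes. The denominator follows the Davis-Haltiwanger-Schuh convention commonly associated with the BDS (the average of prior and current employment); the chart in this article labels the rates simply as a percent of total employment, so treat the exact divisor here as an assumption.

    # Toy job creation and destruction rates from establishment-level data,
    # in the spirit of the BDS. Employment figures are invented.

    establishments = [
        # (employment last year, employment this year)
        (0, 10),    # entrant: all 10 jobs are created
        (50, 60),   # expanding incumbent: 10 jobs created
        (40, 25),   # contracting incumbent: 15 jobs destroyed
        (20, 0),    # exit: all 20 jobs are destroyed
    ]

    created = sum(max(e1 - e0, 0) for e0, e1 in establishments)
    destroyed = sum(max(e0 - e1, 0) for e0, e1 in establishments)

    # Davis-Haltiwanger-Schuh denominator: average of prior and current
    # aggregate employment, so entry and exit are handled symmetrically.
    size = 0.5 * (sum(e0 for e0, _ in establishments)
                  + sum(e1 for _, e1 in establishments))

    print(f"creation rate:     {100 * created / size:.1f}%")
    print(f"destruction rate:  {100 * destroyed / size:.1f}%")
    print(f"net creation rate: {100 * (created - destroyed) / size:.1f}%")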
Slowdown in Startup Activity

Underlying the slowdown in job creation has been a slowdown in startup activity. The major break came during the Great Recession: The number of new firms in the economy each year had been steady at around 500,000 from 1977 through the mid-2000s, but there was then a notable drop during the Great Recession, and entrepreneurial activity has remained subdued since; the number of new firms each year since the recession has averaged roughly 400,000. When compared with a growing economy, the fact that the number of startups was relatively steady over such a long period of time reflected declining entrepreneurial activity.

[Chart: New Firms & Job Creation. U.S. and Fifth District new firms (left axis) and employment (right axis), 1977-2014. SOURCE: Census Bureau]

Startups have declined not only in absolute numbers, but also as a proportion of all firms. The 564,000 startups in 1977 represented 16.5 percent of firms in the economy, whereas the 557,000 new firms in 2006 represented just 10.8 percent of firms. That percentage fell further during the Great Recession to 8.0 percent, where it has remained. (See chart above.)
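Because the startup rate is just a ratio of counts, the article's own figures can be used to back out the implied size of the firm population and to see why flat entry amounts to relative decline. A quick Python check (the implied totals are rounded back-of-envelope numbers, not published BDS counts):

    # Startup rate = new firms / total firms. Using the counts and rates in
    # the text to back out the implied size of the firm population.

    observations = {
        1977: (564_000, 0.165),   # startups, startup share of all firms
        2006: (557_000, 0.108),
    }

    for year, (startups, share) in observations.items():
        implied_total = startups / share
        print(f"{year}: {startups:,} startups / {share:.1%} share "
              f"-> roughly {implied_total:,.0f} firms")

    # With startups roughly flat near 550,000, growth in the total number of
    # firms (about 3.4 million to 5.2 million here) mechanically drives the
    # startup rate down -- the sense in which "steady" entry was a decline.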
Declining startup activity has hurt job growth. In a 2010 National Bureau of Economic Research working paper, John Haltiwanger of the University of Maryland and Ron Jarmin and Javier Miranda of the Census Bureau found that "firm births contribute substantially to both gross and net job creation" and that startups play a "critical role" in U.S. employment growth dynamics. For those startups and younger firms that survive, their growth rate is considerably higher than that of more mature firms. In that paper, they found that business startups accounted for roughly 3 percent of total employment in any year from 1992 to 2005. But that percentage was higher prior to 1992, averaging close to 4 percent, and it averaged just 2 percent from 2006 to 2014.

So what has been the cause of the slowdown in business dynamics and the decline in new firm formation? There has been no definitive accounting for the dynamics of firm entry and exit and the trends observed in the data. In a 2013 National Bureau of Economic Research working paper, Daron Acemoglu of the Massachusetts Institute of Technology, Ufuk Akcigit of the University of Chicago, Nicholas Bloom of Stanford University, and William Kerr of Harvard Business School looked at innovation and productivity growth to explain firm entry and exit. They found that policies that subsidize either the research and development or the continued operation of incumbent firms stifle the formation of new firms. They argued that incumbent firms that are slow to innovate use research and development resources inefficiently. Eliminating subsidies would free up these resources for incumbent firms that are more innovative as well as for new firms.

Similarly, a 2006 article in the Review of Economics and Statistics by Haltiwanger and Census Bureau economists Lucia Foster and C.J. Krizan looked at the restructuring of the retail trade sector in the 1990s and found that much of the restructuring was due to more productive establishments entering the market to displace existing establishments that are less productive. They noted that the "productivity gap between low-productivity exiting single-unit establishments and entering high-productivity establishments from large, national chains plays a disproportionate role in these dynamics."

In a 2004 article in Annals of Regional Science that examined the determinants of new firm formation in the manufacturing sector in Texas from 1970 to 1991, Donald Hicks of the University of Texas at Dallas and Vinod Sutaria, then a doctoral student there, looked at a number of factors to explain firm formation: demographics, labor market conditions, industrial restructuring, availability of local finance, local government spending, and local business dynamics. They found that new firm formation was reduced by rising unemployment rates in a metro region and was boosted by higher average establishment size and availability of capital (as measured by local per capita bank deposits). They also found that population and per capita personal income growth were not factors that influenced new firm formation.

In a 2014 paper, Ian Hathaway of the Brookings Institution and Robert Litan, formerly at Brookings, used the BDS data to look at the variation in startup rates across U.S. metropolitan areas and found two prominent drivers of regional differences: population growth and business consolidation. Contrary to the results of Sutaria and Hicks, Hathaway and Litan found that firm formation tends to be higher in regions with greater population and real per capita income growth. They noted that the regions with the highest firm entry rates in the late 1970s were strongly correlated with population growth in the 1970s, and the opposite was true for regions with lower firm formation rates. They ran several regressions and in one found that the change in population from the late 1970s to the mid-2000s had a large positive effect on startups. When they accounted for region-specific effects, they found that the estimated impact of population change over the prior three years was reduced but still strong and statistically significant. They also found that income per capita is a significant factor, although they estimated that the impact of population change is three times greater than that of income per capita.

Hathaway and Litan also looked at the possible effect of an aging population. Prior research has suggested that individuals age 35 to 44 have the highest propensity to start a new business.
To examine the possible impact of an aging population on startup activity, they included that age group in their regressions and found that, when controlling for regional factors, the share of the population between 35 and 44 does greatly influence firm formation rates — and the impact is greater than that of per capita income growth.

The other significant driver of new firm formation in their results is business consolidation. In previous work, Hathaway and Litan documented an increase in business consolidation across geographies and sectors over the past few decades, and they found that firm formation tends to be higher in regions with less business consolidation. They defined business consolidation as an increase in the ratio of the average firm size to the average establishment size. A ratio of 1.0 would indicate no consolidation, as each firm has one establishment. As the ratio increases, there are more multi-establishment firms. They argue that greater concentration would be associated with higher barriers to entry and thus would reduce firm formation.
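Hathaway and Litan's consolidation measure reduces to a ratio of two averages, which a few lines of Python make explicit; the employment and firm counts below are invented for illustration.

    # Business consolidation, as defined by Hathaway and Litan: the ratio of
    # average firm size to average establishment size. Counts are invented.

    def consolidation_ratio(employment, firms, establishments):
        avg_firm_size = employment / firms
        avg_establishment_size = employment / establishments
        return avg_firm_size / avg_establishment_size  # = establishments / firms

    # An economy of single-establishment firms: no consolidation.
    print(consolidation_ratio(employment=1_000, firms=100, establishments=100))  # 1.0

    # The same employment, but multi-establishment firms dominate.
    print(consolidation_ratio(employment=1_000, firms=40, establishments=100))   # 2.5

    # A rising ratio means more establishments per firm, which Hathaway and
    # Litan associate with higher barriers to entry and fewer startups.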
Slowdown Across Sectors

The long-term slowdown in business dynamism and startup activity has been observed across industries. Each industry sector has experienced a decline in its firm formation rate, although there are some notable differences across industries. (See chart below.)

[Chart: New Firms by Sector. Percent of total firms by decade, 1980s-2010s, for the economy as a whole and for agricultural, forestry, fishing; construction; manufacturing; mining; transportation and public utilities; wholesale trade; retail trade; finance, insurance, real estate (FIRE); and services. Source: Census Bureau]

Comparing the 1980s to 2010-2014, the average decline was 5.4 percentage points, with the goods-producing sectors experiencing the largest declines. The greatest decline was in the construction sector. In the 1980s, the startup rate in the construction sector averaged 14.1 percent, the second-highest rate after agricultural, forestry, and fishing and just slightly above mining. The construction startup rate fell by 9.6 percentage points to an average of 4.6 percent in 2010-2014, the second-lowest rate among all industries. Startup activity in the agricultural, forestry, and fishing sector experienced the second-largest decline, 7.3 percentage points. In contrast, declines in service-oriented industries were less severe although still significant, ranging between 3.9 and 4.5 percentage points. Retail trade and services (a broad category that includes professional workers, research and development, information technology, education and health, and leisure and hospitality) experienced the smallest declines, of 3.9 and 4.0 percentage points. It should be noted, however, that in the case of the finance, insurance, and real estate sector, the 2010-2014 period average masks a strong decline in recent years: The startup rate in this category fell 4 percentage points during this period and as a result declined by a cumulative 7.5 percentage points from 1980 to 2014, the second-largest decline after construction.

In light of the research looking at firm entry and exit, one explanation for the sizeable decline in new entry in construction and agriculture would be the increased role of larger, multi-establishment firms. As argued by Hathaway and Litan, greater business consolidation would inhibit new firm entry, and in both industries larger firms have become more prominent, although there remain a sizeable number of smaller firms in both. Subsidies, which are sizeable in the agriculture sector, would also depress new entry: Subsidies to incumbents encourage the survival and expansion of these firms at the expense of potential new firms with higher rates of innovation and productivity, and the subsidized incumbents utilize labor and funding that otherwise would be available to new firms. The relatively smaller decline in services would perhaps not be unexpected, as increased innovation due to greater adoption of information technology, lower startup costs for smaller-sized firms, and less business consolidation would foster greater firm entry.

Slowdown in the Fifth District

The Fifth District has experienced trends in business dynamics and startup activity similar to those of the nation. (See chart below.) The new firm formation rate for the Fifth District was only 0.4 percentage point lower than that of the nation in the 1980s and 1990s and 0.6 and 0.8 percentage point lower in the 2000s and 2010-2014, respectively. Among Fifth District jurisdictions, North Carolina, Virginia, and South Carolina have had the strongest startup rates, followed by Maryland and then West Virginia and the District of Columbia. The startup rates for North and South Carolina and Virginia have been fairly close since 1980, with the period averages usually within a few tenths of a percentage point of one another. The District of Columbia had historically had the lowest startup rate until 2010-2014, when the West Virginia rate dropped a full 2 percentage points from the 2000s to a low of 5.4 percent.

[Chart: New Firms. Percent of total firms by decade, 1980s-2010s, for the United States, the Fifth District, and DC, MD, NC, SC, VA, and WV. Source: Census Bureau]

Accounting for the differences, as the research literature suggests, is challenging. But the findings of past research, if applied to the Fifth District, may suggest some partial explanations. North and South Carolina have been the two fastest-growing jurisdictions within the Fifth District, while West Virginia and the District of Columbia have been the slowest. Virginia has had relatively strong population growth as well, particularly in the northern part of the state. As discussed by Hathaway and Litan, population growth differentials would explain some of the variation in entry rates. Sutaria and Hicks argue that average firm size is related to new firm formation, as large firms may find it more efficient to outsource some production. The experience of South Carolina is in line with this view; the state has seen an increase in large manufacturing firms, and a sizeable supplier base has been built to service them. Finally, Acemoglu and his co-authors note the negative impact of subsidies and policies that favor incumbent firms, as they create inefficiencies in the allocation of resources for research and development. The federal government has a large presence in the northern half of the district, with a large number of federal institutions and facilities in Maryland, the District of Columbia, and Virginia, and all three receive a large amount of federal contract spending. The extent to which this funding is not being allocated to the most productive entities would affect the availability of resources for new firms looking to enter the market. This could partially explain the lower entry rates in Maryland and the District of Columbia. Additional likely factors are taxes, regulations, and other state policies.

The decline in startup activity and job creation has been fairly uniform across the Fifth District. From the 1980s to 2010-2014, the decrease in startup activity in the Fifth District was 4.9 percentage points (comparing period averages), slightly greater than the 4.4 percentage point drop for the United States. Most Fifth District jurisdictions experienced a decline close to the district average, with South Carolina having the greatest, at 5.2 percentage points, although the declines in Maryland, Virginia, and West Virginia were only slightly smaller. (See chart above.) Startup activity declined the least in the District of Columbia, by 3.5 percentage points. As would be expected, the decline in startup activity was reflected in job creation. The percentage of employment created by new firms in the Fifth District fell from 3.3 percent in the 1980s, just slightly less than the U.S. rate of 3.6 percent, to 1.8 percent in 2010-2014. (See chart below.)

[Chart: Job Creation from Startups. Percent of total employment by decade, 1980s-2010s, for the United States, the Fifth District, and DC, MD, NC, SC, VA, and WV. Source: Census Bureau]
Although there was a moderate upward trend in the absolute number of jobs created by new firms from the 1980s through the mid-2000s (from 245,000 in the 1980s to 275,000 in the 2000s, peaking at 322,000 in 2006), the increase did not match the growth in overall employment, so the job creation rate of startups slowed each decade before dropping after the Great Recession (to an average of 203,000 in 2010-2014).
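The arithmetic behind that divergence between levels and rates is worth making explicit: if total employment grows faster than the number of jobs startups create, the startup job creation rate falls even while the job count rises. A minimal sketch using the period figures cited above (the implied employment bases are back-of-the-envelope calculations, not published values):

# The startup job creation RATE can fall while the LEVEL rises, whenever
# overall employment grows faster than startup job creation.
# Fifth District period averages cited in the text:
startup_jobs = {"1980s": 245_000, "2010-2014": 203_000}
rate_pct = {"1980s": 3.3, "2010-2014": 1.8}  # percent of total employment

for period in ("1980s", "2010-2014"):
    # Back out the employment base implied by the jobs count and the rate.
    implied_total = startup_jobs[period] / (rate_pct[period] / 100)
    print(f"{period}: ~{implied_total / 1e6:.1f} million total jobs implied")

# 1980s: ~7.4 million; 2010-2014: ~11.3 million. The implied employment
# base grew roughly 50 percent while startup job creation fell, so the
# rate dropped even though the level changed comparatively little.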
There was notable variation in the decline in the new firm job creation rate across the Fifth District. From the 1980s to 2010-2014, the number of new jobs created declined by 17 percent, a 1.5 percentage point decline in the new firm job creation rate. West Virginia, the District of Columbia, and Maryland experienced larger decreases of 37, 34, and 27 percent, respectively, while North Carolina had the smallest change, 4.4 percent, or just a 1.1 percentage point decline in the new firm job creation rate.

Conclusion

Over the last several decades, the rate at which jobs are created and destroyed has diminished, and fewer new firms are created each year. This slowing in business dynamics is taking place in the Fifth District and across all industry sectors. Research has highlighted the recent trends and has offered some insights into factors that may be affecting firm entry and exit, entrepreneurship, and business dynamics more broadly, but there has yet to be a definitive accounting of the current trends. The Great Recession accentuated the slowdown, and both new startups and job creation from new firms remain well below pre-recession levels. EF

State Data, Q3:16

                                            DC        MD        NC        SC        VA        WV
Nonfarm Employment (000s)                781.0   2,714.0   4,354.5   2,061.6   3,922.3     744.6
  Q/Q Percent Change                      -0.1       0.4       0.6       0.6       0.3      -0.5
  Y/Y Percent Change                       1.4       1.3       2.5       2.4       1.1      -1.5
Manufacturing Employment (000s)            1.2     103.3     464.5     238.5     232.5      46.6
  Q/Q Percent Change                       0.0      -0.6       0.0       0.3       0.2      -0.5
  Y/Y Percent Change                       9.1      -1.5       0.5       0.9      -0.8      -2.0
Professional/Business Services
  Employment (000s)                      165.3     443.2     608.0     269.5     718.0      65.5
  Q/Q Percent Change                       0.2       0.7       0.6       0.1       0.5      -0.4
  Y/Y Percent Change                       1.9       2.5       2.8       1.8       1.7      -2.2
Government Employment (000s)             238.4     505.0     730.4     365.3     714.8     155.1
  Q/Q Percent Change                      -0.6       0.2       0.7       0.7       0.1      -0.9
  Y/Y Percent Change                       0.1       0.4       1.4       1.3       0.4       0.7
Civilian Labor Force (000s)              392.2   3,171.3   4,876.0   2,297.3   4,242.6     783.0
  Q/Q Percent Change                      -0.1       0.3       0.5       0.0       0.5       0.1
  Y/Y Percent Change                       1.2       0.7       1.9       1.1       0.8      -0.2
Unemployment Rate (%)                      6.0       4.2       5.0       4.6       4.1       6.0
  Q2:16                                    6.1       4.3       5.0       5.1       4.0       6.0
  Q3:15                                    6.7       4.9       5.7       5.6       4.2       6.7
Real Personal Income ($Bil)               46.7     316.3     386.6     177.7     409.7      61.9
  Q/Q Percent Change                       0.8       0.8       0.7       0.8       0.7       0.4
  Y/Y Percent Change                       3.3       2.9       3.0       3.3       2.4      -0.1
Building Permits                         1,609     3,274    16,408     8,617     8,035       702
  Q/Q Percent Change                       0.0     -41.5       8.6      -2.4      -3.0      -9.9
  Y/Y Percent Change                       0.0     -26.8      24.8       2.0      -8.7     -13.0
House Price Index (1980=100)             796.5     450.7     345.5     352.2     436.9     230.8
  Q/Q Percent Change                       0.3       1.1       2.1       1.8       0.8       0.6
  Y/Y Percent Change                       6.0       3.6       6.2       6.4       3.4       2.0

NOTES:
1) FRB-Richmond survey indexes are diffusion indexes representing the percentage of responding firms reporting increase minus the percentage reporting decrease. The manufacturing composite index is a weighted average of the shipments, new orders, and employment indexes.
2) Building permits and house prices are not seasonally adjusted; all other series are seasonally adjusted.
3) Manufacturing employment for DC is not seasonally adjusted.

SOURCES:
Real Personal Income: Bureau of Economic Analysis/Haver Analytics
Unemployment Rate: LAUS Program, Bureau of Labor Statistics, U.S. Department of Labor/Haver Analytics
Employment: CES Survey, Bureau of Labor Statistics, U.S. Department of Labor/Haver Analytics
Building Permits: U.S. Census Bureau/Haver Analytics
House Prices: Federal Housing Finance Agency/Haver Analytics

For more information, contact Michael Stanley at (804) 697-8437 or e-mail michael.stanley@rich.frb.org
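The survey indexes described in note 1 above are simple to compute. A minimal sketch follows; the survey shares and composite weights in it are illustrative placeholders, since the note does not specify the actual weights on shipments, new orders, and employment.

def diffusion_index(pct_increase: float, pct_decrease: float) -> float:
    """Percent of firms reporting increase minus percent reporting decrease."""
    return pct_increase - pct_decrease

# Hypothetical survey responses (percent of responding firms):
shipments = diffusion_index(40.0, 25.0)   # +15
new_orders = diffusion_index(35.0, 30.0)  # +5
employment = diffusion_index(28.0, 22.0)  # +6

# Composite: weighted average of the three component indexes.
# These weights are placeholders, not the Bank's published weights.
weights = {"shipments": 0.33, "new_orders": 0.40, "employment": 0.27}
composite = (weights["shipments"] * shipments
             + weights["new_orders"] * new_orders
             + weights["employment"] * employment)
print(round(composite, 1))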
[Charts: Fifth District and United States, Third Quarter 2005 - Third Quarter 2016: Nonfarm Employment (change from prior year); Unemployment Rate; Real Personal Income (change from prior year); Nonfarm Employment, Major Metro Areas (Charlotte, Baltimore, Washington); Unemployment Rate, Major Metro Areas; Building Permits (change from prior year); House Prices (change from prior year); FRB-Richmond Manufacturing Composite Index; FRB-Richmond Services Revenues Index]

Metropolitan Area Data, Q3:16

                            Washington, DC   Baltimore, MD   Hagerstown-Martinsburg, MD-WV
Nonfarm Employment (000s)          2,641.1         1,398.4         106.7
  Q/Q Percent Change                  -0.1            -0.2          -0.6
  Y/Y Percent Change                   1.8             1.1           0.7
Unemployment Rate (%)                  3.9             4.4           4.6
  Q2:16                                3.7             4.6           4.5
  Q3:15                                4.3             5.2           5.4
Building Permits                     6,323           1,290           238
  Q/Q Percent Change                 -18.3           -38.2          -5.9
  Y/Y Percent Change                   4.0           -36.5           1.7

                              Asheville, NC   Charlotte, NC    Durham, NC
Nonfarm Employment (000s)            187.5         1,147.2         301.8
  Q/Q Percent Change                  -0.3            -0.1          -0.4
  Y/Y Percent Change                   3.5             3.8           3.2
Unemployment Rate (%)                  3.9             4.6           4.1
  Q2:16                                4.0             4.8           4.5
  Q3:15                                4.6             5.4           5.0
Building Permits                       494           6,497         1,086
  Q/Q Percent Change                 -18.8            45.7           3.9
  Y/Y Percent Change                  -3.5            43.8          -7.4

                  Greensboro-High Point, NC     Raleigh, NC   Wilmington, NC
Nonfarm Employment (000s)            355.8           604.0         124.6
  Q/Q Percent Change                  -0.6             0.6           0.0
  Y/Y Percent Change                   0.9             3.7           2.8
Unemployment Rate (%)                  4.8             4.0           4.6
  Q2:16                                5.1             4.3           4.8
  Q3:15                                5.9             4.7           5.5
Building Permits                       758           3,965           320
  Q/Q Percent Change                 -25.5            -5.6         -37.1
  Y/Y Percent Change                   7.2            38.8         -29.4

                          Winston-Salem, NC  Charleston, SC   Columbia, SC
Nonfarm Employment (000s)            259.6           348.1         393.7
  Q/Q Percent Change                  -0.9             0.2           0.1
  Y/Y Percent Change                   1.5             3.7           2.3
Unemployment Rate (%)                  4.5             4.3           4.7
  Q2:16                                4.8             4.7           5.0
  Q3:15                                5.5             4.8           5.2
Building Permits                       372           1,864         1,184
  Q/Q Percent Change                 -41.4            -7.4          -3.9
  Y/Y Percent Change                  46.5            -4.8         -10.8

                              Greenville, SC    Richmond, VA    Roanoke, VA
Nonfarm Employment (000s)            408.5           665.1         161.7
  Q/Q Percent Change                  -0.4            -0.1          -0.7
  Y/Y Percent Change                   1.5             1.7           0.9
Unemployment Rate (%)                  4.6             4.0           3.8
  Q2:16                                4.8             3.8           3.6
  Q3:15                                5.0             4.4           4.2
Building Permits                     1,640           1,265           N/A
  Q/Q Percent Change                  13.3            -9.8           N/A
  Y/Y Percent Change                  -4.4           -13.1           N/A

                  Virginia Beach-Norfolk, VA  Charleston, WV  Huntington, WV
Nonfarm Employment (000s)            778.1           118.4         136.9
  Q/Q Percent Change                   0.3            -1.4          -1.1
  Y/Y Percent Change                   0.5            -1.9          -0.5
Unemployment Rate (%)                  4.5             5.6           6.0
  Q2:16                                4.4             5.7           6.1
  Q3:15                                4.7             6.4           6.2
Building Permits                     2,212              66            30
  Q/Q Percent Change                  16.0             8.2         -33.3
  Y/Y Percent Change                  21.1             6.5         -41.2

NOTE: Nonfarm employment and building permits are not seasonally adjusted. Unemployment rates are seasonally adjusted.

For more information, contact Michael Stanley at (804) 697-8437 or e-mail michael.stanley@rich.frb.org

OPINION

Publicly Provided Data and the Fed
BY KARTIK ATHREYA

Governments around the world routinely provide many official statistics, including, perhaps most prominently, data that summarize the state of the national economy. The United States is no different. Multiple agencies, including the Federal Reserve, are dedicated to collecting, disseminating, and using macroeconomic data.

Macroeconomic data are forms of information. Information, in turn, can be what economists consider a "public" good. A public good has two features. First, it's "nonexcludable," which means its use cannot be effectively restricted: Think of how hard it is to fully "gate" content on the Internet. Second, it's "nonrivalrous," which means one person's use of it doesn't diminish the ability of others to use it: Any number of people can learn or know the same thing, after all. Both features suggest that private markets may under-provide information.

Macroeconomic information, in particular, is likely to be under-produced. It's not necessarily in the interest of any one private firm, for example, to produce and maintain data on what the overall economy is doing, especially when the firm can't easily restrict access to this good. Why incur the cost to collect, organize, and maintain data that, once widely known, will give you little or no edge over your competitors?

The origins of arguably the single most important measure of economic performance, gross domestic product, illustrate the poor private incentive to produce basic macroeconomic data. Before the 1930s, no private firms produced these data, and the U.S. government didn't systematically collect this information, either. The Great Depression prompted policymakers to fill this gap. The economist Simon Kuznets, who worked at the National Bureau of Economic Research, led a group of researchers at the Commerce Department that developed the first-ever consistent set of accounts to measure the nation's total economic output over a given period of time. Economies around the world faced the same problem, and once Kuznets showed the way, his measurement principles became the basis for standards adopted by nearly all of the world's countries over time.
These "national income and product accounts," or NIPA, produced by the Commerce Department's Bureau of Economic Analysis (BEA), now provide the basis for our understanding of the state of the economy. Today, few would dispute the enormous value of these data. Economists, policymakers, financial markets, and the public all routinely rely on NIPA-based information to assess the state of the economy as a whole and make decisions.

What are some other examples of critical data produced by the government? Measures of employment and unemployment, which provide important information about the labor market, are supplied by the Bureau of Labor Statistics (BLS). A more recent BLS dataset, the Job Openings and Labor Turnover Survey (JOLTS), provides information on vacancies, hires, and separations between employers and employees. Such information has been key for researchers and policymakers trying to understand whether labor markets are functioning well and, in turn, whether Fed policy is appropriately set. Thus, as with NIPA, employment and JOLTS data play crucial roles in public policy. But they are also good examples of information that wouldn't necessarily be in the interest of a private entity to produce.

To be sure, there are also many instances today of valuable privately collected information, like payroll data provided by ADP or the Billion Prices Project produced by the Massachusetts Institute of Technology. There are also new analytical tools that can process all sorts of data much more quickly than before as well as produce unique data; a good example is an index of economic uncertainty, developed by economists Scott Baker, Nicholas Bloom, and Steven Davis, that is based on computational text analysis of newspapers. However, because these private data sources are typically narrower and not as comprehensive or long-standing as many government series, they are best seen as a complement to publicly provided data, not a substitute.

It's also important to note that, collectively, these government datasets provide a complex and wide-ranging account of the economy: where it's doing well and where there's pain. While headlines in the news often fixate on one number, these data provide economists at the Fed and elsewhere (including private entities) with a far richer and more accurate understanding of our economy, and they plausibly help us attain better macroeconomic and microeconomic performance. But to be clear, successful monetary and other policies almost certainly require public support for data collection and management because of the public-good nature of macroeconomic data.

As Kuznets famously noted, economists often find surprises as they try to "find order in the universe of their study." With the tools provided by the public-sector entities that produce rich, timely, and accurate data, the Fed and other policymakers are far better equipped to find this order than they ever could be in his day. EF

Kartik Athreya is executive vice president and director of research at the Federal Reserve Bank of Richmond.

NEXT ISSUE

Pricing Vice
Soda taxes are rising in popularity. They're just one example of policymakers using "sin taxes" to promote health while generating revenue. Economists are asking how much these measures actually reduce consumption, whether they lead to better health outcomes, and to what extent they're regressive, borne disproportionately by lower-income consumers.
Credit Unions
Unlike banks, credit unions have been exempt from paying federal corporate income taxes since 1937. This favorable tax treatment has been met with opposition over the decades, as critics argue credit unions have become indistinguishable from banks. What do we know about who benefits from the tax exemption, and what are observers saying about its continued relevance?

Public Broadcasting
The Corporation for Public Broadcasting, which supports local public television and radio stations, was established in 1967 to "constitute a source of alternative communications." At the time, with only three TV networks and commercially dominated radio, the case for government-supported alternatives was straightforward. Have changes in media and technology made federal funding for public broadcasting less necessary?

Federal Reserve
Since the 1960s, the Fed has occasionally intervened in foreign exchange markets. Originally, it did so to help maintain U.S. gold reserves when the dollar was convertible to gold. But the Fed continued to intervene even after the dollar switched to a floating exchange rate in 1971. By the 1980s, a number of economists and Fed officials were questioning the wisdom of these actions, debates that have influenced how the Fed views foreign exchange interventions today.

Economic History
In 1918, a deadly flu virus began spreading across the world. Within two years, the "Spanish flu" pandemic had killed as many as 50 million people worldwide, with lasting social and economic repercussions.

Interview
Jesse Shapiro of Brown University on the role of media in democracy, the drivers of media bias, and whether the internet is driving political polarization.

Visit us online: www.richmondfed.org
• To view each issue's articles and Web-exclusive content
• To view related Web links of additional readings and references
• To subscribe to our magazine
• To request an email alert of our online issue postings

To subscribe or make subscription changes, please email us at research.publications@rich.frb.org or call 800-322-0565.

Annual Report Essay Explores Urban Decline
The featured essay in the Richmond Fed's 2016 Annual Report provides a framework for understanding and responding to urban decline. The article discusses the economic advantages of cities, patterns of development, cycles of development and redevelopment, and guidance for policy responses to urban decline. Previous annual report essays include "A 'New Normal'? The Prospects for Long-Term Growth in the United States" and "Living Wills: A Tool for Curbing 'Too Big to Fail.'"

Annual reports are available on the Bank's website at www.richmondfed.org/publications/research/annual_report/