
FOURTH QUARTER 2014
FEDERAL RESERVE BANK OF RICHMOND

The Sharing Economy
Are new online markets creating economic value or threatening consumer safety?

Money and Politics
Debating Fed Crisis Policy
Interview with Claudia Goldin

VOLUME 18, NUMBER 4, FOURTH QUARTER 2014

COVER STORY
12 The Sharing Economy: Are new online markets creating economic value or threatening consumer safety?

FEATURES
16 Money Talks: Legal changes have opened the door to new kinds of political spending. What does the money buy?
21 Bottoms Up: Craft brewers raise the bar in the American beer industry

DEPARTMENTS
1 President's Message/What's It Like on the FOMC?
2 Upfront/Regional News at a Glance
3 Policy Update/Banning the Box
4 Federal Reserve/Averting Financial Crises
10 Jargon Alert/Aggregate Demand
11 Research Spotlight/Revisiting the 'Paradox of Choice'
24 Interview/Claudia Goldin
29 The Profession/Economics and Ideology
30 Economic History/Car Wars
34 Around the Fed/How Real is the U.S. Manufacturing Revival?
35 Book Review/Trillion Dollar Economists
36 District Digest/Building the Aerospace Cluster in South Carolina
44 Opinion/A New Payments Role for the Fed?

Econ Focus is the economics magazine of the Federal Reserve Bank of Richmond. It covers economic issues affecting the Fifth Federal Reserve District and the nation and is published on a quarterly basis by the Bank's Research Department. The Fifth District consists of the District of Columbia, Maryland, North Carolina, South Carolina, Virginia, and most of West Virginia.

DIRECTOR OF RESEARCH: Kartik Athreya
EDITORIAL ADVISER: Aaron Steelman
EDITOR: Renee Haltom
SENIOR EDITOR: David A. Price
MANAGING EDITOR/DESIGN LEAD: Kathy Constant
STAFF WRITERS: Helen Fessenden, Jessie Romero, Tim Sablik
EDITORIAL ASSOCIATE: Lisa Kenney
CONTRIBUTORS: Jamie Feik, Thomas M. Humphrey, Richard Kaglic, Joseph Mengedoth, Karl Rhodes
DESIGN: Janin/Cliff Design, Inc.

Published quarterly by the Federal Reserve Bank of Richmond, P.O. Box 27622, Richmond, VA 23261
www.richmondfed.org
www.twitter.com/RichFedResearch

Subscriptions and additional copies: Available free of charge through our website at www.richmondfed.org/publications or by calling Research Publications at (800) 322-0565.

Reprints: Text may be reprinted with the disclaimer in italics below. Permission from the editor is required before reprinting photos, charts, and tables. Credit Econ Focus and send the editor a copy of the publication in which the reprinted material appears.

The views expressed in Econ Focus are those of the contributors and not necessarily those of the Federal Reserve Bank of Richmond or the Federal Reserve System.

ISSN 2327-0241 (Print)
ISSN 2327-025x (Online)

PRESIDENT'S MESSAGE

What's It Like on the FOMC?

Eight times per year, the Federal Open Market Committee (FOMC) meets in Washington, D.C., to discuss the most appropriate path for monetary policy. The FOMC is made up of the Board of Governors (which currently has five members), the president of the New York Fed, and four other Reserve Bank presidents on a rotating basis. In 2015, it's my turn again to serve as a voting member of the FOMC — a responsibility that is especially important as the Committee determines when to raise interest rates in response to the outlook for inflation and growth. For two decades after the Fed's founding in 1913, the regional Reserve Banks retained some autonomy in conducting open market operations.
The system worked fairly well until the Great Depression, when Banks began disagreeing about policy and in some cases refused to cooperate with each other. This failure of coordination contributed to the creation in 1933 of the FOMC, whose decisions would be binding on all Reserve Banks. The current structure of the FOMC was established by the Banking Act of 1935, with the goal of enabling the committee to effectively set monetary policy for the nation as a whole while remaining aware of regional conditions. Regardless of voting status, all 12 Reserve Bank presidents participate fully in the deliberations at every FOMC meeting. (In Fed parlance, all the presidents and governors are FOMC participants, while those who are voting in a given year are designated FOMC members.) The meetings typically begin with a presentation by a New York Fed official about developments in financial markets, followed by presentations from senior staff at the Board of Governors about their economic and financial forecasts. Then each president and governor shares his or her economic outlook, which is the result of extensive research and preparation. Following that economic “go-round,” the Board’s director of monetary affairs discusses various policy options, and there is a policy go-round in which all the participants share their views about the most appropriate policy. The final step is the vote. Here in Richmond, we continually follow evolving economic conditions, but preparations begin in earnest about three weeks before the meeting. The Bank’s economists and I identify several topics of special interest, and the economists then prepare research reports. The week before the FOMC meeting, we meet for half a day to discuss their findings, as well as national and regional economic conditions. The next day, a smaller group meets to discuss specific monetary policy alternatives and refine our Bank’s perspective. While this process varies from Bank to Bank, every president has a team of economists and analysts to help him or  her prepare for FOMC. As a result, we bring diverse analytical perspectives to the table in Washington, which makes for a rich and informed discussion. It also means that we learn a great deal from our colleagues. My role at an FOMC meeting is not only to articulate my view on policy but also to listen to my colleagues and engage in give-and-take that improves our understanding of the challenging questions we face. I will never forget the first time I attended an FOMC meeting as president of the Richmond Fed. Consistent with longstanding custom for new presidents, I was greeted at the door to the ornate boardroom by the committee’s assistant secretary and shown to my assigned seat, even though I knew it quite well from having often accompanied my predecessor to meetings. Early in the meeting, I recall looking around the room and becoming acutely aware of the millions of people outside that room who would be affected by our decisions. That immense sense of responsibility remains with me to this day. My colleagues on the FOMC share that sense of responsibility, but that doesn’t mean we always agree about the best course of action. In part, that’s because monetary policymaking in real time is no easy task. The sources of uncertainty are numerous; many data are available only with a lag and are subject to later revisions, making it difficult to assess the current state of the economy. 
It also can be hard to judge whether a given event will have transitory or lasting effects, and thus whether or not that event justifies a change in policy. Add to this uncertainty the fact that committee participants may adopt different analytical frameworks that affect how empirical evidence is interpreted, and it’s clear why our views sometimes diverge. But the strength of the FOMC is that it ensures a wide range of perspectives are brought to bear. And even when we disagree, we respect the integrity of our colleagues’ views — and know that we all take seriously our responsibility to the American people. EF  JEFFREY M. LACKER PRESIDENT FEDERAL RESERVE BANK OF RICHMOND  E C O N F O C U S | F O U RT H Q U A RT E R | 2 0 1 4  1  UPFRONT  Regional News at a Glance  BY H E L E N F E S S E N D E N  MARYLAND — Gov. Larry Hogan’s new administration is facing a $20 billion unfunded liability for state pensions. The state’s pension fund has only about 69 percent of assets needed to cover its pension payments, well below the recommended 80 percent, despite a 2011 reform requiring higher contributions from workers. In April, lawmakers approved a budget plan that shored up the pension fund by $75 million, which was only half of the amount proposed by Hogan.  NORTH CAROLINA — Chiquita Brands International announced in January that it’s shuttering its headquarters in Charlotte — only three years after being promised $22 million, over 10 years, in tax breaks for relocating. Lawmakers are now debating whether to renew and expand the state’s tax incentives. Gov. Pat McCrory and some business groups want more incentives money, but others say the state should take a closer look at whether these sweeteners actually work. Chiquita is paying back the $1.5 million it’s received to date.  SOUTH CAROLINA — Volvo has picked South Carolina as the home for a new $500 million auto manufacturing plant. South Carolina officials met with Volvo for months to discuss tax incentives and other inducements, which are expected to total around $204 million. Volvo is seeking a bigger U.S. presence and has said it wants to boost U.S. sales to 100,000 by 2018, up from 56,000 in 2014. The new plant will be located in Berkeley County and is expected to employ 4,000 workers by 2030.  VIRGINIA — Dominion Virginia Power is set to build the state’s first commercial solar energy plant. If Dominion’s application is approved by the State Corporation Commission, it will construct a $47 million, 20-megawatt facility in Fauquier County that powers about 5,000 homes. According to Dominion, declining costs for solar equipment, as well as a federal investment tax credit, now make such projects financially viable.  WASHINGTON, D.C. — The Washington metro region was an economic star during the recession, but its job growth could face stagnation ahead. According to a new report by George Mason University’s Center for Regional Analysis, the number of federal jobs in the region dropped 5.6 percent from 2010-2013, while procurement outlays dropped 16 percent. The new jobs that were created tended to be in lower-paying sectors like retail. The center projects a further 22.3 percent drop in federal jobs through 2019.  WEST VIRGINIA — The state will rake in an estimated $53 million more than expected for fiscal year 2016, which means less will have to be drawn from reserves to make up for a $195 million budget shortfall. 
In March, the state legislature approved a $4.3 billion budget that took only $22.7 million from reserves, far less than the $68 million initially expected.

POLICY UPDATE

Banning the Box

BY TIM SABLIK

Should employers be allowed to screen candidates based on their criminal history? That is the question being raised by advocates of the "ban-the-box" movement, referring to the box on job applications that candidates must check if they have anything more severe than a traffic violation on their criminal record. Critics of this practice argue that it effectively bars the almost one in three American adults with an arrest or conviction from most gainful employment. Ban-the-box legislation prohibits employers from asking candidates about their criminal history until after they have had a chance to interview them and assess their other qualifications.

Today, over a dozen states and more than 100 localities have implemented some form of ban the box. In April, Virginia did so when Gov. Terry McAuliffe signed an executive order banning the box for most state jobs. Some localities have also extended the limits to private employers in general. Last year, three localities in the Fifth District (Washington, D.C., and Montgomery and Prince George's counties in Maryland) passed ban-the-box legislation affecting private employers over a certain size. There are exceptions for certain employers that are required by law to check an applicant's criminal history. Employers are also still free to rescind job offers after a later background check, but they typically must provide an explanation for doing so. The Equal Employment Opportunity Commission contends that federal law already prohibits employers from barring candidates with criminal records unless their offense is job-related.

Regardless, the evidence suggests that a criminal record does have a large negative effect on employability. In an oft-cited 2009 study, sociologists Devah Pager and Bruce Western of Harvard University and Naomi Sugie of the University of California, Irvine conducted an experiment in which teams of black and white men were matched and applied for low-wage jobs in New York City. Each pair presented equivalent resumes except one of the individuals had a criminal record. The authors found that callback rates from employers were 50 percent lower on average for the individuals with criminal records. Postponing questions about an individual's criminal record seems to reduce such negative stigma. "Employers in our study who first had the chance to talk with the applicant and build more of a rapport before seeing that they had a criminal record were much more likely to give them an opportunity to explain," says Pager.

Criminal records have always been public in the United States, but advocates of banning the box point out that the Internet has made it much easier for employers and other interested parties to access this information. According to a 2012 study by the Society for Human Resource Management, nine out of 10 employers conduct criminal background checks for employment. "It has become extremely easy now to find out about criminal records. A whole industry has arisen around it," says James Jacobs, the director of New York University's Center for Research in Crime and Justice and author of the 2014 book The Eternal Criminal Record. Employers, however, argue that they have legitimate reasons for seeking this information.
"They have both a legal requirement and a moral responsibility to ensure a safe workplace," says Bob Moraca, the vice president of loss prevention for the National Retail Federation.

Advocates of ban-the-box legislation say that it represents a compromise between the interests of job candidates and employers, since most ban-the-box laws still allow employers to consider criminal records later in the hiring process. But Jacobs is skeptical that it will help the majority of candidates with criminal backgrounds. "Many of them are not really work-ready," says Jacobs. "So they're not going to come to the top of a big pool of applicants."

It's also unclear whether the effects of limiting employer access to criminal records would be entirely positive from the perspective of minority applicants. A 2006 study by public policy professors Harry Holzer of Georgetown University, Steven Raphael of the University of California, Berkeley, and Michael Stoll of the University of California, Los Angeles reported that employers that checked criminal backgrounds were actually more likely to hire black males than those that didn't. "Low information is often the basis for what economists call 'statistical discrimination,'" explains Holzer. "If you don't have information about a particular individual, you will judge them by their group characteristics." Holzer says that without access to criminal records, employers were more likely to assume that black men with less education and gaps in their work history had prior criminal convictions, even if that was not the case. When employers could confirm that those individuals did not have criminal records, they were more likely to give them a chance.

Ultimately, says Moraca, criminal history is just one piece of the hiring puzzle. "As an employer, you're going to choose the best candidate," he says. "It's not always about whether or not the candidate has a criminal conviction." EF

CORRECTION: In this column in the Third Quarter 2014 issue of Econ Focus, the article "Cracking Down on Fraud?" incorrectly stated that the Department of Justice's suit claimed that Four Oaks Fincorp and Four Oaks Bank & Trust Company received complaints from its customers; it should have said that the complaints allegedly came from customers of the payday lenders involved in the claim. The article also stated that the institutions allegedly granted access to bank customer accounts; it should have noted that this access was said to have been provided through direct access to the Automated Clearing House payments network.

FEDERAL RESERVE

Averting Financial Crises: Advice from Classical Economists

BY THOMAS M. HUMPHREY

Editor's Note: The story of how central banks handled the global financial crisis in 2007-2008 is now familiar: They bent the traditional rules of lending to provide emergency funds to a wide array of institutions that lacked short-term financing, hoping to keep the institutions alive and minimize recession and job loss. Since then, scholars have continued to debate central bank crisis procedures. The starting point for many is the 19th century classical economists, whose prescriptions would go on to govern some of the world's most successful central banks. Two economists in particular, Henry Thornton and Walter Bagehot, are credited with literally writing the books, in 1802 and 1873 respectively, on crisis management by the Bank of England.
These writings established rules for what is today called the “lender of last resort.” Why the need for special rules? Emergency lending comes with a longer-term risk: that when investors expect to be protected from losses, they’ll overfund risky activity, leading potentially to greater and deeper crises — and still more bailouts. In a crisis, modern policymakers, including those within the Fed in 2007-2008, are left to weigh the degree to which financial turmoil threatens the broader economy today against the likelihood that moral hazard from emergency lending will create more panics in the future. A well-designed last-resort lending mechanism may address both sides of the equation: establishing a clear, reliable system in advance that reassures markets, while making the loans sufficiently unsavory to borrowers that financial markets will want to minimize the risk-taking that might lead to bailouts. For that reason, the prescriptions of the classicals are as relevant as ever. One student of the topic is Thomas Humphrey, a historian of monetary thought who retired in 2005 from the Richmond Fed as a senior economist and research advisor and editor of the Bank’s Economic Quarterly. The following is adapted from talks that Humphrey delivered in 2014 at the annual meeting of the American Economic Association and at James Madison University concerning the classical lessons and whether the Fed followed them during the crisis of 2007-2008.  N  ineteenth century English classical economics left a mixed legacy. Its Ricardian model of production and distribution, though pathbreaking and pertinent at the time, seems quaint, outmoded, dated, even wrong today. Questionable elements include the model’s labor and cost-of-production (rather than marginal utility) theories of value, its Malthusian population mechanism and iron law of wages, its prediction that a capitalist economy will converge to the classical stationary state where all growth stops, its theory of relative income shares in which land’s rental share comes to dominate, and its relative neglect of technological progress at the very time that such progress was transforming British society. Nobody pretends that these obsolete notions describe the operation of developed market economies now. But the classical school got at least one thing right. I’m referring to its explanation of how central banks operate as lenders of last resort (LLR) to resolve financial panics and crises and so prevent them from deteriorating into recessions and depressions. This theory is as relevant and useful today as when it was first formulated. True, it suffered neglect during the Great Moderation, the period from roughly 1985 to 2008, when crises and panics came to be regarded as things of the past. But the recent financial crisis showed how wrong this view was and stimulated renewed interest in the classical theory. Central bankers needing all the help they could get sought to tap into the accumulated wisdom 4  E CO N F O C U S | F O U RT H Q U A RT E R | 2 0 1 4  of the classicals and to use their benchmark LLR model as a source of expert advice. Here’s a prime example of how the history of economic thought, particularly monetary thought, earns its keep. It still has much to teach. Indeed, its lessons continue to inform policymakers to this very day.  Classical Teachings What was classical LLR theory? 
By classical here, I mean the work primarily of two Englishmen, namely Henry Thornton (1760-1815), a prominent banker, member of Parliament, evangelical reformer, anti-slavery activist, and all-time great monetary theorist writing in the early years of the 19th century, and Walter Bagehot (1826-1877), a financial writer and longtime editor of The Economist magazine who wrote in the century’s middle decades. Classical LLR theory referred to the central bank’s duty to provide emergency injections of liquidity to a banking system facing massive cash withdrawals when no other liquidity source is available. The central bank fulfills this duty either through discount window loans to stressed banks or through open market purchases of Treasury bills, bonds, or other assets. Because open market operations were infrequently used in 19th century England, classicals instead advocated discount window loans, albeit at high interest rates so as to discourage too-frequent resort to the loan facility, to creditworthy, cash-strapped borrowers offering good collateral. The goal was to prevent bank runs that cause sudden, sharp  contractions in the money stock, and thus declines in spending and prices. Given downward inflexibility of nominal wages, these declines lead to rising real wages and corresponding falls in profits that induce collapses in output and employment, collapses the classicals fervently sought to avoid. But classicals noted that the LLR has no business bailing out unsound, insolvent banks. Its mission is to stop liquidity crises, not insolvency ones. Nevertheless, if the LLR acts swiftly, aggressively, and with sufficient resolve, it can prevent liquidity crises from deteriorating into insolvency ones. By creating new money upon demand for sound but temporarily illiquid banks, the LLR makes it unnecessary for those banks, in desperate attempts to raise cash, to dump assets at fire-sale prices that might render banks insolvent. Two lessons emerge from classical LLR theory. Lesson number one: Filling the market with liquidity — or, even better, credibly pre-committing to do so in all current and future panics — is sufficient to still panics and end crises. Liquidity provision by itself is enough to do the job. There is no need also to bail out insolvent, poorly managed institutions or to charge below-market subsidy interest rates on LLR loans. Lesson number two: The panic- and run-arresting duties of the LLR are part and parcel of its monetary stabilization responsibilities. The two tasks are not mutually exclusive. They are one and the same. By keeping the money stock — or better still, that stock adjusted for shifts in the demand for it so as to preserve money supply-demand equilibrium — on track in the face of shocks, panics, and crises that otherwise would shrink it, the LLR preserves nominal income and spending at their full capacity, non-inflationary, non-deflationary paths.  IMAGE: © LOOK AND LEARN/BRIDGEMAN IMAGES  Thornton’s Contributions Although Walter Bagehot is the economist most often identified with classical LLR theory, Henry Thornton, writing decades before him, can lay claim to being its true father. What did Thornton do? For starters, he identified the LLR’s distinguishing feature as its open-ended power to create base or high-powered money in the form of its own notes and deposits. The Bank of England possessed this power in spades during the Napoleonic Wars when the government had released it from the obligation of maintaining gold convertibility of its currency. 
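The monetary logic behind these classical lessons, and behind the transmission story Thornton tells below, can be stated compactly with the textbook equation of exchange. The notation is a modern convenience rather than anything Thornton or Bagehot wrote down:

\[
M \times V = P \times Y
\]

Here \(M\) is the broad money stock, \(V\) its velocity of circulation, \(P\) the price level, and \(Y\) real output. A panic raises the demand for money and therefore lowers \(V\); if the lender of last resort wants to keep nominal spending \(P \times Y\) on its pre-crisis path, it must let \(M\) rise roughly in proportion to the fall in \(V\), and then withdraw the extra money once velocity returns to normal.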
Henry Thornton (1760-1815) Thornton also noted that the LLR has a macroeconomic duty to the entire economy, or the “general interest,” as he called it. This duty differentiates the LLR from an individual banker whose duties extend only to his bank’s  owners and customers. Let a panic occur. The individual banker will seek to contract his loans and deposits knowing that such contraction will boost his safety and liquidity without much affecting the whole economy. By contrast, the LLR, because it governs the entire money stock whose shrinkage will have widespread adverse effects, can make no such assumption. Thus, when panic hits, the LLR must act opposite to the banker, expanding its operations at the very time the banker is contracting his. Another thing Thornton did was to identify the LLR’s chief purpose as a monetary rather than a banking or a credit one. To be sure, the LLR acts to forestall bank runs and avert credit crises. But these credit-market actions, although vitally important, are not the end goal of policy in and of themselves. Rather, these actions are the means, albeit the most expedient and efficient means, through which the LLR pursues its ultimate objective of protecting the quantity, hence purchasing power, of the money stock. The crucial task is to prevent sharp and sudden shrinkages of the money stock since hardship follows from these rather than from bank runs and credit crises per se. Why did Thornton see the LLR’s function as a monetary rather than a credit one? Simple. He thought that money does what credit cannot do, namely, serve as the economy’s unit of account and means of exchange. Since money forms the transaction medium of final settlement, it follows that its contraction, rather than credit crunches and collapses, constitute the root cause of lapses in real economic activity and of breakdowns of the payments mechanism. To show how the failure of LLR policy allows panicinduced money-stock contraction to cause falls in output and employment, Thornton presented his theory of the monetary transmission mechanism. He traced a chain of causation running from external shocks (he mentions agricultural crop failures and rumors or alarms of a big bank failure or of an invasion of foreign troops) to a financial panic, thence to a flight-to-safety demand for high-powered money, thence to the broad money stock, spending, and the price level, and finally, via sticky nominal wages (which together with falling prices produce rising real wages and thus falling business profits), to real activity itself. According to Thornton, a panic triggers doubts about the solvency of banks and the safety of their note and deposit liabilities. Anxious holders of these items then run on the banks seeking to convert notes and deposits into cash money of unquestioned soundness, namely gold plus the central bank’s own note and deposit liabilities (considered as good as gold). These aggregates, whether circulating as cash or held in bank reserves, comprise the high-powered monetary base. Unaccommodated increases in the demand for this base in a fractional reserve banking system cause multiple contractions of the broad money stock. Thornton noted that panics cause the demand for base money to be increased in two ways. 
Not only does the public wish to convert bank notes and deposits into cash and currency, but bankers, too, are trying to augment their reserves of high-powered money both to meet cash withdrawals and to allay public suspicion of their financial weakness. The result in a fractional reserve banking system is a sudden, sharp multiple contraction of the broad money stock and equally sharp collapses in spending and prices.

Because nominal wages are downwardly sticky and therefore respond sluggishly to falls in spending and prices, such falls tend to raise real wages, thereby reducing profits and so inducing firms to slacken production and lay off workers. The upshot is that output and employment bear much of the burden of adjustment, and the impact of monetary contraction falls on real activity. To prevent this sequence of events, the LLR must stand ready to accommodate all panic-induced increases in the demand for high-powered money. It can do this by virtue of its open-ended capacity to create base money in the form of its own notes and deposits. By so doing, the LLR maintains the quantity and purchasing power of money and so the level of economic activity on their non-inflationary, non-deflationary full-capacity paths.

Thornton noted a further complicating factor. Not only do panics, if unopposed, produce multiple contractions of the money stock, they also produce falls in its circulation velocity, or rate of turnover of the money stock against total dollar purchases, due to flight-to-safety spikes in the demand for money, considered the safest liquid asset in times of panic. In this case, the LLR cannot be content merely to maintain the size of the money stock. It also must expand that stock to offset the fall in velocity if it wishes to preserve the level of spending and real activity. This means that the money stock must temporarily rise above its long-run non-inflationary path. But it will revert to that path at the end of the panic when velocity returns to its normal level and the LLR extinguishes the emergency issue of money. The lesson is clear: Deviations from the stable-money path are short-lived and minimal if the LLR promptly does its job. There need be no conflict between LLR emergency actions and long-run stable, non-inflationary monetary growth.

These were Thornton's pathbreaking and seminal contributions. After him came Bagehot. Writing in the 1850s, '60s, and '70s, most famously in his 1873 book Lombard Street: A Description of the Money Market, Bagehot wasn't as emphatic as Thornton on the money stock stabilization function of the LLR. This was because by the time Bagehot was writing, Britain had restored the gold convertibility of its currency. The convertibility constraint meant that the Bank of England had less room to maneuver than in Thornton's time when the constraint was suspended. Still, the central bank, even under the gold standard, possessed some wiggle room, especially in the short run. And indeed, in one of his earliest publications, written when he was only 21, Bagehot stated the essence of the LLR's function, namely its quick issue of additional currency to accommodate sudden, sharp increases in the demand for money that threaten to depress spending and the price level and to disrupt the payments mechanism.

Bagehot's Contributions

Building upon Thornton's earlier work (although never once citing him, for which I have no explanation), Bagehot added four propositions of his own. First, the LLR, when quelling panics, should lend to all sound borrowers — nonbanks as well as banks — offering good security, namely assets that would be deemed creditworthy and valuable in ordinary or normal times if not in panics.

Second, the LLR has no duty to bail out unsound borrowers, no matter how big or interconnected. Such bailouts produce moral hazard: They encourage other banks to take excessive risks under the expectation that they too will be rescued if their risks turn sour. To Bagehot, lender-borrower interconnectedness and the purported associated danger of systemic failure constitute no good reason to bail out insolvent banks. Better to let bad banks fail and prevent their failure from spreading to the sound banks of the system. And the best way to do this is to pre-commit to pour liquidity without stint into the market in a crisis. Here it would be remiss not to note that even on the moral hazard issue, Thornton had scooped Bagehot 70 years before the latter published Lombard Street. In a prescient footnote on page 188 of his 1802 book An Enquiry into the Nature and Effects of the Paper Credit of Great Britain, Thornton wrote that it was not up to the central bank "to relieve every distress which the rashness of country banks bring upon themselves." Relief instead should go to protect "the general interests" and not "those who misconduct their business." The latter must be left to suffer "the natural consequences of their fault." Thornton noted that unsound banks "no matter how ruinous their state" would nevertheless plead that rescuing them was necessary to save the general interest.

Bagehot's third point was that the LLR should charge above-market or penalty rates of interest on its accommodation. This is the famous Bagehot Rule: Lend freely but at a high rate. The high rate does several things. It discourages unnecessary resort to the discount window. It encourages would-be borrowers to exhaust all market sources of liquidity and even to develop new sources before applying to the central bank. It discourages overcautious hoarding of scarce cash. It attracts gold from abroad and encourages gold's retention at home, thus protecting Bagehot's cherished gold standard while bolstering the monetary base. A high rate also rations liquidity to its highest-valued uses. It serves as a partial test of borrower soundness since only solvent banks can afford to pay the penalty rate, even though unsound banks facing credit risk premia in excess of the penalty
Such credible pre-commitment dispels uncertainty and promotes full confidence in the LLR’s willingness to act. It generates a pattern of stabilizing expectations that ease the LLR’s task. Confident that the LLR will deliver on its commitment, the public will not run on the banks, perhaps obviating the need for emergency liquidity in the first place. The Thornton-Bagehot precepts served England well. After 1866, the nation suffered no bank runs until 2007. By contrast, in the United States, the Federal Reserve honored the classical doctrine as much in the breach as in the observance, and the nation suffered dearly for it. The Fed disregarded the classical advice altogether in the 1930s and so failed to stop a massive monetary contraction that contributed mightily to the Great Depression. Most recently, however, the Fed seems to have absorbed some, but not all, of the classical wisdom. In the recent financial crisis, the Fed followed the Thornton-Bagehot prescription regarding liquidity provision while departing from other of its precepts.  Classicals on Fed Crisis-Management Policy What would the classicals have thought about the Fed’s handling of the crisis? Certainly they would have applauded the Fed’s filling the market with liquidity. Likewise, they would have approved of the Fed’s expansion of its balance sheet and of the monetary base. These things were precisely what the classical prescription called for — expanding the monetary base to match corresponding increases in the public’s and bankers’ demand for money. At the same time, classicals might have noted that the Fed’s expansion of the monetary base, while sufficient to offset the panic-induced fall in the multiplier relationship between base and bank money in a fractional reserve system, was insufficient to counter falls in velocity caused by the public’s flight to money as the safest liquid asset. The result of this increased money demand (or fall in velocity) was a shortfall of the supply of broad money below the demand for it, leading to a prolonged fall of spending, output, and employment below their pre-crisis paths. [Editor’s Note: For  elaboration on this view and those that follow, see Readings.] Likewise, the classicals would have approved of the Fed’s Bagehot-like actions to lend to a wide variety of borrowers on a wide array of assets. But they would have looked askance at the Fed’s acceptance of opaque, dubious, hardto-value collateral that arguably would have been deemed questionable even in normal times. The same holds for the Fed’s direct purchase of tainted assets. Most important, Thornton and Bagehot would have condemned both the Fed’s bailout of arguably insolvent, too-big-to-fail firms such as American International Group Inc. and Citigroup and its charging of subsidy rather than penalty rates for its assistance. And they would have scolded the Fed for extending its loan deadlines beyond very short-term (week- or at most month-long) intervals, for its failure to pre-commit to ending all future crises, and for not spelling out the conditions and indicators that would trigger its actions in future crises. Thornton, who sharply distinguished between the monetary and credit rationales of LLR policy, would have disagreed with the Fed’s credit-market rationale. To Thornton, the LLR’s purpose was to protect the money stock from contraction and to expand it to offset falls in velocity. 
This was in sharp contrast to the Fed’s stated LLR rationale, which was to free up credit markets, shrink panic-widened yield spreads, and get banks lending again. Thornton would have shunned the Fed’s credit-market rationale even though it achieved much the same result as his monetary one. Finally, classicals might have opposed the Fed’s payment of positive interest on excess reserves. The Fed implemented this measure in 2008 to prevent its credit interventions from resulting in monetary expansion. And it retained the interest-on-excess-reserves measure even when it later shifted to a policy of monetary expansion. Such payments, which boost demand for idle reserves and keep them immobilized in reserve accounts rather than getting them lent out into active circulation in the form of bank deposit money, would be inconsistent with the classicals’ goal of expanding or maintaining the stock of broad money as required to keep economic activity at its pre-panic level. Bankers’ demands for reserves already are extraordinarily elevated during crises. Paying interest on excess reserves only raises those demands further. Despite claims to the contrary, the Fed never acted as an unmitigated classical LLR in the recent financial crisis. Instead, it adhered to parts of the classical prescription while deviating from others. So when you hear the Fed described, often by Fed policymakers themselves, as a classical LLR, be skeptical. EF  Readings Bagehot, Walter. Lombard Street: A Description of the Money Market. Reprint. Homewood, Ill.: Richard D. Irwin, 1873. Humphrey, Thomas M. “Lender of Last Resort: What it is, Whence it Came, and Why the Fed Isn’t it.” Cato Journal, vol. 30, no. 2, Spring/Summer 2010, pp. 333-364.  Humphrey, Thomas M. “Arresting Financial Crisis: The Fed Versus the Classicals.” Levy Institute Working Paper No. 751, February 2013. Thornton, Henry. An Enquiry into the Nature and Effects of the Paper Credit of Great Britain. New York: A. M. Kelley, 1802. E C O N F O C U S | F O U RT H Q U A RT E R | 2 0 1 4  7  Last-Resort Lending for the 21st Century Why the continued interest in Henry Thornton and Walter Bagehot so long after their time? They were two of the first to navigate what today’s central bankers accept as a fundamental trade-off of crisis policy: the need to limit panics today without encouraging greater risk-taking in the future. Their broad principles for striking this balance were to supply ample liquidity in crises but in a way that is sufficiently painful to borrowers — lending only to worthy borrowers at high interest rates and against sound collateral — that they’ll want to take measures to avoid vulnerability in the future. In the middle of a crisis, that can be harder to achieve than one might think. Here are some of the issues that central banks face.  Illiquidity vs. Insolvency Most central bankers would prefer never to bail out insolvent firms. But crises unfold quickly and it can be unclear who is solvent and who is not. So how can central banks distinguish firms experiencing a temporary liquidity shock from those that are fundamentally insolvent? “I would say that it’s very well near impossible to make that distinction,” says Charles Goodhart, an economist at the London School of Economics who has written extensively on lender-of-last-resort policy. 
“Illiquidity is almost always a function of concern about potential insolvency, even if that concern is misguided.” There’s a complicating factor: Are there some cases where insolvent firms should, in fact, be saved — perhaps if their failure would hurt many others? Typically, markets minimize spillover risk by charging premiums to borrowers that are riskier. But economists have modeled scenarios in which firms are not forced to bear the costs of the ways in which their actions would affect others. Such models — many of which describe a far more complex financial system than what existed in 19th century England — suggest the possibility of outcomes where risks become contagious, leading to runs or widespread liquidity crises. The extent to which these characterized the 2007-2008 crisis is still an open question; an alternative view is that a more important component of the crisis was markets adjusting to previously unknown risks emanating from the housing market. Either way, there is a moral hazard problem to contend with. If central banks routinely prevent systemic losses, firms will choose to become too systemically linked, increasing the likelihood of contagion. That means market failures may be better addressed with regulatory measures than with emergency lending. And for the lending that does take place, it provides a strong argument for making it costly for firms to borrow in a crisis so they’ll want to use it as truly a last resort — for example, with penalty rates.  8  E CO N F O C U S | F O U RT H Q U A RT E R | 2 0 1 4  What Constitutes a Penalty Rate? In principle, penalty rates — often discussed in terms of interest rates — come down to whether the loan from the central bank is cheaper than private alternatives in a crisis. If it is, the lending might encourage excessive risk-taking because investors won’t pay the price, so to speak, of financial market turmoil. Thornton and Bagehot advocated a “high” interest rate but didn’t spend much time defining it. Much of Bagehot’s case was based on the need to keep the gold standard functioning, and strict usury laws were in place in Thornton’s time, notes monetary economist David Laidler, professor emeritus at the University of Western Ontario. But many scholars agree at least in principle that a penalty rate is funding which is costlier than a firm could get in normal times but cheaper than the panic-induced crisis rate (since a central bank offering loans above the latter would find no takers). But the right penalty rate can be hard to identify in practice. As noted, the essence of a crisis is often that the true values of assets become uncertain after previously unknown risks come to light. Some research suggests that this problem can be exacerbated by so-called “fire sales” that artificially depress asset prices as firms struggle to raise funding. But erring on the side of high penalty rates would have costs. It would deplete the borrower’s capital further, which might worsen the panic. Another concern is that markets know that only the weakest banks will be desperate enough to pay penalty rates. The classical-era Bank of England dealt with this potential problem by providing loans through institutions known as discount houses that kept the borrowers essentially anonymous. In 2007 and 2008, both the Fed and the Bank of England argued that a “stigma” left their traditional discount window facilities underutilized in the early days of the crisis. 
In the United States, the Fed launched an alternative facility in which firms bid for funds. The winning bid often landed at sub-penalty rates. A final challenge is that penalty rates simply may not provide the amount of funds that policymakers wish to funnel to markets. For example, one of the Fed’s recent crisis programs gave special loans at sub-penalty rates to banks willing to purchase troubled asset-backed commercial paper from money market mutual funds. Fed policymakers argued at the time that charging a penalty rate would not provide the funds necessary to support the economic activity dependent on those markets. The program seemed to help calm markets, but to some observers, this type of lending is simply a handout to certain sectors, not a lender-of-last-resort function. Richmond Fed President Jeffrey Lacker has argued that there was no unmet funding need in some markets that were supported — only prices that investors didn’t want to pay due to the risky environment.  Money vs. Credit At the broadest level, no one disagrees that the fundamental goal of last-resort lending is to prevent financial market problems from causing recession and job loss. But among modern observers, there are two views on how the central bank should go about it: Should the central bank expand the supply of money to meet the panic-induced demand for safe assets? Or should it extend credit directly to firms to stop failures and panics at the source? Laidler describes this “money vs. credit” debate as “a swamp from which few return once they enter it.” In other words, the division between the two has not been entirely clean in practice. The 19th century Bank of England, for example, conducted monetary expansion via lending to firms. Today, the Fed conducts monetary policy largely through open market operations that inject liquidity broadly. More recently, the Fed mixed the money and credit functions with “quantitative easing” that expanded its balance sheet — an act of monetary easing — but by purchasing mortgage-backed securities. Moreover, Goodhart expresses doubt that there is sufficient time in a crisis for a central bank to provide money and for that expansion to spread to illiquid but solvent institutions. “People will be thinking, ‘Who is next in line to fail?’ and run from them. You’ve got to stop contagion very, very quickly.” Once again, this interpretation depends on the view that market failures make it impossible for firms to adequately protect themselves from contagion. Another view turns the complexity of today’s financial markets on its head: Firms have more alternatives to central bank funding than ever before, and will find ways of directing money to sound borrowers if only the perverse incentives provided by the central bank’s backstop would get out of the way. A 1988 article by Marvin Goodfriend, a former research director of the Richmond Fed who is currently at Carnegie Mellon University, and Robert King of Boston University argued for doing away altogether with the Fed’s ability to lend directly to firms. That would leave broad open market operations as its only means of pumping liquidity into the economy. More recently, Goodfriend has argued against a credit role for central banks on the ground that they face an incentive to err on the side of lending perhaps too broadly. That wasn’t the case for the 19th century Bank of England; it was held by private shareholders, so the profit motive created a natural inclination to lend conservatively. 
That may be one reason Bagehot felt the need to encourage liberal lending. Modern central banks, in contrast, lend with public funds. They also face intense political pressure to protect the economy at all costs — whereas central banks in classical times faced no macroeconomic objectives. On balance, modern central banks are naturally likely to overlook the longer-term moral hazard costs and lend too liberally,  according to Goodfriend. The Fed has expanded the scope of its emergency lending since the 1970s, which some observers argue is one reason firms have made themselves so vulnerable to systemic events in the first place. These issues are far from resolved. For better or for worse, central banks largely chose the credit function in 2007 and 2008. Doing so creates significant long-term challenges, but as Laidler puts it, central banks facing crisis have tended “to swallow hard and get on with it.”  Where Do We Go From Here? Without a clearly defined crisis policy in advance, “by history and tradition, the central bank has always leaned toward liquidity provision,” Chairman Bernanke noted to his fellow policymakers in 2009. This leaves regulatory reform to clean up the moral hazard repercussions after the crisis has passed. That is just what Congress attempted to do with the regulatory provisions of the 2010 Dodd-Frank Act. The Act also sought to restrict the Fed’s emergency lending powers. Now the Fed cannot bail out one particular firm; its emergency credit programs must have broad-based eligibility. Dodd-Frank also required the Fed to get more specific about its crisis procedures. According to an August 2014 letter from a bipartisan group of 15 members of Congress to Fed Chair Janet Yellen, “By directing the Board to establish a clear lender-of-last-resort policy, where both policymakers and the marketplace know the rules of the game beforehand, Congress sought to ensure that banks fully internalized both the risks and rewards of their decisions.” The letter argued that the Board’s first attempt at such a policy did not achieve that end. In response, they requested further crisis rules that sound similar to the methods proposed by the classicals to avoid moral hazard. Among their requests: for the Fed to establish a clear timeline for a financial institution’s reliance on emergency lending, with concrete limits on the duration of each facility; preset guidelines for winding down lending facilities to ensure they are truly temporary; a broader definition of “insolvent”; a method for ensuring that lending is intended to help financial markets broadly instead of being designed for one specific institution; and a commitment to lend only at penalty rates. (As this issue went to press in the spring of 2015, legislation had been introduced dealing with some of these concerns.) Plenty of observers have offered broad principles on crisis lending. But no one has definitively figured out how to implement them in practice. To some, that is an argument for central banks erring on the conservative side, lending to as few parties as possible to enhance market discipline. In practice, as Bernanke has said, central banks have tended to err liberally to prevent financial and real losses. The 2007-2008 financial crisis provides the largest modern case study of crisis lending, warts and all, for the pursuit of clearer answers.	 
— Renee Haltom

JARGON ALERT

Aggregate Demand

BY RENEE HALTOM

What determines how much the economy produces in any given period? One way to think about it is through the concept of aggregate demand, along with a partner concept, aggregate supply. An aggregate demand curve displays the quantity of goods and services that are demanded at every possible price level in the economy. The aggregate quantity of goods and services demanded generally is high when prices are low and low when prices are high (the opposite being true for aggregate supply, which slopes upward). Where the two intersect is, in theory, at the current level of gross domestic product (GDP).

This theoretical framework can help economists think through the causes of business cycles. For example, four components of aggregate demand cause the aggregate demand curve to shift outward when they increase: the amounts households want to consume, businesses want to invest, governments want to spend, or foreigners want to purchase (minus the amount we purchase from them) at any given price level. Each component is driven by different factors; consumption, for example, is affected by interest rates, disposable income, and expectations for the future.

Aggregate demand is easily confused with GDP, the broadest and most commonly used measure of economic activity. GDP in the United States is measured regularly by the Bureau of Economic Analysis as the sum of final spending on consumption, investment, government spending, and net exports. Thus, these four measures are both the accounting components of GDP and the causes of a shift in the theoretical concept of aggregate demand.
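In accounting terms, the relationship the BEA uses is the standard national income identity (the symbols here are generic shorthand, not notation from this article):

\[
Y = C + I + G + NX
\]

where \(Y\) is GDP, \(C\) consumption, \(I\) investment, \(G\) government purchases, and \(NX\) net exports, that is, exports minus imports. A shift in aggregate demand at a given price level corresponds to a change in one or more of these four components.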
When the BEA reports that GDP has declined, that's what economists call a recession — which often ignites debates about whether the government should attempt to boost aggregate demand. This idea stems from the work of British economist John Maynard Keynes. In the throes of the Great Depression, he proposed that the government should counteract declines in aggregate demand by stepping in to spend itself. Up to that point, the prevailing view of business cycles held that recessions last about as long as it takes for the price system to reallocate goods and services, a process thought to be reasonably quick. This view focused on the economy's long-run potential as the primary determinant of the level of economic activity. Keynes, in contrast, argued that prices can be quite sticky, forcing output to contract for sustained periods in response to negative shocks to aggregate demand.

By the 1960s, the theory of aggregate demand shortfalls became widely accepted as not just a description of business cycles but as a workable prescription for how policymakers should respond to them. This backfired when attempts to continually boost aggregate demand worked a little too well, resulting in inflation. The lesson was that the economy can't be pushed beyond its sustainable level of supply for long. But many economists continue to argue that policymakers should counteract demand shortfalls in recessions. This is what the 2009 fiscal stimulus law tried to do. And in the aftermath of the Great Recession, Christina Romer, then head of the President's Council of Economic Advisers, noted the presence of factors that Keynes might have agreed would be harmful to aggregate demand: a fall in wealth following the 2007-2008 financial crisis, disruptions of credit, shrinking government spending, and cautious spending from nervous consumers. The remedy, she said, would be "new actions aimed at stimulating aggregate demand" such as federal assistance to state governments, tax incentives for hiring, funding for small businesses, and even consumer incentives to make homes energy efficient.

Critics argue that appeals to aggregate demand shortfalls often are simply an excuse for constituent-pleasing spending that risks distorting the allocation of resources. Moreover, there are circumstances when aggregate demand should fall or grow less quickly, namely, when the economy's productive potential has done the same. It can be hard to identify such effects in real time, which explains the heated debates during and after the Great Recession about whether unemployment was the result of structural or cyclical forces. In the critics' view, it is somewhat pointless to try to disentangle whether a recession stems from aggregate demand or from aggregate supply. Instead, policy should focus on the factors that gum up the economy's adjustment to shocks. For example, recessions tend to make consumers nervous about future job prospects, thus causing them to postpone major purchases or vacations and increase savings, a reaction called a spike in precautionary savings. The reaction may make perfect sense for each individual household while worsening the recession in the aggregate. The fundamental problem — the fact that it is hard for households to insure themselves against the risk of unemployment — could be addressed with enhanced unemployment benefits. Such policies can appeal to both camps; Romer also suggested an expansion of unemployment benefits to help boost spending by those households — and thus aggregate demand. EF

RESEARCH SPOTLIGHT

Revisiting the 'Paradox of Choice'

BY HELEN FESSENDEN

"Paying Attention or Paying Too Much in Medicare Part D." Jonathan D. Ketcham, Claudio Lucarelli, and Christopher A. Powers. American Economic Review, January 2015, vol. 105, no. 1, pp. 204-233.

The "paradox of choice" is the idea that decisionmaking becomes more difficult as one's options multiply, leaving the status quo as the default preference. It has produced a rich literature that spans markets from retirement plans to laundry detergent. Medicare's prescription-drug benefit, known as Part D, might seem to be a particularly good example: It offers a wide array of private drug plans with complex information on coverage and pricing. Some health care analysts have argued that its consumers, who are retirees, may be unable to keep up with detailed changes in plan offerings. Indeed, many experts initially brought up this concern to make the point that drug companies wouldn't have an incentive to compete on price unless beneficiaries were making cost-based switching decisions.

A new article in the American Economic Review, however, finds that "choice overload" has not flummoxed Part D enrollees. Authors Jonathan Ketcham of Arizona State University, Claudio Lucarelli of the Universidad de los Andes in Chile, and Christopher Powers of the Centers for Medicare and Medicaid Services (CMS) analyze CMS data on millions of Part D consumers to see whether expanding choice mattered. They look at the program's first five years, 2006 through 2010, and conclude that more choice actually increased enrollees' likelihood of switching — as long as the additional options were not significantly more expensive than their current plan. Furthermore, as time went on, consumers who stayed in one plan became more sensitive to cost if it became substantially pricier and therefore became more likely to shop for alternatives. And when enrollees did change, they tended to reap savings and reduce their out-of-pocket expenses closer to the level covered by the cheapest ("minimum-cost") plan.
The brought up this concern to make the point that drug compaauthors contend it did — as long as the additional offerings nies wouldn’t have an incentive to compete on price unless stayed within $500 of the minimum-cost option. Every beneficiaries were making cost-based switching decisions. time a new plan was made available during open enrollment, A new article in the American Economic Review, howprovided it stayed within $100 of the minimum-cost plan, ever, finds that “choice overload” has not flummoxed Part the chance of a consumer switching rose by 0.6 percentD enrollees. Authors Jonathan age point; adding expensive plans Ketcham of Arizona State ($500 or more) did not affect “Paying Attention or Paying Too Much in University, Claudio Lucarelli of switching at all because enrollMedicare Part D.” Jonathan D. Ketcham, the Universidad de los Andes in ees tended to ignore options they Claudio Lucarelli, and Christopher A. Chile, and Christopher Powers considered beyond their budget. of the Centers for Medicare and And if enrollees faced an extra Powers. American Economic Review, Medicaid Services (CMS) ana$100 in out-of-pocket costs by January 2015, vol. 105, no. 1, pp. 204-233. lyze CMS data on millions of sticking with the status quo plan, Part D consumers to see whether the chance of their switching rose expanding choice mattered. They look at the program’s by 2.9 percentage points to 4.0 percentage points. Enrollees first five years, 2006 through 2010, and conclude that more tended to become less responsive to cost, however, if certain choice actually increased enrollees’ likelihood of switching factors applied, notably aging or the onset of dementia. — as long as the additional options were not significantly The authors suggest that CMS took effective steps to more expensive than their current plan. Furthermore, as reduce the risk of “asymmetric learning,” in which enrollees time went on, consumers who stayed in one plan became know less than the drug companies do and cannot make more sensitive to cost if it became substantially pricier and informed decisions. For example, CMS offers a “plan finder” therefore became more likely to shop for alternatives. And to compare plans’ coverage and ensure that drug companies when enrollees did change, they tended to reap savings and can’t sow confusion by offering plans that are too similar. reduce their out-of-pocket expenses closer to the level covThis research could fill in part of the bigger puzzle over ered by the cheapest (“minimum-cost”) plan. Part D: It’s a rare example of a subsidized government benThe researchers devise their sample by taking the entirety efit that is much cheaper than expected. According to the of the Part D population who were not eligible for the Congressional Budget Office (CBO), government spending on low-income subsidy (where plan enrollment is automatic) Part D is only about half of initial projections. A July 2014 CBO and randomly selecting one-fifth of that group. They then report noted a broader deceleration of national drug spending, calculate average out-of-pocket costs, how those costs comfrom 13 percent annual growth before 2003 to 2 percent by pared to the cheapest available plan, the number of plans 2007-2010, when many brand drugs lost their patent prooffered each year, and how those plans stacked up to each tection and generics boomed. The CBO report did not other by cost category. 
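The study's headline magnitudes lend themselves to a rough back-of-the-envelope calculation. The sketch below is not the authors' regression model; it simply strings together the reported baseline switching rate and the two marginal effects as a linear approximation, applied to a hypothetical enrollee, to show roughly how the probabilities scale.

```python
# Back-of-the-envelope illustration using magnitudes reported in the article.
# Not the authors' model; a simple linear approximation with a hypothetical enrollee.

BASE_RATE = 0.11          # roughly 11 percent of enrollees switched plans in a typical year
CHEAP_PLAN_EFFECT = 0.006 # +0.6 percentage point per new plan within $100 of the minimum-cost plan
COST_EFFECT_LOW = 0.029   # +2.9 to +4.0 percentage points per extra $100 in out-of-pocket costs
COST_EFFECT_HIGH = 0.040  # from sticking with the status quo plan

def switching_probability(new_cheap_plans, extra_oop_hundreds, cost_effect):
    """Approximate chance an enrollee switches plans in a given year."""
    prob = BASE_RATE + CHEAP_PLAN_EFFECT * new_cheap_plans + cost_effect * extra_oop_hundreds
    return min(prob, 1.0)

# Hypothetical enrollee: two new low-cost plans appear, and staying put costs an extra $200.
low = switching_probability(new_cheap_plans=2, extra_oop_hundreds=2, cost_effect=COST_EFFECT_LOW)
high = switching_probability(new_cheap_plans=2, extra_oop_hundreds=2, cost_effect=COST_EFFECT_HIGH)
print(f"Roughly {low:.0%} to {high:.0%} chance of switching")  # Roughly 18% to 20%
```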
The Sharing Economy

Are new online markets creating economic value or threatening consumer safety?

BY TIM SABLIK

The first time she rented out her bedroom to strangers on the Internet, Shela Dean admits, it was "a little weird." After she retired from practicing law at the end of 2013, she and her husband Dale realized they had more space than they needed in their four-bedroom home in Richmond, Va. They decided to move into their guest bedroom and put the master suite up for rent on Airbnb, a website that allows users to book nights in other people's homes, much like a hotel. "I'm sort of an old hippie from the San Francisco Bay Area and I liked the idea of sharing your home," Dean says. "Plus, it would give us an opportunity to meet new and interesting people." Still, she wasn't completely at ease as they awaited their first guests. "I told my husband, either they're serial killers or they're lovely people," she says. Fortunately, they were lovely.

As members of Airbnb, the Deans are participants in a growing phenomenon that has been called the "sharing economy." A common thread that unites Airbnb and a number of similar businesses is that they create online platforms where individuals can share their possessions (such as a car or home) or market their skills. While some of these services allow participants to make a profit, others focus on free sharing. For example, one can earn rent from travelers through Airbnb or advertise free sofa space on Couchsurfing for guests needing minimal accommodations. 1000 Tools lets owners of seldom-used tools like power drills or hacksaws rent them to someone looking to do a quick home improvement project; Freecycle, on the other hand, lets users give away those same items for free. Services like Lyft and Uber allow car owners to turn their vehicles into taxis and charge fares to shuttle travelers around; Ridejoy matches drivers with passengers traveling in the same direction, leaving them to work out the details of any compensation.

New sites are springing up seemingly every day, and some are enjoying meteoric success. Uber, which launched in San Francisco in 2009, today operates in over 200 cities and was recently valued at more than $41 billion, making it one of the most lucrative tech startups in history. This suggests that investors see potential for these companies to generate huge economic benefits, but where will those benefits come from? Supporters say the sharing economy is already increasing consumer welfare by opening up markets and providing more options to consumers.
But detractors argue that many of these companies have ignored laws designed to protect consumers, giving them an unfair advantage over traditional services and making them a public safety disaster waiting to happen.  Market Power As a group, economists have tended to view the sharing economy favorably. In a September 2014 poll by the University of Chicago’s IGM Forum, a diverse panel of 40 economists unanimously agreed that allowing new car services like Uber and Lyft to compete with traditional taxis would raise consumer welfare. They have good reason for being optimistic. Economic theory states that increasing the supply of goods or services in a market improves welfare by enabling more  gains from trade, particularly when the increased supply comes from the use of previously idle resources. Evidence suggests that sharing economy firms have greatly increased supply in sectors like transportation and lodging. The Bureau of Labor Statistics reports that there were 233,000 taxi drivers and chauffeurs in the United States as of 2012, but new services are substantially adding to that number. According to a recent study by Uber’s head of policy research Jonathan Hall and Princeton University economist Alan Krueger, the company had more than 160,000 active U.S. drivers in 2014. That alone nearly doubles the supply of short-term transportation, not counting Uber’s competitors like Lyft and Sidecar. Similarly for the hotel industry, Airbnb boasts over a million properties in nearly 200 countries, surpassing the capacity of major hoteliers like Hilton Worldwide, which had 215,000 rooms in 74 countries in 2014. Initial research suggests that consumers are benefiting from the wider range of options. In a March working paper, Samuel Fraiberger and Arun Sundararajan of New York University modeled the economic effect of ride-sharing services using data from Getaround, a company that allows individuals to rent cars from other users. They estimated that such services lower used vehicle prices and improve consumer welfare by allowing individuals (particularly those with below-median income) to rent transportation instead of owning it. For hotels, Georgios Zervas, Davide Proserpio, and John Byers of Boston University reported in a February working paper that an increase in Airbnb listings in Texas had a similar effect on hotel room revenue as an increase in the supply of hotel rooms, suggesting that travellers viewed Airbnb as an “alternative for certain traditional types of overnight accommodation.” Another benefit of the sharing economy may be the flexibility of supply. “The hotel business is a very efficient way to have short-term housing for a stable number of people, but it’s not so great for variable demand,” says Jonathan Levin, a professor of economics at Stanford University who studies Internet markets. “Either you’ve got a lot of empty rooms, or you’ve got super expensive rooms and a lot of people who can’t find a place to stay.” In contrast, firms like Airbnb allow for a more fluid supply of short-term accommodations. During events like the Super Bowl that draw many tourists, more property owners may choose to rent out space to take advantage of the increased demand and higher prices. But during lulls, those properties remain occupied by their owners rather than sitting idle. In addition to expanding supply for existing markets, the sharing economy is also creating entirely new markets for goods and services. 
While it is theoretically possible for markets to exist for anything, transactions aren’t free. It takes time and effort for buyers to find the best price, to locate sellers, to ascertain the true quality of the good being sold, and to make sure a seller will follow through on the commitment once the transaction is complete. Economists refer to these as “transaction costs.” While pre-Internet institutions like classified ads and dedicated intermediaries  such as real estate agents helped reduce the costs of many transactions, new technology has greatly expanded the range of viable exchanges. “Before, if you wanted to borrow someone’s hacksaw or couch, you’d first have to determine who in your area has those things available for rent,” says Matthew Mitchell, a senior research fellow at George Mason University’s Mercatus Center. “The beauty of these websites is that they dramatically lower transaction costs and allow people to interact and exchange in new ways.” This creates more opportunities for entrepreneurs as well as consumers. Many sharing economy participants, like the Deans, see these platforms as a way to earn some extra spending money in their spare time. According to Hall and Krueger’s study of Uber drivers, more than half drove 15 hours or less each week. But for some, the sharing economy offers an alternative to traditional full-time work. Nearly 20 percent of drivers in Hall and Krueger’s study drove 35 hours or more each week, and on average they made about $19 an hour — $6 more than traditional taxi drivers and chauffeurs. The authors note that Uber drivers must pay for expenses like gas and car maintenance that some taxi companies may cover, but many professional drivers still view the new services as viable alternatives to traditional options. The San Francisco Cab Drivers Association reported in 2014 that nearly a third of the city’s taxi drivers had switched to driving for services like Uber, Lyft, or Sidecar. Economic benefits from improved selection and greater market efficiency are only some of the potential gains from the sharing economy. Many supporters have touted the environmental benefits of reducing consumption by using underutilized resources more efficiently. While it is still too early to tell what the final environmental impact will be, one study of vehicle-sharing services found that about a quarter of users in North America sold their vehicles after joining and their carbon dioxide emissions from transportation fell by as much as 56 percent due to the reduction in vehicle ownership and vehicle miles traveled. Critics, however, contend that many of these benefits come at a huge risk. They say that companies like Airbnb, Uber, and others have enjoyed success largely by ignoring laws designed to protect consumers — laws that their traditional competitors must still adhere to.  Whom Do You Trust? Many, if not all, of the markets that the sharing economy touches are regulated in some fashion. Zoning laws partition cities into commercial and residential areas; hotels are allowed in some areas and not in others. Professional drivers carry special licenses requiring additional training and more comprehensive background checks than personal driver’s licenses. Restaurants must comply with health codes that don’t apply to personal kitchens. A common goal of regulations is to prevent harm to consumers by providing them with information and certifying goods and services as trustworthy. 
Establishing trust is particularly important when markets are prone to what economists call "asymmetric information" — meaning one party in a transaction, often the seller, has more information about the quality of the good or service in question than the other party. If these asymmetries are severe and there is no way for buyers to learn the true quality of the good or service, market efficiency suffers — even, or especially, when the numbers of buyers and sellers might seem plentiful enough to eliminate any monopoly power. This was the insight of Nobel Prize winner and University of California, Berkeley economist George Akerlof. In a famous 1970 paper, Akerlof looked at the market for used cars and reasoned that each car could either be of good quality or be a "lemon." When buyers don't know whether a given car is a lemon, good and bad cars will sell for the same price. This price will be lower for sellers of good cars than they would get in a market with full information, and this will tend to drive good cars out of the market, leaving more lemons.
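A small numerical sketch can make the unraveling mechanism concrete. The figures below are invented for illustration, not taken from Akerlof's paper, but they follow his logic: when buyers can only offer the average price, owners of good cars drop out, which drags the average down further.

```python
# Illustrative adverse-selection sketch with invented numbers (not from Akerlof's 1970 paper).
# Sellers know their own car's value; buyers only know the overall mix, so they offer the average.

cars = [3000] * 50 + [8000] * 50   # 50 lemons worth $3,000 each and 50 good cars worth $8,000 each

def market_round(values):
    """Buyers offer the average value of cars still for sale; owners whose cars are
    worth more than the offer withdraw them. Returns the offer and the cars that remain."""
    offer = sum(values) / len(values)
    remaining = [v for v in values if v <= offer]
    return offer, remaining

offer1, cars = market_round(cars)
print(offer1, len(cars))   # 5500.0 50 -> good cars leave once buyers offer only the blended price
offer2, cars = market_round(cars)
print(offer2, len(cars))   # 3000.0 50 -> only lemons are left, and the price falls to match
```

In a richer version, sellers would accept somewhat less than their cars' full value, so some trade survives; the direction of the spiral, with quality leaving the market first, is the force described above.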
Government regulations are one way to counteract such asymmetric information. For example, taxi drivers typically must display licenses in their car to signal they have undergone proper training to operate a commercial vehicle. Hoteliers are also required to follow state safety regulations, so guests can assume they are reasonably well protected when renting accommodations.

Critics argue that sharing economy firms have willfully ignored regulations like these to gain an unfair advantage against traditional businesses, and they say such actions put consumers at risk. In October 2014, New York Attorney General Eric Schneiderman issued a report stating that roughly three-quarters of Airbnb listings in New York City were illegal because they broke zoning laws and other rules related to safety such as maximum occupancy limits. Legislators in the state have cited complaints from constituents in residential apartment buildings that have seen increased commercial traffic thanks to sites like Airbnb.

Uber has also been in the news for safety issues. In December, a woman in New Delhi, India, reported being raped by an Uber driver. Similar incidents have been reported in other cities, including Chicago and Boston. The company has been accused of failing to perform sufficient background checks on its drivers, and several countries, including India, have banned the service.

But it is not clear that top-down regulations perform better than markets at establishing trust and policing bad behavior. For one thing, economists note that regulations often have hidden costs. Licensing requirements can help ensure minimum quality, but they can also be used to reduce competition by making it harder for new firms to enter the marketplace. (See "May I See Your License, Please?" Region Focus, Summer 2003.) For example, the cost of a taxicab medallion in New York surpassed $1 million in 2011 — creating a substantial barrier for new entrants that might provide better service.

Firms have their own incentives to establish trustworthiness and quality in order to maintain and expand their market share. This can lead to novel market solutions designed to solve Akerlof's "lemons problem." For example, in the 1990s, it was not obvious that online retailers like eBay and Amazon would succeed. After all, they faced the challenge of courting customers who couldn't inspect their products before they bought them and had no guarantee of receiving a good in the mail after they ordered it. Those initial online firms developed rating and review systems to allow market participants to provide measures of quality. Today, sharing economy businesses rely on the same underlying framework, and technological developments in the last decade have improved the reach and effectiveness of these systems. Widespread adoption of Internet-enabled smartphones gives consumers instant access to prices and reviews.

[Photo (Uber): Rideshare services like Uber leverage smartphones equipped with GPS to link passengers with nearby drivers. The Uber app allows users to see how previous passengers have rated drivers, and drivers who fall below a certain threshold can be removed from the service.]

"I think the rating systems definitely help," says Katie Frantes. As a representative to colleges for International Studies Abroad, she travels frequently and prefers to use Airbnb rather than a hotel for longer trips. "We're used to reviewing hotels and restaurants, and I feel like this is the same. It's just as safe as a hotel, if not more so."

The spread of online social networks like Facebook has also helped build trust by making Internet commerce less anonymous. Indeed, many sharing economy businesses allow users to verify their identities by logging in through social media accounts. Economists have long known that social networks reduce transaction costs for local physical markets, and initial studies of online social networks suggest they perform a similar function on a wider scale. Researchers at the University of Maryland found that online social networks mitigate information asymmetries in online lending markets, improving transactions between borrowers and lenders. For Frantes and many others, the increased opportunities for social connections are a large part of the appeal of the sharing economy. "What's great about Airbnb is you get to meet locals and socialize," she says. "It's not as lonely as a hotel."

Levin says regulators should take note of such consumer sentiments. "Some of the value of these marketplaces comes from the fact that what they are replacing was not necessarily optimized to promote consumer welfare," he says. "And that should cause you to rethink how many regulations we actually need. How much can markets take care of ensuring the right level of quality on their own?"

Looking Forward

Despite their general enthusiasm, most supporters of the sharing economy don't advocate that it should be unaccountable. Instead, Mitchell urges regulators to allow firms to experiment and seek solutions to problems after they arise rather than apply rules upfront. "To a lot of people, that doesn't sound very appealing. We have to wait until someone gets hurt before we solve the problem?" Mitchell says. "But the benefit of that is it allows for a lot more experimentation. You're foreclosing on a whole lot of opportunities for entrepreneurship, including potential safety-enhancing opportunities, if you settle down too early and say this is exactly the model for what this industry should look like."

If the sharing economy is here to stay, though, it will undoubtedly require some changes to laws and regulations for businesses. In some cases, cities are already working to incorporate the new firms into the existing framework. Portland, Ore., has partnered with Airbnb to promote the service through its tourism bureau.
The city may stand to gain from the deal. According to Airbnb's own studies, its guests tend to stay longer and spend more than typical tourists. For its part, Airbnb agreed to work with the city to ensure hosts meet safety requirements. It also agreed to collect and remit lodging taxes to Portland on behalf of its hosts.

Products like insurance, which have historically been separated into personal and commercial categories, may also need to adapt. The sharing economy blurs the line between personal and commercial use, and if it continues to grow, there may be increased demand for mixed-use insurance products. Some firms in the sharing economy have already begun to address this. Airbnb pledges to reimburse hosts for up to $1 million in property damages, and Uber has teamed up with San Francisco-based Metromile to offer per-mile commercial insurance for its drivers.

As platforms, sharing economy companies may also require regulators to exercise greater vigilance against monopoly power. Jean Tirole, chairman of the Toulouse School of Economics, won the 2014 Nobel Prize in economics in part for his work on the regulation of platform markets. Platforms have an incentive to become monopolies because they gain more value the more users they have. While Tirole noted that this is not inherently bad, regulators need to be wary of firms that use their power to block more dynamic upstarts from challenging them.

This is why Mitchell says consumers, economists, and regulators should be optimistic but still remain vigilant. "I am optimistic about the technology," he says, "but cautious about any particular company, because any company has an incentive to eventually capture its own regulators." EF

Sharing Economy Scope
Just a few examples of the types of services in the sharing economy

Goods
• Swapdom – exchange goods through a barter network
• 1000 Tools – rent tools from individuals

Services
• TaskRabbit – hire individuals for chores or errands
• oDesk – post projects for professional freelancers

Food
• Feastly – book a seat for a meal at a chef's home
• LeftoverSwap – donate leftover food to neighbors

Transportation
• Lyft – use your car to shuttle passengers around town
• Getaround – rent idle cars from owners

Space
• HomeAway – list and book vacation homes
• PeerSpace – rent short-term workspace

Readings

Akerlof, George A. "The Market for 'Lemons': Quality Uncertainty and the Market Mechanism." Quarterly Journal of Economics, August 1970, vol. 84, no. 3, pp. 488-500.

Fraiberger, Samuel, and Arun Sundararajan. "Peer-to-Peer Rental Markets in the Sharing Economy." NYU Stern School of Business Research Paper, March 6, 2015.

Hall, Jonathan, and Alan Krueger. "An Analysis of the Labor Market for Uber's Driver-Partners in the United States." Report from Uber Technologies, Jan. 22, 2015.

Koopman, Christopher, Matthew Mitchell, and Adam Thierer. "The Sharing Economy and Consumer Protection Regulation: The Case for Policy Change." Mercatus Working Paper, December 2014.

Levin, Jonathan. "The Economics of Internet Markets." In Daron Acemoglu, Manuel Arellano, and Eddie Dekel (eds.), Advances in Economics and Econometrics, Tenth World Congress, vol. 1, Cambridge: Cambridge University Press, May 2013.

Zervas, Georgios, Davide Proserpio, and John W. Byers. "The Rise of the Sharing Economy: Estimating the Impact of Airbnb on the Hotel Industry." Boston University School of Management Research Paper No. 2013-16, Feb. 11, 2015.
Money Talks

Legal changes have opened the door to new kinds of political spending. What does the money buy?

BY JESSIE ROMERO

"There are two things that are important in politics. The first is money and I can't remember what the second one is." So said Mark Hanna, a wealthy Ohio businessman, who became famous (or infamous) as William McKinley's campaign manager in 1896. Hanna — dubbed "Dollar Mark" by the press — set up a fundraising operation of unprecedented scale, going so far as to demand that banks and businesses pledge a percentage of their profits to McKinley's campaign.

More than a century later, Dollar Mark's words seem truer than ever. In October 2014, the North Carolina contest between Democratic incumbent Kay Hagan and the Republican challenger Thom Tillis became the most expensive Senate race in history and the first to cross the $100 million threshold; the eventual total was more than $120 million, including the primaries. (The previous record holder in real terms was the 2000 race in New York between Rick Lazio and Hillary Clinton, which cost $70.4 million, or $96.8 million in 2014 dollars.) North Carolina wasn't the only pricey Senate race last year; the Colorado contest eventually crossed the $100 million line as well, and races in Iowa and Kentucky each cost around $90 million.

These races are part of a trend toward more spending in general. In 2014, total spending on congressional races, including by the candidates, the parties, and outside interest groups, was nearly $3.8 billion, compared to about $2.8 billion in 2006, another midterm election year (see chart). The last two presidential races have cost more than $2.6 billion each, compared to $1.4 billion in 2000 and $1.9 billion in 2004.

[Chart: Total Spending on Elections, 1998-2014, in billions of dollars, for presidential and congressional races. Source: Center for Responsive Politics, http://www.OpenSecrets.org]

Many people attribute the rise in spending to a pair of 2010 court decisions, Citizens United v. FEC and SpeechNow.org v. FEC, which lifted some restrictions on spending by corporations and unions and enabled the formation of the so-called "Super PAC." But the relationship between those decisions and the current level and distribution of spending is far from certain. Even less certain is how much the money makes a difference.

Scandal, Reform, Repeat

In 1902, shortly after he became president following McKinley's assassination, Theodore Roosevelt directed his Justice Department to bring an antitrust suit against J.P. Morgan's Northern Securities Company. It was the first of many such suits that would earn Roosevelt a reputation as a crusader against big business, so it caused quite a scandal when it was revealed after his re-election in 1904 that three-quarters of his campaign funds came from railroads and oil companies — not to mention a secret $150,000 donation from J.P. Morgan himself.

Following the scandal, Roosevelt called for a ban on all corporate contributions to campaigns in his 1905 State of the Union address. Congress responded with the Tillman Act, which prohibited corporations and national banks from contributing to federal candidates. The Tillman Act became law in 1907, but, lacking any enforcement mechanism, it did little to actually curb contributions.
Over the next few decades, Congress passed several bills that increased disclosure requirements and barred unions from contributing to campaigns. These laws contained numerous loopholes, however, and like the Tillman Act, they were largely ignored anyway. The extent to which they were ignored became obvious in 1972, after Congress passed the Federal Election Campaign Act (FECA) in 1971. In 1968, congressional candidates reported spending $8.5 million. In 1972, under FECA’s more stringent requirements, spending shot up to $88.9 million.  Also in 1972, five men were arrested trying to wiretap the Democratic National Committee’s offices at the Watergate complex. Over the next two years, the public learned that the Committee for the Re-election of the President was responsible for the break-in, as well as for a massive program of spying on Democratic candidates and trying to sabotage their campaigns. In response, a Senate committee recommended a variety of reforms to campaign regulations and contributions. Many of those recommendations were enacted as amendments to FECA in 1974, including stricter limits on contributions and spending and more reporting by election committees. The amendments also created the Federal Election Commission (FEC) to oversee compliance. (The spending limits were struck down in the 1976 case Buckley v.Valeo, in which the U.S. Supreme Court ruled that expenditures were a form of free speech, although limits on contributions to candidates and certain political groups were upheld.) Twenty years later, scandal was once again the impetus for campaign finance reform. After the FECA amendments, donors circumvented the contribution limits by making donations to the political parties rather than to the candidates themselves. These “soft money” contributions were not subject to any limits on the amount or source of the donations as long as they were used for “party-building” activities, such as voter registration drives. But both parties took an expansive view of what counted as party building, and it wasn’t long before soft money was being put to questionable use. The soft-money system reached its apex in 1996, when the Democratic National Committee used soft money to run ads critical of Bob Dole and gave large donors lavish rewards, including overnight stays in the Lincoln Bedroom of the White House. A Senate committee report on the 1996 campaign also criticized Republican practices and recommended banning soft money and placing greater restrictions on corporate and union spending. These recommendations became law as part of the McCain-Feingold Act in 2002.  The 2010 Court Decisions After a century of campaign finance legislation, individual donors, corporations, and unions were subject to a complicated set of rules about where they could donate money and how that money could be spent. For example, in addition to prohibitions on giving directly to candidates or parties, corporations and unions were barred from making “independent expenditures” — paying for advertisements that weren’t coordinated with a campaign but that advocated for or against a specific candidate. They also were prohibited from spending on electioneering communications, which are ads that mention a candidate’s name close to an election even if they don’t expressly say “Vote for (or against) Jane Smith.” Individuals also faced strict limits on how much they could donate to groups that made independent expenditures. 
But in 2008, the conservative nonprofit corporation Citizens United produced Hillary: The Movie, a film critical of then-presidential candidate Hillary Clinton. The group wanted to run television ads for the movie, but the U.S. District Court for the District of Columbia ruled that the ads would be a violation of the rules against electioneering communications. The case eventually reached the U.S. Supreme Court. In 2010, the Court ruled that corporations (and, by extension, unions) have the same right to political speech as individual citizens, and that limiting their expenditures is a violation of the First Amendment. Previous court rulings had maintained that the government's interest in preventing corruption justified the corporate restrictions, but the Citizens United decision stated that independent expenditures are not corrupting since they are uncoordinated with a candidate. Concern about possible favoritism short of quid-pro-quo corruption thus did not justify the suppression of free speech. (The decision did not address the prohibition on corporate giving directly to candidates, which remains in place.)

Just days after the U.S. Supreme Court issued its ruling in the Citizens United case, the U.S. Court of Appeals for the D.C. Circuit heard a case brought against the FEC by SpeechNow.org, a group created to make independent expenditures. SpeechNow.org argued that the limits on how much individuals could give to the group were a violation of the First Amendment, and the appeals court agreed, noting the Supreme Court's logic in Citizens United that independent expenditures did not raise concerns about corruption. In striking down the limits on contributions to groups like SpeechNow.org, the decision created the Super PAC — an organization that can raise and spend unlimited money in an effort to elect or defeat candidates. Since the decision, more than 1,300 Super PACs have been created. (Traditional PACs, or political action committees, give money directly to candidates and are subject to individual contribution limits.)

The Rise of Outside Spending

The Citizens United ruling sparked widespread concern about corporate influence on the political process. But "Citizens United did not create the flood of corporate money that a lot of people predicted, and decried, in the aftermath," says Jenny Shen, an attorney at the law firm Hogan Lovells who has studied campaign finance laws. During the 2012 Republican primaries, for example, only about 13 percent of Super PAC money came from privately held corporations, and less than 1 percent came from publicly traded corporations. And during the entire 2012 election cycle, corporations accounted for only about 1 percent of the $6 billion in total spending. (It is possible that corporations made large contributions to "social welfare" groups that don't have to disclose their donors. Stanford University political scientist Adam Bonica has estimated, however, that these donations could have totaled at most about another $320 million, still a small share of the total spending.)

What has changed is the share of spending by outside groups relative to spending by the candidates themselves. Overall, candidate spending still outweighs outside spending; outside groups accounted for about 22 percent of the total in 2014. But outside spending is on the rise: A study by Daniel Tokaji, a professor at Ohio State University's Moritz College of Law, and Renata Strause, a clerk for the U.S.
District Court of the Southern District of Texas, found that independent expenditures on express advocacy for all congressional campaigns increased from about $50 million per election cycle during the period 1980-2008 to $200 million in 2010 and $450 million in 2012, outpacing the increase in candidate spending.

In some races, outside groups spend far more than the candidates. Kay Hagan and Thom Tillis' record-breaking race was largely funded by outside groups, which spent more than $80 million. In contrast, the race between Hillary Clinton and Rick Lazio 14 years earlier was entirely funded by the candidates' campaigns. Hagan and Tillis weren't alone; in 2014, outside groups spent more than the candidates in 28 congressional races. In 2000, that was the case in zero campaigns, according to the Center for Responsive Politics, a nonpartisan research group that tracks political spending.

These independent expenditures appear to be disproportionately funded by a few wealthy individuals. In a recent paper, Ian Vandewalker of the Brennan Center for Justice at New York University's School of Law found that just 195 donors and their spouses contributed almost 60 percent of the more than $1 billion that Super PACs have spent on Senate races since 2010. During the 2014 elections, the average donation to the Democrat-aligned Senate Majority PAC was more than $170,000; the average donation to the conservative Ending Spending Action Fund was more than $500,000.

It's tempting to attribute the rise in outside spending — and the concentration of that spending — to Citizens United and SpeechNow.org. But the shift started before 2010. "The era of wealthy donors and outside spending was definitely underway pre-Citizens United," says Shen. That era may have been an unintended consequence of McCain-Feingold's ban on soft-money contributions to political parties. As Robert Kelner, a partner at the law firm Covington & Burling, argued in a 2014 Harvard Law Review article, the law put the national party committees in a "legal vice grip," while leaving certain outside groups free to raise and spend as much as they wanted. (Even before Citizens United and SpeechNow.org, certain groups not subject to FEC oversight, such as 501(c)(4) social welfare organizations and so-called "527" groups, were allowed to accept unlimited contributions and spend unlimited amounts on "issue ads," which were often thinly disguised political ads.) In 2000, according to Kelner, the national parties aired about two-thirds of all the ads in the presidential election. During the 2004 election, after McCain-Feingold, the share dropped to about one-third and to less than one-quarter in 2008.

Still, the trend accelerated after 2010; the parties aired just 6 percent of ads during the 2012 election. One reason could be that the rules surrounding 527s and social welfare groups had been murky. This "legal cloud" likely deterred both spending and contributions, according to Richard Hasen of the University of California, Irvine School of Law. By lifting that cloud and creating the entirely legal Super PAC, Citizens United and SpeechNow.org may have encouraged more donors and more spending.

Corporations and K Street

Corporations might not spend much on campaigns, but that doesn't mean they don't care about politics. Economists have found that, particularly in countries lacking strong legal and regulatory systems, firms can receive substantial benefits from having politicians as large shareholders or top officers. Even in the United States, where there are strict rules regarding conflicts of interest, companies may benefit from having political connections. In a 2009 paper, Eitan Goldman of Indiana University, Jorg Rocholl of the European School of Management and Technology, and Jongil So, then at the University of North Carolina at Chapel Hill, found that a U.S. company's stock price tended to increase when a former politician joined the board of directors. The stock price also increased after the party of a politically connected director gained control of the presidency.

Corporations (and other interest groups) can also try to influence the political process through lobbying. In theory, lobbying is a way for informed interest groups to share information with uninformed legislators, since it's not possible for them to be an expert on every issue. But in practice, there may be truth to the belief that lobbying is a way to gain preferential access to politicians. In a 2014 paper, Marianne Bertrand of the University of Chicago and Matilde Bombardini and Francesco Trebbi of the University of British Columbia studied these two different views of lobbying. While they found evidence on both sides, overall, lobbyists appear to be compensated more for their connections than for their expertise.

Whatever the motivation, lobbying is big business. In 2014, organizations spent $3.2 billion on official lobbying. (The Lobbying Disclosure Act of 1995, amended in 2007, requires lobbyists to register and file quarterly reports on their activity.) Although official lobbying expenditures and the number of registered lobbyists have declined since the late 2000s, likely as a result of stricter regulations, a significant amount of lobbying activity occurs under other names. Some estimates put the total amount of lobbying, including unofficial lobbying, closer to $9 billion. American University professor of government James Thurber estimates there may be as many as 100,000 unofficial lobbyists, compared to the roughly 12,000 who were registered in 2014.

Lobbying also is concentrated in a relatively small number of organizations. In 2014, just 20 companies and trade associations accounted for 15 percent of the $3.2 billion spent on official lobbying that year. Research by William Kerr of Harvard University, William Lincoln of Johns Hopkins University, and Prachi Mishra of the International Monetary Fund and the Reserve Bank of India has shown that lobbying is highly correlated with firm size, and that the same firms tend to lobby from year to year. This is not surprising; there are significant upfront costs to lobbying, and smaller firms have fewer resources to employ and would in theory receive a smaller payoff for the same investment. But the concentration may be cause for concern, says Luigi Zingales, an economist at the University of Chicago. "While there is definitely informational value in lobbying, the problem is that over the years the concentration of lobbying interests has increased so that congressmen and women hear only one side of the equation. The system is not balanced." — Jessie Romero

Types of Advocacy Groups

501(c) Groups — Nonprofit, tax-exempt groups organized under section 501(c) of the Internal Revenue Code. 501(c)(4) groups are commonly called "social welfare" organizations and may engage in political activities, as long as these activities do not become their primary purpose.

527 Group — A tax-exempt group organized under section 527 of the Internal Revenue Code to raise money for political activities, including everything from voter mobilization to issue advocacy to ads asking the public to vote for or against a particular candidate.

Political Action Committee (PAC) — A political committee that raises and spends limited "hard" money contributions for the express purpose of electing or defeating candidates. An organization's PAC collects money from the group's employees or members and makes contributions in the name of the PAC to candidates and political parties.

Super PAC — Technically known as independent expenditure-only committees, Super PACs may raise unlimited sums of money from corporations, unions, associations, and individuals, then spend unlimited sums to overtly advocate for or against political candidates. Super PACs are prohibited from donating money directly to political candidates.

SOURCE: Center for Responsive Politics, http://www.OpenSecrets.org

Does Money Influence Elections?

In 1972, the late economist Gordon Tullock posed a provocative question: Why is there so little money in politics? At the time Tullock was writing, campaign spending totaled about $200 million, but the potential reward was the chance to control $230 billion in federal spending (about $1.3 trillion in today's dollars). If one viewed politics as a competitive marketplace like any other, Tullock conjectured, firms and individuals should have been willing to invest a great deal more than they were.

Campaign spending has increased dramatically since the 1970s, but so too has the size of the prize: In fiscal year 2015, the federal government will spend an estimated $3.8 trillion. So why aren't people spending more on campaigns?

One reason might be that it has been difficult to identify how, and how much, spending influences elections. While it would seem to be a straightforward question to answer — find out who spent more and see if they won — spending is influenced by a number of factors that muddy the cause and effect. Once campaign-spending data became available in the 1970s, for example, researchers identified a puzzling fact: The more challengers spent, the better they did, but the more incumbents spent, the worse they did. Eventually, researchers determined that challengers spent more when they had a high likelihood of winning, but incumbents spent more when they faced a significant threat. Rather than money influencing the outcome of the election, the likely outcome of the election changed the candidates' behavior. A similar dynamic is at play with donors; people generally don't like to invest in losing causes, so challengers attract more donations when they're doing well. Safe incumbents tend to attract less. To the extent that expected votes influence donations, this can cause researchers' models to over- or underestimate the effects of spending, as Gary Jacobson, a political scientist at the University of California, San Diego, explained in a chapter of the 2006 book Capturing Campaign Effects.

Moreover, there may be a level beyond which spending ceases to affect election outcomes. "There's not much more you can do with your money after a certain point," Jacobson says. "You've bought up all the airtime, everybody has seen your ad multiple times, voters' mailboxes are next to the recycling bin and they're just throwing your fliers away." During the 2014 campaign, one TV station in New Hampshire actually ran out of airtime and had to cancel ads that had already been purchased.

An analysis by the advocacy group Americans for Campaign Reform found that once candidates reached a certain competitive threshold, additional spending did not increase the likelihood of winning an election. "It becomes an arms race," says Jacobson. "Both sides throw in so much money that the marginal returns are impossible to detect. They're vanishingly small."

Does Money Influence Politicians?

Regardless of how money influences the outcome of elections, it might affect how politicians act once they're in office. Some research suggests that politicians are more responsive to the views of high-income constituents than those of low-income constituents. In a 2012 book, for example, Martin Gilens of Princeton University showed that on policy questions where the views of more- and less-affluent voters diverge, the views of the more affluent are likely to prevail. If 80 percent of voters at the 90th income percentile support a change, it has a 50 percent chance of passing, versus a 32 percent chance when supported by 80 percent of voters at the 10th income percentile. While it's possible this could be because higher-income citizens are more involved in the political process, research by Larry Bartels of Vanderbilt University has found no correlation between politicians' responsiveness to higher-income voters and the election turnout or political knowledge of those voters.

Some legislators also might feel pressure to vote a certain way to avoid incurring the ire of groups that can spend large amounts of money to defeat them. Tokaji and Strause surveyed numerous members of Congress who spoke of threats, both direct and implied, about the consequences of their votes. Votes are only one way a politician might show favor to a particular interest group; it's also possible that contributions could affect how legislation is drafted in the first place. While this is difficult to measure, research by Lynda Powell of the University of Rochester suggests there are circumstances in which contributions affect legislation.

Still, other research suggests that legislators are mostly influenced by their own beliefs and by the preferences of their party and voters. In a 2003 paper, Stephen Ansolabehere and James Snyder of Harvard University and John de Figueiredo of the Duke University School of Law examined the relationship between contributions and congressional votes. They concluded that contributions explained only a tiny fraction of differences in legislators' voting behavior. At the end of the day, politicians are unlikely to vote with moneyed interests if it will upset their constituents. Jacobson says, "Members of Congress care about money because they want to win elections. They're not going to sacrifice votes to get money."

Given the uncertainty surrounding political investments, Tullock might have asked why there is any money in politics. The answer might be that political contributions shouldn't always be viewed as an investment. Ansolabehere and his co-authors argued that political contributions by individuals are a form of consumption, akin to charitable donations.
In their view, individuals give money not because they expect a specific return, but because they are excited about a particular election, they're ideologically motivated, or they're asked to participate by friends or colleagues. Viewing political spending as consumption might explain why wealthy donors are willing to spend so much even when it's not clear their spending affects the outcome of an election.

Super PACs and "Dark Money"

Regardless of the uncertainty surrounding money's influence on politics, many observers remain concerned about how much is being spent and who is doing the spending. Super PACs are a particular focus of criticism. Democratic congressmen David Price and Chris Van Hollen have introduced multiple bills that would significantly curtail Super PACs. Before the 2014 midterms, Harvard University law professor Lawrence Lessig and political strategist Mark McKinnon started MayDay, "the Super PAC to end all Super PACs," with the goal of electing candidates in favor of campaign finance reform. (Two of the eight candidates MayDay supported won.) Most famously, in 2011 comedian Stephen Colbert set up his own Super PAC, which purchased ads in several markets and was widely viewed as an apt illustration of campaign spending excess.

One possible reason for concern is that Super PACs might make it easier for donors to remain anonymous. Although candidates and political committees, including Super PACs, are required to disclose their donors, the same is not true for 501(c)(4) and 501(c)(6) organizations. But since these groups can make unlimited donations to Super PACs, they may be a way for donors to cloak their giving. In his paper, Vandewalker calculated that nearly half of the money outside groups spent during the three Senate elections since 2010 was money from undisclosed sources, or what reform advocates refer to as "dark money."

The public should know where the money comes from, says Shen. "If we want to be an informed democracy, disclosure is important. Disclosure holds people accountable. It raises the level of debate and helps to ensure that the public is making informed decisions." But some groups, including the American Civil Liberties Union, oppose stricter disclosure laws out of concern that they might deter people from donating to 501(c)s and thus violate free speech protections.

Colbert shut down his Super PAC after the 2012 elections (and donated the remaining money to charity). But the Super PAC as a political entity, and the era of wealthy donors more generally, is likely to continue — as is the debate about the effects on the political system. EF

Readings

Ansolabehere, Stephen, John M. de Figueiredo, and James M. Snyder Jr. "Why Is There so Little Money in U.S. Politics?" Journal of Economic Perspectives, Winter 2003, vol. 17, no. 1, pp. 105-130.

Bertrand, Marianne, Matilde Bombardini, and Francesco Trebbi. "Is It Whom You Know or What You Know? An Empirical Assessment of the Lobbying Process." American Economic Review, December 2014, vol. 104, no. 12, pp. 3885-3920.

Jacobson, Gary C. "Measuring Campaign Spending Effects in U.S. House Elections." In Henry E. Brady and Richard Johnston (eds.), Capturing Campaign Effects. Ann Arbor: University of Michigan Press, 2006, pp. 199-220.

Tokaji, Daniel P., and Renata E.B. Strause. "The New Soft Money: Outside Spending in Congressional Elections." Ohio State University Moritz College of Law, June 2014.

Zingales, Luigi. A Capitalism for the People: Recapturing the Lost Genius of American Prosperity. New York: Basic Books, 2012.
Bottoms Up

Craft brewers raise the bar in the American beer industry

BY JAMIE FEIK AND JOSEPH MENGEDOTH

In many places across the country, it's hard not to notice the shift in product offerings at local bars and restaurants and in the beer aisle of the grocery store. The colorful, ornate tap handles of craft brewers have joined the classic blue, red, and silver posts of the traditional powerhouses, and bartenders play the role of consultant purveying the selections. Shoppers who once stood in the beer aisle trying to decide how many cans of beer to buy now stand in front of coolers filled with different brands and styles of beer available in single bottles, packs of four, six, or 12, and even on tap in a growing number of stores. Many of them have been made at a brewery down the street; according to the Brewers Association (BA), the trade association that represents the craft beer industry, approximately 75 percent of the drinking-age population in the United States lives within 10 miles of a brewery.

In 2014, 615 new craft breweries opened, pushing the number in the United States to 3,418, more than twice the number that existed just five years earlier. The BA defines a craft brewery as one that produces fewer than 6 million barrels a year, is less than 25 percent controlled by an alcoholic beverage industry member that is not itself a craft brewer, and produces a beverage "whose flavor derives from traditional or innovative brewing ingredients and their fermentation." The ownership restriction excludes the craft-style subsidiaries — such as Shock Top, Goose Island, Leinenkugel, and Blue Moon — of large brewers like Anheuser-Busch InBev and MillerCoors (the two largest brewers in the United States).

Although craft beer remains a relatively small segment of the market, accounting for only 11 percent of the beer produced in the United States in 2014, the segment is growing rapidly. Craft beer's share of production has more than doubled since 2010, when it was just 5 percent. In 2014, craft beer sales volume increased nearly 18 percent, according to the BA, versus just 0.5 percent for the overall beer industry. The retail dollar value of craft beer grew 22 percent in 2014, while the total U.S. beer market increased only 1.5 percent in value.

The growth of small breweries runs counter to the trend of consolidation in the beverage industry that persisted through much of the 20th century. Why are craft brewers thriving?

American Beer's Backstory

The recent growth of craft brewing is only the latest chapter in the centuries-old story of beer in America. Americans have brewed and consumed beer for a long time, with production dating back to some of the first European settlers in the mid-1600s. Before the Civil War, beer was mostly British-style ales or malts that were brewed locally, stored in a wooden keg, and consumed in the local tavern. In the years following the Civil War, beer became an industrial product that was mass produced and widely distributed. This change resulted in part from the large inflow of immigrants from beer-drinking countries such as Germany and Ireland, who brought both new styles of beer and a beer-drinking culture.
Higher wages for some workers and the technological advancements that accompanied industrialization also helped fuel growth in aggregate beer consumption and production. In 1865, 2,252 breweries supplied 3.7 million barrels of beer annually to Americans, who consumed 3.4 gallons per drinking-age adult. By 1910, a total of 1,568 breweries produced 59.6 million barrels, and the consumption rate had increased to 20 gallons per drinking-age adult.

The aggregate picture of the American beer industry during this chapter hides crucial brewery-level decisions that helped shape the modern American beer industry. In the late 19th century, for example, some breweries decided to leverage the growing transportation network and new technologies, such as pasteurization, bottling, and refrigeration, to expand their product reach beyond what they could sell from their own establishments. When Prohibition was enacted in 1920, these firms were relatively less inclined to sell off their assets and cease operations than their smaller competitors. They decided that rather than divest from the brewing business altogether, they could stay in business by producing near-beer malt beverages, sodas, and syrups.

While Prohibition was in effect, these firms were perfecting beverage bottling and packaging processes, developing relationships with retailers, gaining marketing experience, and improving manufacturing processes to achieve economies of scale. By the time Prohibition was repealed in 1933, they were much better situated to resume beer production than their former competitors.

Prohibition changed the market in other ways as well. Prior to Prohibition, most breweries sold their beer on draft in saloons they either owned or controlled; temperance advocates believed this system contributed to the overconsumption that led to Prohibition. As a compromise reached during the repeal, a three-tier system was adopted: Brewers would now sell their beers to an independent wholesaler who would then sell the beer to independent retailers.

The introduction of a middle-man to the market structure had several effects. First, it protected retailers from pressure from larger brewers to only sell certain products. Second, it provided small brewers with better access to the consumer, as the wholesaler would provide distribution equipment, marketing, and sales expertise — costs that could be barriers to entry for many small players. Finally, wholesalers made it easier for the government to tax alcohol and monitor overall alcohol consumption.

In the years immediately following the repeal, many breweries attempted to resume operations; the number of legal breweries rose to around 850 in 1941. But the three-tiered system that helped small brewers in some respects wasn't enough to erase the advantage held by the firms that rode out Prohibition making syrups and malts. In addition to the experience they had gained, these firms did not have to incur the fixed costs associated with opening a brewery.

After Prohibition, there was a huge unmet demand for beer. According to economists Eric Clemons and Lorin Hitt of the University of Pennsylvania and Guodong Gao of the University of Maryland, this demand could best be met by mass producing standardized products.
As they wrote in a 2006 article, all the major producers of the time "followed Anheuser-Busch [which was founded in 1862] in a race for scale-based, quality production of largely undifferentiated products." Rather than creating truly differentiated products, large producers created barriers to their competitors "through massive marketing and advertising investments intended to create perceived differentiation for otherwise similar products." As a result, advertising became a major cost component in the production of beer and forced smaller brewers out of the market. The American beer market consolidated from the end of World War II to 1980. By that year, just 101 firms were producing 188.4 million barrels of mostly American-style lager, and drinking-age adults consumed an all-time high of 23.1 gallons per year.

Craft Beer Comes to a Head
With domestic beer production heavily skewed toward light lagers, other styles of beer that were no longer being produced domestically had to be imported. From 1950 to 1972, the quantity of beer imported annually by the United States grew from less than 100,000 barrels to about 1 million barrels. Then, in October 1978, President Jimmy Carter signed a bill that legalized home brewing, and shortly after, individual states began legalizing brewpubs — restaurant-breweries that sell 25 percent or more of their house-made beer on premise. The craft brewing segment was born.

In the early 1980s, there were only eight craft brewers in the country. They found a foothold in the market by producing beer other than lagers and thus did not compete directly with the non-craft brewers. From 1980 to 1994, the number of craft breweries nationally rose to just over 500. Then, the craft beer industry began its first major boom, growing rapidly through the end of the decade; the number of breweries nearly tripled to 1,509 in 2000. Many of these breweries were very small, however, with annual production levels of between 5,000 and 100,000 barrels, according to economist Martin Stack of Rockhurst University. As a result, the market was still highly concentrated, with the three largest producers at the time (Anheuser-Busch, Miller, and Coors) accounting for 81 percent of the domestic supply.

In the early 2000s, the number of craft brewers slowly but steadily declined — perhaps as a result of "over-exuberance," as economists Victor Tremblay, Natsuko Iwasaki, and Carol Tremblay of Oregon State University wrote in a 2005 paper. In other words, the first entrants may have proved the viability of the industry, which began a cycle of entry, success, and further entry that exceeded the market's capacity, leading to the eventual exit of some players. But by 2008, craft beer was again on the rise. The number of craft breweries nationally reached 3,418 in 2014, more than double the number reached during the previous expansion.

Looking Through the Glass as an Economist
The first boom in the craft brewing industry got the attention of a group of economists who worked to identify the economic circumstances that allowed a small, specialist submarket to grow within an existing, highly concentrated market. This research explored two different explanations: resource partitioning and niche market formation. The resource partitioning model suggests that a decline in organizational diversity leads to the entry of specialized firms.
As an industry becomes highly concentrated, the large firms compete with each other for the largest segment of demand, in this case, beer drinkers who prefer lagers. This leaves room for small firms — the craft brewers — to create a product that satisfies the demand in smaller segments without directly competing with the large firms. Research by economists Glenn Carroll of Stanford University and Anand Swaminathan, now at Emory University, looked at concentration ratios and the number of small, specialist firms (microbreweries, brewpubs, and contract brewers) entering and exiting the market. They found that specialized segments of the market expand as the overall market becomes more concentrated, thus supporting resource partitioning as an explanation for the increase in craft breweries.

[Chart: Historical U.S. Brewery Count, 1890-2014. Source: Brewers Association]

In a separate article, Swaminathan compared resource partitioning to the theory of niche market formation. According to the latter theory, a specialist submarket can be created when some factor outside the control of the firm takes root. This differs from resource partitioning in that new entry is driven by factors external to the market, such as a change in consumer preferences or technology, rather than by internal factors, such as market consolidation. Swaminathan found that changing consumer preferences were the driving force in the entry decisions of potential new brewers, lending weight to niche market formation as a primary factor in the growth of small brewers.

So what do those consumers want? In a June 2014 survey by the market research firm Mintel, craft beer drinkers said "style" was the number-one factor for purchasing a particular beer, and 44 percent of respondents said they were looking for full-bodied flavor. Less than half of those surveyed said brand was a factor in their choice, which suggests they are willing to try different beers from a variety of companies, including ones they've never heard of. Adam Worcester, co-owner of Triple Crossing Brewing in Richmond, Va., describes the typical craft beer consumer as informed and interested in trying new things; someone who is "a little promiscuous with what they like to drink."

The Microeconomics of Microbreweries
Craft brewing is a segment of an existing market, but it is possible to analyze it as a distinct market itself. Craft brewing exhibits many of the properties of a monopolistically competitive market. This market classification is characterized by low barriers to entry, a large number of firms, and some ability for firms to set prices due to product differentiation. Recently, according to Clemons, Hitt, and Gao, barriers to entry have been lowered by the Internet, in particular online review forums, which as "an alternative medium for promotion and advertising…reduces the relative importance of scale, creating new opportunities for market entry." Additionally, the three-tier system established after Prohibition is especially valuable to small brewers for getting their products to consumers through already-established distribution networks, according to an economic impact analysis conducted at the University of Delaware for the National Beer Wholesalers Association.
When Worcester and his co-owners were in the planning stages of Triple Crossing Brewing, which opened in 2014, "we weren't going to distribute; we were going to serve right out of our tasting room." However, they were approached by a distributor who offered to help get their product out to local establishments. While their main business model is still to sell out of the tasting room, Worcester says that distribution is a valuable tool for building awareness of their brand.

With respect to product differentiation in the craft beer market, the word differentiation may be an understatement. In their paper, Clemons, Hitt, and Gao described the craft beer industry as "hyperdifferentiated," meaning firms have the ability to "produce almost anything that any potential customer might want." This is because craft beer makers, with their smaller-scale production, can more easily make changes to their recipes to create new flavor profiles in response to consumer demand, from stouts and porters to brown and pale ales. During the first quarter of 2014, more than 1,524 different India pale ales (IPAs) were on sale nationally, a 37 percent increase from the first quarter of 2013.

The craft brewing market is characterized by monopolistic competition, but there's a lot of friendly competition as well. Sometimes, breweries join forces to produce new beers and share the profits of the resulting collaboration. For example, Sierra Nevada, the third-largest craft brewery in the United States by 2014 sales volume, partners with brewers across the nation to create a variety 12-pack of collaboration beers called "Beer Camp Across America." Overall, "craft brewers have a very congenial relationship. Even though we compete against each other, we do help each other out," said Ken Grossman, who founded Sierra Nevada in 1980, in a 2010 interview with Beverage Industry magazine.

"People ask me all the time if I'm worried about these other breweries opening up," says Worcester. "But if all the Richmond breweries could get people to buy more Richmond beer, or more craft beer in general, that's good for everybody. We're all trying to make better beer to raise the quality of the whole industry in Richmond. It's like a big fraternity of people that want to make great beer." EF

Readings
Clemons, Eric K., Guodong "Gordon" Gao, and Lorin M. Hitt. "When Online Reviews Meet Hyperdifferentiation: A Study of the Craft Beer Industry." Journal of Management Information Systems, Fall 2006, vol. 23, no. 2, pp. 149-171.
Ogle, Maureen. Ambitious Brew: The Story of American Beer. Orlando: Harcourt, 2006.

For recent developments in the craft beer industry in the Fifth District, please see Web-exclusive material at http://www.richmondfed.org

INTERVIEW
Claudia Goldin

Editor's Note: This is an abbreviated version of EF's conversation with Claudia Goldin. For the full interview go to our website: www.richmondfed.org/publications

Harvard University economist Claudia Goldin is passionate about detective work. As a student at the Bronx High School of Science, she indulged that passion with a microscope and planned to study microbiology as a student at Cornell University. But it wasn't long before she discovered economics as a tool to delve into life's mysteries, and since then she has become known as an economic historian whose research sheds new light on the roots of present-day policy questions.

Goldin's 1990 book, Understanding the Gender Gap: An Economic History of American Women, was the first detailed accounting of how women's labor force participation and earnings evolved in the United States. Contrary to conventional wisdom at the time, she showed that the gender gap in earnings and wage discrimination were not historical constants, but rather varied across industries and over time in response to both social and economic forces. More recently, she has studied the role that workplace attitudes and policies regarding flexible working arrangements play in the persistence of the earnings gap.

In The Race between Education and Technology, her 2008 book with frequent co-author Lawrence Katz, Goldin studied the interplay between technological change, educational attainment, and wage inequality. Goldin and Katz demonstrated that the returns to education have changed over time in response to changes in the supply of and demand for educated workers. Beginning around 1980, they found, a slowdown in the pace of educational attainment sharply increased the returns to education, leading to greater wage inequality.

Prior to joining Harvard University — where she was the first woman to earn tenure in the economics department — Goldin taught at Princeton University and the University of Pennsylvania. She served as president of the American Economic Association (AEA) from 2013-2014 and is a member of the National Academy of Sciences. Jessie Romero interviewed Goldin at the National Bureau of Economic Research in Cambridge, Mass., in December 2014.

Econ Focus: Much of your work has focused on the history of women's employment in the United States. You've described the past few decades of that history as a "quiet revolution." What do you mean by that?

Goldin: The quiet revolution is a change in how young women perceive the courses their lives are going to take. One of the places we see this is the National Longitudinal Survey, which began in 1968 with women who were between 14 and 24 years old. One of the questions the survey asked was, "What do you think you're going to be doing when you're 35 years old?" In 1968, young women essentially answered this question as if they were their mothers. They would say, "Well, I'm going to be a homemaker, I'm going to be at home with my kids." Some did say they would be working in the labor market, but the fraction that said they would be out of the home was much smaller than the fraction that actually did end up working outside the home.

But as these women matured and as successive cohorts were interviewed, their perceptions of their futures, their own aspirations, began to change. And so their expectations when young about being in the labor force began to match their actual participation rates once they were older. That meant these young women could engage in different forms of investment in themselves; they attended college to prepare for a career, not to meet a suitable spouse. College women began to major in subjects that were more investment oriented, like business and biology, rather than consumption oriented, like literature and languages, and they greatly increased their attendance at professional and graduate schools.

EF: What changed in society that allowed this revolution to occur?
Goldin: One of the most important changes was the appearance of reliable, female-controlled birth control. The pill lowered the cost to women of making long-term career investments. Before reliable birth control, a woman faced a nontrivial probability of having her career derailed by an unplanned pregnancy — or she had to pay the penalty of abstinence. The lack of highly reliable birth control also meant a set of institutions developed around dating and sex to create commitment: Couples would “go steady,” then they would get “pinned,” then they would get engaged. If you’re pinned or engaged when you’re 19 or 20 years old, you’re not going to wait until you’re 28 to get married. So a lot of women got married within a year or two of graduating college. That meant women who pursued a career also paid a penalty in the marriage market. But the pill made it possible for women who were “on the pill” to delay marriage, and that, in turn, created a “thicker” marriage market for all women to marry later and further lowered the cost to women of investing in a career. EF: What happened during previous periods of change in women’s labor force participation?  Goldin: A large fraction of employment in the early 20th century, outside of agriculture, was in manufacturing. And manufacturing jobs were not particularly nice jobs. Whitecollar jobs in offices greatly expanded in the 1910s and 1920s, but they required one to be literate and possibly numerate, and women who were older at the time would not have had the education to move into those jobs. And so there developed a social norm against married women working. It was OK if you were single, it was often OK if you were an immigrant or African American, but it wasn’t OK if you were an American-born white woman from a reasonable family, especially if you had kids. New technologies further increased the demand for white-collar workers, and the high school movement produced a huge increase in women’s education during the early decades of the 20th century. More positions were created that were considered “good” jobs, those that young women could start after high school and keep after marriage with far less social stigma. The income effect and the substitution effect come from a set of preferences. If individual families have more income in a period when there are various constraints on women’s work, they’re going to purchase the leisure and consumption time of the women in the family, and the income effect will be higher. But if well-paying jobs with lower hours and better working conditions open up, then the income effect will  decrease and the substitution effect will increase and both will serve to move women into the labor force. EF: You’ve written about a “grand convergence” in men’s and women’s roles over the past century. Are there areas where that convergence is incomplete?  Goldin: Women and men have converged in occupations, in labor force participation, in education, where they’ve actually exceeded men — in a host of different aspects of life. One can think about each of these parts of the convergence as being figurative chapters in a metaphorical book. And this metaphorical book, called “The Grand Convergence,” has to have a last chapter. But what will be in the last chapter? I approached this question as a detective — I didn’t know what I was going to find. But I thought about Sherlock Holmes, and Sherlock Holmes would say it doesn’t make any sense to theorize until you have a couple of facts. 
So I went looking for facts, and I found two big pieces of information suggesting that the last chapter, which is about gender equality in pay per unit of time worked, must involve greater temporal flexibility, without large penalties for those who work fewer hours or particular schedules.

The first clue was that the gender gap in earnings per unit of time is fairly low when men and women first come out of college, or even out of high school. But then it widens enormously, until people are in their 40s, and then for older cohorts the gap starts to narrow again.

The second clue appeared when I broke down the data from the American Community Survey [an annual Census Bureau survey] by occupation. I ran a gigantic regression — there were more than 3 million observations and 469 occupations — and then graphed the residual gender gap for each separate occupation. I categorized each occupation by groups — corporate and finance, health, technology, science, etc. — and found that the occupations with the greatest gender gaps, conditional on age and some other factors, are almost all in the corporate and finance groups. Occupations with the lowest gender gaps are in the technology and science groups, although the gap is also small in some health occupations, particularly pharmacy.

One thing to note is that you can only do this breakdown for occupations with annual incomes above about $60,000. It's a different story in the lower part of the distribution, where most workers are paid on an hourly basis. Women earn less than men mainly because they work fewer hours, and those who work fewer hours earn less on an hourly basis. Across the wage distribution, the vast majority of the gender gap is occurring within occupations, not between occupations. There's considerable discussion about occupational segregation, but you could get rid of all occupational segregation and reduce the gender gap by only a small amount.
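For readers who want to see the mechanics, here is a minimal sketch in Python of the kind of exercise Goldin describes: a large wage regression, then a "residual" gender gap for each occupation. It is not her code or data; the file name, column names, and controls are hypothetical, and estimating a separate regression for each occupation simply stands in for one large regression with occupation interactions.

```python
# Illustrative sketch only, not Goldin's actual code or data.
# Assumes a hypothetical ACS-style extract with columns:
#   log_earnings, female (0/1), age, log_hours, occupation
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("acs_extract.csv")  # hypothetical microdata extract

gaps = {}
for occ, group in df.groupby("occupation"):
    # The coefficient on `female`, conditional on age and hours,
    # is that occupation's residual gender gap in log earnings.
    fit = smf.ols("log_earnings ~ female + age + I(age**2) + log_hours",
                  data=group).fit()
    gaps[occ] = fit.params["female"]

residual_gaps = pd.Series(gaps).sort_values()
print(residual_gaps.head(10))  # occupations with the smallest gaps
print(residual_gaps.tail(10))  # occupations with the largest gaps
```

Sorting the estimated gaps and grouping occupations by sector is what allows the pattern Goldin describes, with corporate and finance occupations at one end and technology, science, and pharmacy at the other, to show up in a single chart.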
Claudia Goldin
➤ Present Position: Henry Lee Professor of Economics, Harvard University; Director of the Development of the American Economy Program and Research Associate, National Bureau of Economic Research
➤ Selected Previous Positions: President, American Economic Association (2013-2014); President, Economic History Association (1999-2000); Associate Professor and Professor of Economics, University of Pennsylvania (1979-1990)
➤ Education: Ph.D. (1972) and M.A. (1969), University of Chicago; B.A. (1967), Cornell University
➤ Selected Publications: "A Grand Gender Convergence: Its Last Chapter," American Economic Review, 2014; The Race between Education and Technology, Belknap Press, 2008 (with Lawrence Katz); "The 'Quiet Revolution' That Transformed Women's Employment, Education, and Family," American Economic Review, Papers and Proceedings, 2006; numerous other articles in journals such as the Journal of Political Economy, Quarterly Journal of Economics, American Economic Review, Journal of Economic Perspectives, and Journal of Economic History

EF: So it's not just that women tend to be nurses and men tend to be doctors.

Goldin: Right. So then the question is, why are there some occupations with large gender gaps and others with very narrow gaps? There are some occupations where people face a nonlinear function of wages with respect to hours worked; that is, people earn a disproportionate premium for working long and continuous hours. For example, someone with a law degree could work as a lawyer in a large firm, and that person would make a lot of money per unit of time. But if that person worked fewer than a certain number of hours per week, the pay rate would be cut quite a bit. Or someone could work fewer or more flexible hours as general counsel for a company and earn less per unit of time than the large-firm lawyer. Pharmacy is the opposite — earnings increase linearly with hours worked. There's no part-time penalty.
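One way to formalize the distinction Goldin is drawing, offered here only as an illustrative sketch and not as a formula from her work, is to let annual earnings depend on hours worked h as

```latex
% Illustrative only: gamma > 1 gives a convex schedule (a premium for long,
% continuous hours, as in large-firm law); gamma = 1 gives the linear,
% no-part-time-penalty case (pharmacy).
w(h) = a\,h^{\gamma}, \qquad \frac{w(h)}{h} = a\,h^{\gamma-1}
```

When gamma exceeds 1, pay per hour rises with hours, so working half the hours costs more than half the earnings; when gamma equals 1, pay per hour is the same at any schedule.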
I started thinking about a very simple framework in which temporal flexibility is the important issue, and I wondered if occupations with large gender gaps are those with relatively high penalties for not putting in the hours or not attending the meeting or not going to Japan to see the client. And those are things that might be particularly difficult for parents. If women have a greater burden with respect to child care, then these occupations will be the occupations where women pay the greatest penalties. So then I began to zero in on the occupations where the penalties were the lowest and ask what was so different about them.

To do so, I went to the Occupational Information Network (O*NET), a directory supported by the Department of Labor. In O*NET, each of the 469 occupations in the census is covered and some are further subdivided, often by industry. And for each of those occupations there are hundreds of details about the job gathered, in part, by observing or surveying workers — details ranging from the strength requirements to the lighting and other ambient conditions of the workplace. But relevant to my research, O*NET provides information on: How important is face time? What types of interpersonal relationships are there? Do people work on projects independently or in teams?

This was a real beacon of light. Sure enough, the occupations in the corporate and financial sector were all skewed in the direction of having O*NET characteristics that meant employees were required to be there. And in the technology occupations, people were working more independently and there wasn't a lot of face time.

I also used longitudinal data on lawyers from the University of Michigan and survey data I collected on University of Chicago MBAs with Marianne Bertrand and Larry Katz. I also had access to data on a large sample of pharmacists. And from all these sources it became clear that the occupations with the largest gender gaps were those with the least temporal flexibility, where people are complements for each other rather than good substitutes.

Saying workers are good substitutes for each other sounds like you're commoditizing them. But it can be true even for very high-income professions. I got a note from my ophthalmologist after I had a minor procedure that essentially said, "You will probably never see me again because there are 20 different professionals in my group who can take care of you." And pharmacy, which is my favorite example, is very highly paid. For women, pharmacy is the third highest in terms of annual income for full-time employed workers. For men, it's the eighth highest.

EF: Why do you refer to the 20th century in the United States as the "human capital century"?

Goldin: In many different writings in the late 19th century and early 20th century in the United States, you start to sense that having more education, being more literate and more numerate, got you a lot further in the labor market. Contemporary economists noticed it too; Paul Douglas [who taught at the University of Chicago, among other schools, before becoming a U.S. senator] described it as an era of "noncompeting groups" — individuals who had a modicum of a high school education, let alone a college education, did phenomenally better than others, because high school education simply wasn't widespread.

Larry Katz and I used data from the 1915 Iowa state census to show that these pecuniary returns were not just a result of the shifting of individuals from blue-collar or agricultural occupations to white-collar occupations, but in fact, even within the agricultural sector more-educated farmers did better than less-educated farmers. The reasons are pretty obvious: The educated farmer did his accounting better, could figure out which crops to plant, and could read about different breeds of animals and how to protect them from disease. More-educated workers also did better than less-educated workers in the manufacturing sector and in the construction trades.

Individuals observed the high returns to education, and this unleashed a nationwide movement — in large measure a decentralized, grassroots movement — to build and staff high schools across the country. In 1910, only 9 percent of 19-year-olds in the United States had a high school diploma. That climbed up to 51 percent by 1940. There was a huge shift during the century, as the physical capital we were using became relatively less important than the mental capital we carried inside ourselves.

EF: What is the significance of the high school movement being a grassroots movement?

Goldin: The education system in the early 20th century was a decentralized system that was very open, albeit with some important exceptions, such as African Americans and certain immigrant groups. But by and large, relative to Europe, America was educating all its children. European visitors would come to the United States and be shocked by how America was wasting its resources. European countries were cherry picking which students would get a good education; they set very high standards and had national exams. We didn't. We had more of a free-for-all, grassroots, local system in which until recently there were few state exams for graduation. That served us very well by getting a large number of students to graduate from high school.

By the 1950s, U.S. high school enrollment and graduation rates were relatively high, much higher than Europe. But then various European countries started looking more like the United States; they began to pull more individuals into high schools, some via technical schools but also by expanding more general education. And many of them did so without abandoning the higher standards of the more elitist period. The United States, on the other hand, has had a very hard time adopting uniform standards. The idea has been that the different parts of the country have different demands, so we don't need to have national standards. And it's true that we do have a far more heterogeneous population. But the enormous virtue of decentralization has more recently caused some difficulty.

EF: You noted that high returns to education in the early 1900s were a major driver of the high school movement. But as you and Lawrence Katz documented in The Race between Education and Technology, the premium to education changes over time and sometimes actually declines.
Goldin: Inequality measured by labor incomes is relatively high from the earliest that we can measure it, in the late 19th  century; educated workers did very well relative to everyone else until about 1920. But then the high school movement burst forth and the supply of educated workers increased, and the quasi-rents to higher education began to decline quite a bit, which was reinforced by the Great Depression and the narrowing of the wage structure in the 1940s that Bob Margo and I termed “the Great Compression.” But in the late 1970s and early 1980s both inequality and the education premium started rising again. (This is apart from what’s happening at the very top; my book with Katz is about the bottom 99 percent.) What’s going on? You can see in the data that education, in terms of years of education or the fraction of the population that graduated high school or college, increases beginning around 1910, but then around 1980 the rate of increase slows down. The easiest way to think about it is as a race between education and technology, or between the supply of skilled workers and the demand for skilled workers. The demand for educated workers is moving out at a constant rate, and as long as the supply keeps moving out at a pretty sturdy rate it keeps the premium to education in check. But when the supply stops moving out there’s a large increase once again in the premium to educated workers. That’s the very simple one-graph story. EF: An increasing number of students are turning to for-profit colleges. What’s driving those schools’ recent proliferation? Goldin: As we’ve discussed, there are huge returns to education, and many people have great desire to gain a skill or learn a trade. But we haven’t kept up with funding community colleges, and they’re under tremendous strain. If you go to a community college, you may encounter various barriers; the courses you want are all full, or they’re only offered at times when you can’t attend because you have to work. Plus, many students arrive unprepared and might not have taken (or understood) algebra, for example. So they have to take remedial courses; they have to pay for these courses and find time to attend them, and yet they get no credit for them toward graduation. But if they walk across the street to the school they’ve seen advertised on public transportation or on late-night TV, they will find a school that is going to help them apply for their Pell Grant and a student loan. It’s going to provide career counseling and it’s not going to make them take remedial courses. For-profits really know how to get people in the door. But students end up with very big bills, and those loans have to be paid off at some point. EF: Do students at for-profit schools earn the same returns as students at nonprofit schools? Goldin: That’s an important question to answer, but it’s hard to find evidence. We don’t have IRS records matched with where a person earned their degree. So what I did with E C O N F O C U S | F O U RT H Q U A RT E R | 2 0 1 4  27  David Deming, Noam Yuchtman, Amira Abulafi, and Larry Katz was to conduct an audit study. We sent out resumes designed to look like real resumes, but we varied them by where the person went to college, either a for-profit college (online or brick-and-mortar), a nonselective public college (where the students in many ways are indistinguishable from the ones who go to for-profit colleges), or a selective public college. 
We sent them out for two major types of jobs, business jobs and health jobs, and within those types, to jobs requiring or not requiring degrees. We then compared callback rates. Callback rates aren't perfectly mapped onto what people eventually earn, but if people don't get called back they're not going to do well in the job market. We found the callback rates for business jobs were considerably lower for the candidates from the for-profit schools, particularly the online ones.

EF: What are you working on currently?

Goldin: My current project is called "Women Working Longer." I'm working with a group of people who study aging, retirement, and health. We're interested in the fact that labor force participation rates for younger women peaked in the 1990s, but that participation for older women has increased enormously. Among college graduates today, about 60 percent of those aged 60-64 and 35 percent of those aged 65-69 are in the labor force. Even among those aged 70-74, about 20 percent are in the labor force. This raises all sorts of interesting questions about why. Is it because these women were hit with divorce shocks? Do they want to retire but then they look at their savings and realize they can't retire? Or is it that the world of work has changed and they love what they're doing? There are a host of issues to study concerning family, occupations, education, health, financial resources, and retirement institutions.

EF: You just noted that labor force participation for younger women peaked in the 1990s. Is that related to the trend — widely reported in the media — of highly educated women "opting out" of the labor force?

Goldin: There really isn't any evidence for that. Heather Boushey has done some very nice work showing that there is no such thing as an "opting out" phenomenon. And Marianne Bertrand, Larry Katz, and I did a study of MBA graduates from the Booth School of Business at the University of Chicago. In our sample, which was individuals who received an MBA between 1990 and 2006, 17 percent of women were not working 10 or more years after graduation. But it's not clear that they have dropped out permanently — they might re-enter the labor force at another time. Women now have the ability to invest in their education, then marry and have kids later in life and possibly take some time off. But that doesn't mean they aren't coming back. We aren't seeing declines in the older ages. These women have at least 30 years left to their employment histories.

EF: You've spent a lot of your career digging through dusty archives or visiting old school buildings in Iowa — what excites you about historical research?

Goldin: It goes back to my passion about being a detective. That's what it's all about. The world is filled with mysteries, and somehow I have this incredibly optimistic view that I can figure them out. But there are many different moments when I look back and think, gosh, how could I have been so optimistic? For example, Cecilia Rouse and I decided that we would study the effect of orchestras switching to blind auditions. [In a 2000 paper in the American Economic Review, Goldin and Rouse found that the practice of having musicians audition behind a screen significantly increased the proportion of women in symphony orchestras.] Many orchestras did not know they had records on auditions. It wasn't that they weren't receptive to us — it was that they were disorganized. But it turned out that the orchestral manager of the New York Philharmonic had an interest in our research, and he opened up their archives (which are beautiful; they're a joy to work in). So we started writing letters to other orchestras, and they said, "Well, if you're working with the New York Philharmonic …" I remember Ceci and I went to Detroit and met the orchestral director, who said, "I don't know what we have but it's upstairs in some room, just go." Thank goodness these places didn't throw things out. Looking back, there was nothing that guaranteed we were going to find nine orchestras that had all this information about the auditions just sitting there.

EF: You spoke about your optimism that you can use economics to solve life's mysteries. Which economists have most inspired you to try?

Goldin: Gary Becker was in many ways the greatest influence. Gary's words, written and spoken, echo in my ears all the time. He is always there asking me, "Is this an equilibrium? Have you gotten to the heart of the issue?" He had this ability to use what I call the fine scalpel of the great economist to pare away all the fat and get to the heart of the problem. Bob Fogel, my other mentor, was a very bold empiricist from whom I learned a lot. There are also a host of great empiricists today, doing work like the research I mentioned earlier. These empiricists have the great ability and enormous belief that they can find some instrument to identify the effect, and I've learned a lot from their way of thinking. And, of course, Larry Katz is my constant guide and sounding board. EF

THEPROFESSION

Economics and Ideology
BY TIM SABLIK

Economists have a long history of weighing in on policy issues. In the early 19th century, British economists Thomas Malthus and David Ricardo debated tariffs in the House of Commons. Today, economists express their views on minimum wage legislation and tax reform in newspaper op-eds and blogs. Both sides in these debates bring standard economic theory and empirical techniques to support their opposing positions, which has led critics to question just how objective economics really is.

Throughout the postwar era, economics has aspired to be scientifically objective. But some have still questioned whether mathematical modeling and scientific methodology insulate economics from ideology. In a 1948 speech to the American Economic Association entitled "Science and Ideology," Harvard University economist Joseph Schumpeter described the challenge economics and other social sciences faced: "Logic, mathematics, physics and so on deal with experience that is largely invariant to the observer's social location and practically invariant to historical change: for capitalist and proletarian, a falling stone looks alike. The social sciences do not share this advantage."

This leaves more room for interpretation in the social sciences, particularly when the evidence is still developing. In a 2013 study of economists' responses to policy questions, Roger Gordon and Gordon Dahl of the University of California, San Diego found greater disagreement and uncertainty among economists on topics with less extensive economic literature. "One of the problems is that economic evidence is rarely conclusive," says Roger Backhouse, a professor of the history and philosophy of economics at the University of Birmingham. Even research pertaining to extensively studied topics can be correlated with economists' pre-existing worldviews.
In a 2014 working paper, Zubin Jelveh of New York University and Bruce Kogut and Suresh Naidu of Columbia University matched data on individual economists’ campaign contributions and petition signings with the language they used in academic papers to identify words and phrases correlated with partisan political behavior. For example, “post Keynesian” was a phrase highly associated with left-leaning authors and “free banking” was a phrase highly associated with right-leaning ones. Using this information, they developed an algorithm that predicted economists’ political ideologies on the basis of their papers with 74 percent accuracy. They also found a correlation between research results and the authors’ predicted ideologies. Left-leaning economists were more likely to report results that aligned with a liberal ideology and vice versa for right-leaning economists. Jelveh, Kogut, and Naidu note that their results do not necessarily suggest that economists are “deliberately altering  empirical work in favor of preconceived political ideas.” They explain that the correlation they find may be the result of ideology driving research, research shaping ideology, or a third factor influencing both, though they suspect that ideology is the driver. Backhouse argues that “ideologies and economic analysis are not separate.” In his 2010 book The Puzzle of Modern Economics, he discusses the evolution of economics in the 1960s and 1970s under the influences of “saltwater” economists like Paul Samuelson of the Massachusetts Institute of Technology (MIT) and James Tobin of Yale University and “freshwater” economists like Milton Friedman and Robert Lucas of the University of Chicago. Each group drew from the same underlying economic theory, but their different interpretations of evidence pertaining to the competitiveness of markets and the effectiveness of government intervention led them to develop different models and reach different conclusions. Does this view mean economic research is tainted with hidden ideology? Not necessarily. Economists use mathematical models to craft and test theories, which presents all assumptions clearly and upfront. This makes it hard to disguise any assumptions that are purely driven by ideology. As Lucas famously said, economists “ask for equations that explain what words mean.” Testing models is another way to expose theories that are based in ideology instead of the real world. Today, economists have access to huge public and private datasets electronically and can use computers to test theories in realistically simulated economic environments. The use of academic laboratories to conduct experiments has also become more prevalent and accepted in the profession in recent decades, providing another avenue for testing theories. As a result, economic models have become more sophisticated, and there are a number of issues where economists across the political spectrum have reached consensus. Even better, economists have become more skilled at figuring out which models should be applied to which settings. Finally, professional peer review, as commonly employed by academic journals, can also help minimize ideological influence. On this front, the study by Jelveh, Kogut, and Naidu offers some encouragement: The authors found no correlation between the ideology of journal editors and the ideology of articles appearing in the journals they oversaw. 
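To make the flavor of the Jelveh, Kogut, and Naidu exercise concrete, the sketch below shows one generic way a phrase-based classifier of this kind could be set up: papers by authors whose leanings are known from contribution or petition records are used to train a model that scores the phrases in everyone else's papers. It is purely illustrative, it uses off-the-shelf scikit-learn tools rather than the authors' own procedure, and the file and column names are hypothetical.

```python
# Illustrative sketch only: predicting an author's political leaning from
# the phrases used in academic papers, in the spirit of (but not taken from)
# Jelveh, Kogut, and Naidu. Data file and labels are hypothetical.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

papers = pd.read_csv("papers.csv")  # columns: full_text, author_leaning ("left"/"right")

# Two- and three-word phrases (e.g., "post keynesian", "free banking") are the
# features; logistic regression assigns each phrase a partisan weight.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(2, 3), min_df=5, stop_words="english"),
    LogisticRegression(max_iter=1000),
)

accuracy = cross_val_score(model, papers["full_text"],
                           papers["author_leaning"], cv=5).mean()
print(f"Cross-validated accuracy: {accuracy:.2f}")
```

The cross-validated accuracy plays the same role as the 74 percent figure reported in the working paper: it measures how often the phrase-based model correctly labels authors it was not trained on.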
The problem with rejecting economics as a science, says Backhouse, is that it leads to the conclusion that "anything goes." As MIT economist Robert Solow wryly put it in a 1970 article, "It is as if we were to discover that it is impossible to render an operating room perfectly sterile and conclude that therefore one might as well do surgery in a sewer." EF

ECONOMICHISTORY
BY KARL RHODES

Car Wars
Internal combustion cars zoomed past electrics more than 100 years ago, but is the horseless road race really over?

Car and Driver magazine staged a race from Detroit to New York between a 2013 Tesla Model S and a 1915 Ford Model T. Find out which iconic car won at the end of this story.

In February, the Electric Vehicle Association of America released specifications for standard plugs that would allow different makes of electric vehicles to use the same charging stations. If you don't remember reading about it, that's because this attempt at standardization occurred not last February, but in February 1914.

By 1914, electric vehicles (EVs) already had lost nearly all their market share to internal combustion cars. But at the turn of the century, EVs and steam-powered cars were leading the horseless road race. Counting motor vehicles for the first time in 1900, the U.S. Census Bureau captured data from 109 manufacturers that built 1,681 steamers, 1,575 electrics, and only 936 internal combustion cars.

In 1900, expectations for EVs were high, according to David Kirsch, associate professor of management and entrepreneurship at the University of Maryland, who has studied the technology competition between EVs and internal combustion cars. "If you had surveyed leading engineers and transportation experts, I think they would have said, 'We are just waiting for that breakthrough in battery technology, and then we are going to have electric cars everywhere.' And they probably would have said, 'Internal combustion is kind of smelly and noisy, not very efficient, and kind of dangerous. It's probably not going to go completely away, but it will just be a little sideline, never more than 2 percent of the market.' "

The actual outcome, of course, was a mirror image of expectations. Internal combustion cars won the first leg of the race, but EVs never disappeared completely from American markets. Electric delivery trucks survived into the 1920s, and electric golf carts and forklifts earned dominant roles on the links and inside factories and warehouses. Growing environmental concerns in the 1960s and the energy crisis of the 1970s rekindled interest in EVs. And during the 1980s and 1990s, some economists and historians started questioning whether automakers and motorists sped down the wrong technology path by choosing internal combustion.

Today, most economists would say that — with the invisible hand holding the reins — market forces made sure that the best horse won in the early 20th century. They also might suggest that this technology competition continues in the early 21st century. A group of engineers in Silicon Valley, for example, founded Tesla Motors in 2003 "to prove that electric cars could be better than gasoline-powered cars." And by some accounts, the company already has achieved that goal. "The Tesla Model S outscores every other car in our test ratings," raved Consumer Reports in 2013. "It does so even though it's an electric car. In fact, it does so because it is electric."

The Electric Head Start
There were several reasons why steamers and EVs led the race in 1900. Engineers had more experience with steam engines, which had been powering ships and trains for decades. More recently, electric trolley systems had revolutionized transportation in urban areas, beginning with Richmond, Va., in 1888. "The 1890s was the decade of traction — street cars, electrified trolleys — and the 1900s was going to be the decade of taking electricity off the rails," Kirsch says. "That's sort of what everyone expected."

Early conditions were favorable to EVs, agrees Stephen Margolis, professor of economics at North Carolina State University. "Automobiles were mostly used within cities for short trips — deliveries, things like that. That was the context in which electrics were at their best. You didn't have to worry so much about going someplace where you couldn't get a charge."

Internal combustion cars were faster than electrics, and the widespread availability of gasoline (a cheap byproduct of making kerosene) gave internal combustion cars much greater range. But in 1900, those advantages were less important. City streets were clogged with horses pulling buggies and wagons, and highways beyond urban areas were terrible.

The initial push for better roads came from bicycle promoters such as Albert Pope. His Pope Manufacturing Co. was the largest producer of bicycles in the United States in the 1890s, and in the late 1890s, the company also became the largest producer of motor vehicles in the United States — mostly electrics.

In 1899, Pope's former motor carriage division merged with Electric Vehicle Co. (EVC), a new venture that was trying to monopolize transportation services in major U.S. cities by developing huge fleets of electric taxicabs. EVC also acquired George Selden's patent on the internal combustion car.

"What the promoters of a project based on the electric automobile wanted with a patent for a gasoline automobile was never spelled out," wrote automotive historian John Rae in his 1965 book, The American Automobile. "However, they were shrewd businessmen with a fondness for monopoly, and it was an understandable precaution for them to secure a foothold in the gasoline car field at a time when the course of automotive development was unpredictable."

EVC's ambitious plans to build and operate 12,000 cabs failed. "About two thousand were built and put into service," Rae reported, "but they were clumsy, expensive vehicles to operate, with batteries that weighed a ton and had to be replaced after each trip." The company soon encountered major problems with its batteries, its business model, and its bottom line. When word got out that the company was losing lots of money, the press dubbed it the "Lead Cab Trust." EVC went into default in 1907, and its failure was blamed largely, perhaps unfairly, on electric vehicle technology, Kirsch says. "Far from creating an opportunity for the future development of an electric-vehicle-based urban system, the shadow of EVC hung over the industry for years."

Internal Combustion Takes the Lead
Internal combustion cars were dirty, noisy, and smelly, and their crank-starting mechanisms were physically demanding
“There was a joke that went around that nobody is going to want a gas car because no one is going to want to sit on top of an explosion,” says Matt Anderson, curator of transportation at the Henry Ford Museum in Dearborn, Mich. Despite their drawbacks, internal combustion cars were gaining market share rapidly. By the time Motor World published the 1900 census data on automotive manufacturing in September 1902, internal combustion cars had taken the lead. The 1900 census “conclusively shows that conditions two years ago were not as they are today,” the journal noted. “Gasoline undoubtedly leads in the total output of the three different classes of motor vehicles at the present time.” Kirsch contends that this turning point was created in part by consumer expectations that a miracle battery was “only a day away” — so it seemed prudent to postpone buying any electric car. Thomas Edison heightened those expectations in 1901, when he announced that he was on the brink of a major breakthrough. Edison’s iron-nickel battery was better (and more expensive) than the lead-acid batteries of his day, but its performance fell far short of miraculous, and it did not make it to market until 1909 (not counting a false start in 1903). “During this period, many would-be electric drivers either bought no car at all or bought an internal combustion vehicle,” Kirsch says. EVs cost significantly more than internal combustion cars in 1900, but price was not the key issue at that time because automobiles were almost exclusively rich men’s toys. The ability to tour the countryside was far more important to these men than short city trips, and internal combustion was by far the best option for touring. Some wealthy families owned both an EV and an internal combustion car. But early automakers realized that it would be a stretch for most families to afford even one automobile. So they were trying to develop a “universal” vehicle that could satisfy the vast majority of service requirements at a price middle-class families could afford. While EV enthusiasts waited for Edison to break the battery barrier, the top makers of internal combustion cars continued to improve their products, lower their prices, and increase their market shares. Most notably, Olds Motor Works produced more than 5,000 low-priced Oldsmobiles in 1904, more than the auto industry’s combined output of EVs and steamers. In 1908, the Ford Motor Co. started selling the Model T for about $850, and it became a complete market-changer. Ford’s advances in mass production allowed the company to increase quality and decrease cost at the same time. This development made it virtually impossible for EVs to compete for the mass market. The Model T’s price plummeted to $600 by 1912, the same year when Cadillac started selling cars with Charles Kettering’s electric starter motor. Kettering’s innovation “took away the final big hurdle to driving a gasoline car,” says Anderson at the Henry Ford Museum. “I would say that was the last nail in the coffins of the electric car and the steamer.” E C O N F O C U S | F O U RT H Q U A RT E R | 2 0 1 4  31  Technology Choice The horseless road race among EVs, steamers, and internal combustion cars has been called a quintessential technology choice. “The end result would have enormous consequences for the remainder of the twentieth century, economically and environmentally,” wrote automotive historian John Heitmann in his 2009 book, The Automobile and American Life. 
Did internal combustion win because it was inherently superior? Noted automotive historian James Flink says yes. But Kirsch insists that other key factors also contributed to internal combustion’s rise and EVs’ demise between 1900 and 1912. EVC, the electric taxicab company, may have bet on the wrong horse, he concedes, but the company also bet on the wrong business model: EVC relied on what Kirsch calls the “service model” of centralized public transportation instead of the “product model” of decentralized individual ownership. Individually owned internal combustion cars owed much of their success to the nationwide distribution network for kerosene and gasoline that was already in place in 1900. In sharp contrast, electrification — the new standard for public transportation — was almost nonexistent in small towns and rural areas. So the charging infrastructure was good enough to support electric taxicabs in urban areas, but it was inadequate for individually owned EVs that ventured much beyond city limits. An electric’s range was generally 40 to 60 miles on good, flat roads, and it took many hours to recharge its battery. Social factors were significant too. Many women preferred EVs because they were relatively clean, quiet, odorless, and easy to start. Henry Ford’s wife, for example, drove an EV. But most men preferred the superior range and power of internal combustion cars. And in 1900, men generally drove the cars and made the car-buying decisions. Entrepreneurs also made a difference. “No outstanding automotive engineer appeared in an entrepreneurial role in connection with either steam or electric automobiles,” Rae wrote in a 1955 article in Explorations in Entrepreneurial History. Edison and Ford collaborated briefly on two experimental electrics, but Ford was an internal combustion engineer from start to finish. “I don’t think he ever really considered any other power source,” Anderson says. “He would have appreciated the advantages in a gasoline engine being much lighter than either a steam-powered plant or an electric plant.” At least one historian, the late Charles McLaughlin of American University, claimed that steamers would have won the technology competition if Ford had chosen steam over internal combustion. In a 1965 speech to the Steam Automobile Club of America, McLaughlin noted that the Stanley brothers, Freelan and Francis, were building excellent steam-powered cars — Stanley Steamers — in the early 1900s. But the brothers eschewed mass production, ultimately leaving steam cars, like EVs, out of the running. “The great triumph of mass production has left us without much technological choice,” McLaughlin lamented. “It would be easier to assume that the smog-producing 32  E CO N F O C U S | F O U RT H Q U A RT E R | 2 0 1 4  automobile of today is the end product of a technological evolution which has been automatically beneficient — that technical progress is unfaltering and always in the right direction. But I think we must look again at this story.”  Path Dependence and Lock-In The market dominance of internal combustion cars is an example of what economists call path dependence. In other words, events at the beginning of the 20th century established a technology path based on internal combustion that cannot be abandoned without incurring substantial costs. As early adopters of automotive technologies experimented with EVs, steamers, and internal combustion cars, they gradually learned which technology served them best. 
At some point between 1900 and 1905 — for a variety of reasons — the vast majority of those early adopters chose internal combustion cars, and as their numbers grew, the relative appeal of internal combustion was magnified. “People who learned to drive in their parents’ or friends’ car powered by an internal combustion engine almost certainly were drawn to similar cars,” wrote economist Richard Nelson in his 2005 book, Technology, Institutions, and Economic Growth. “At the same time, the ascendency of automobiles powered by gas-burning internal combustion engines made it profitable for petroleum companies to locate gasoline stations at convenient places along highways. It also made it profitable for them to search for new sources of petroleum, and to develop technologies that reduced gasoline production costs. In turn, this increased the attractiveness of gasoline-powered cars to car drivers and buyers.” Similar network effects could have accrued to other horseless technologies, concluded Nelson, who studied and taught economics at Yale and Columbia. “If the roll of the die early in the history of automobiles had come out another way, we might today have steam or electric cars.” Nelson is echoing the arguments of former Stanford economist Brian Arthur, who asserted in 1989 that seemingly insignificant historical events can sometimes give a significant head start to an inferior technology that becomes locked-in even when another technology would be significantly better. Margolis, the economics professor at N.C. State, takes the more conventional view that if motorists ever decide that EVs or steamers would serve them better, entrepreneurs would facilitate a transition as soon as the total benefits of switching — including attractive profits for the entrepreneurs ­— clearly exceed the total costs of switching. “I am skeptical, but for all we know, Tesla is doing that right now,” he says. One wrinkle in this cost-benefit calculation, however, is accounting for the cost of air pollution caused by internal combustion cars versus that of EVs. It is impossible for governments to reduce or redistribute these costs with 100 percent equity, but tighter emission standards, a carbon tax, or significantly higher gas taxes could favor EVs or some other technology from the past, present, or future. These measures also could favor various combinations of those technologies.  “Someone said a few years ago that the Prius was ‘yestertech’ and that electric cars were the future. But the reality is that nearly every manufacturer that makes a car now makes hybrids,” said Bill Reinert, the retired national manager of Toyota’s advanced technology group. In an interview with Yale Environment 360, Reinert promoted gas-electric hybrids as the most promising motive technology. “If you look at Le Mans race cars, they’re all 230-mile-per-hour hybrids that have both phenomenal power and phenomenal fuel economy. And we continue to improve them.” Reinert predicted, however, that the market for EVs will remain small. “Given that the bar gets raised all the time, it’s hard to see where the case for an electric car really comes in. Is it for carbon reduction? No, you’d have to decarbonize the whole grid to make that case, and that’s not likely to happen.”  Back to the Future? Plugs for charging stations still have not been standardized, as the EV association recommended in 1914, but some of the other barriers that stymied the development of EVs in 1900 are shrinking. 
Electricity is cheap and ubiquitous in the United States, and EV batteries are far more energy-efficient, reliable, and durable. Do these developments, coupled with concerns about carbon emissions, signal an EV resurgence? EV enthusiasts have been predicting the second coming of electric cars since the mid-1960s. “From the New York Times to Motor Trend, one can hardly pick up a newspaper or magazine today without encountering an article about electric automobiles,” Arizona anthropologist Michael Brian Schiffer wrote in his 1994 book, Taking Charge: The Electric Automobile in America. “Even television is lavishly covering ‘the car of the future.’ ” Two years later, General Motors started a pilot program to lease its EV1 electric vehicle to a small group of early adopters, but the company scrapped the  market test — and the cars themselves — when the experiment became too costly. Since then, a new breed of automotive engineers have picked up the EV baton, but even well-funded Tesla Motors still struggles with some of the same challenges that discouraged the adoption of EVs more than 100 years ago. The Tesla Model S price — starting around $70,000 — is beyond the reach of most middle-class motorists. And charging the battery every 200 miles or so takes the spontaneity out of cross-country touring. Car and Driver magazine recently staged a road race between a 2013 Tesla Model S and a 1915 Ford Model T from Detroit to New York. The Model S won the 682-mile race by about one hour, but only because the Model T experienced a breakdown along the way and because it had to take a less-direct route to avoid expressways. In a forum on Tesla’s website, owners of the Model S noted that the Model S would have won easily if Car and Driver had waited until Tesla installed supercharging stations along the route. They also noted that after-market modifications had made the Model T significantly faster than it was in 1915. (Car and Driver highlighted both of these issues in its coverage of the race.) “It’s not an exact comparison,” Margolis concedes, “but I find the race interesting in the context of the claim that electric cars could have been better than contemporary gasoline cars.” The Tesla Model S represents “the best of contemporary technology — taking advantage of all we have learned about electronics, semiconductors, electric motors, and materials.” On the other hand, “the Model T was not the best car in 1915. It was just the best value. So they are comparing an elite car from our era to the workman’s car from 1915, and the workman’s car didn’t come out too badly.” EF  Readings Flink, James J. America Adopts the Automobile, 1895-1910. Cambridge, Mass.: The MIT Press, 1970. Heitmann, John A. The Automobile and American Life. Jefferson, N.C.: McFarland & Company, 2009. Kirsch, David A. The Electric Vehicle and the Burden of History. New Brunswick, N.J.: Rutgers University Press, 2000.  Pund, Daniel, and Don Sherman. “The Race of the Centuries: 2013 Tesla Model S vs. 1915 Ford Model T.” Car and Driver, February 2014, pp. 66-74. Rae, John B. The American Automobile. Chicago: University of Chicago Press, 1965. Schiffer, Michael Brian. Taking Charge: The Electric Automobile in America. Washington, D.C.: Smithsonian Institution Press, 1994.  Check out our Web-exclusive interview with Dani Rodrik, an economist at the Institute for Advanced Study and soon to be returning to Harvard University, who studies globalization, economic growth and development, and political economy.  
http://www.richmondfed.org/publications/research/econ_focus/2014/q3/interview.cfm

AROUNDTHEFED

How Real is the U.S. Manufacturing Revival?
BY LISA KENNEY

"The Competitiveness of U.S. Manufacturing." Federico J. Díez and Gita Gopinath, Federal Reserve Bank of Boston Current Policy Perspectives No. 14-3, June 2014.

The U.S. manufacturing share of GDP increased every year between 2010 and 2012, prompting suggestions of a revival in the sector, according to a recent paper from the Boston Fed. In light of these GDP data, authors Federico Díez and Gita Gopinath set out to discover if U.S. manufacturing is truly gaining an edge against foreign competition. Their answer: No, but it might happen in the not-too-distant future. To determine whether the increase in GDP share reflected an improvement in the competitiveness of U.S. manufacturing — or, perhaps, a temporary shrinking of the U.S. financial sector following the Great Recession — the authors looked at data for 1999 to 2012 on the U.S. import ratio (the share of domestic U.S. demand met by imports). They found that the competitiveness of U.S. manufacturing had not increased overall. The result of their data analysis is not all negative with regard to U.S. trade balances, however. In energy-intensive industries, there was a relatively large decline in import ratios. The authors also note that labor costs are declining in the United States relative to the rest of the world. This energy channel and labor cost channel are considered recent phenomena, and the authors conclude that it is possible that these two channels may interrupt the "historical trend of rising import shares for the United States."

"Are Concerns About Leveraged ETFs Overblown?" Ivan T. Ivanov and Stephen L. Lenkey, Federal Reserve Board Finance and Economics Discussion Series No. 2014-106, November 2014.

Leveraged exchange-traded funds are often seen as contributing to the volatility of financial markets, but according to research from the Federal Reserve Board of Governors, these ETFs are falling victim to "exaggerated" concerns. Leveraged and inverse ETFs "track a multiple of the performance of an underlying index, commodity, currency, or some other benchmark over a specified time frame, which is usually one day." The belief in their volatility comes from the idea that they exert upward price pressure on the underlying assets with positive returns and downward pressure on assets with negative returns — a belief based, in turn, on the perception that leveraged ETFs rebalance their portfolios in the same direction as the returns on their assets. Ivanov and Lenkey argue that critics likely ignore the effects of capital flows — money moving in and out of ETFs as investors buy and sell shares — on the rebalancing of leveraged ETFs. The authors claim that capital flows "substantially reduce the need for ETFs to rebalance when returns are large in magnitude and, therefore, mitigate the potential for these products to amplify volatility." For instance, the rebalancing of an ETF's portfolio has the largest effect on volatility when the underlying returns are large — but capital flows mitigate the need for rebalancing in these cases. The key is that capital flows change the size of an ETF, which then alters the amount of leverage needed to reach the target leverage ratio.
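As a rough illustration of that mechanism (a stylized sketch with made-up numbers, not the authors' model), the snippet below computes the end-of-day trade a hypothetical 2x fund would need, with and without an offsetting investor redemption:

```python
# Stylized sketch (illustrative numbers only) of why capital flows can offset
# a leveraged ETF's end-of-day rebalancing trade.
def rebalance_trade(nav, leverage, index_return, net_flow=0.0):
    """Dollar exposure the fund must buy (+) or sell (-) to restore its target leverage."""
    exposure_before = leverage * nav                              # target exposure at the start of the day
    exposure_after = exposure_before * (1 + index_return)         # the position drifts with the index
    nav_after = nav * (1 + leverage * index_return) + net_flow    # NAV moves by the levered return, plus flows
    return leverage * nav_after - exposure_after                  # trade needed to hit the target again

# A 2x fund with $100 million in assets on a day the index gains 5 percent
# must buy about $10 million more exposure, in the same direction as the return:
print(rebalance_trade(100e6, 2, 0.05))
# If investors redeem $5 million after the gain, the required trade falls to zero:
print(rebalance_trade(100e6, 2, 0.05, net_flow=-5e6))
```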
The authors use a sample of U.S. equity-based ETFs to determine that capital flows are frequent and that they offset the need for portfolio rebalancing, therefore lessening the potential for these ETFs to exacerbate volatility.

"Home Hours in the United States and Europe." Lei Fang and Cara McDaniel, Federal Reserve Bank of Atlanta Working Paper No. 2014-5, June 2014.

When it seems as if there just aren't enough hours in the day, how does one decide how to divide his or her time between work and home? Researchers at the Atlanta Fed have asked this question and discovered that, over the last 50 years, the amount of time people spend engaged in "home hours" has declined in both the United States and Europe. They looked at data that breaks a person's day into two categories: home hours and market hours. Home hours include household work such as cooking and cleaning, as well as shopping, errands, home repair, and child care. Market hours are all time spent working for pay and commuting to and from work. Combined work is the sum of market and home hours. The authors say "the allocation of time for home activities not only is interesting in itself but also may be important for facilitating our understanding of the market labor supply." They find breakdowns by sex and age group to be of particular interest. They found that women in all countries reduced their home hours, while men in almost all countries increased their home hours. In all countries, the women's decline occurred at a much larger rate than the men's increase. This leads the authors to conclude that the overall decline in home hours is a result of female time-allocation decisions. Looking at age groups, the researchers found that members of the prime-age group (25-54) tend to have a more equal allocation of time between the two categories of hours than do the young and old groups. The authors also found that across countries, decades, and sexes, the young spent less time at home and the old spent more time at home than the prime-age group. EF

BOOKREVIEW

Valuing Economists

TRILLION DOLLAR ECONOMISTS: HOW ECONOMISTS AND THEIR IDEAS HAVE TRANSFORMED BUSINESS
BY ROBERT E. LITAN
HOBOKEN, N.J.: JOHN WILEY & SONS, 2014, 363 PAGES
REVIEWED BY DAVID A. PRICE

In 1951, rhythm and blues singer Louis Jordan posed the musical question, "If You're So Smart, How Come You Ain't Rich?" Economists have been on the receiving end of that question in the decades since. Brookings Institution economist Robert Litan, in Trillion Dollar Economists, implicitly offers an answer: Just as Thomas Edison captured only a minuscule share of the value of the light bulb, the value created by economists has flowed to companies and to society at large. Writing in a breezy, conversational style — he says he was inspired by the Freakonomics books — Litan argues that the ideas of economists have been crucial to improving the performance of firms and creating the business models of new ones. After a brief introduction to a few of the field's big ideas, such as marginal analysis and market failure, he sets out on a tour of industries that have directly benefited, he says, from economists' insights. Some of those insights involve strategies for price-setting. Students of economics might assume every firm is among the beneficiaries here, with all of them relying on the lessons of microeconomics 101. But no: Litan concedes that at least for new products, economic theory gives a firm relatively little practical information about where to set prices.
“The best you can do is price by trial and error, or you can fix a price and add more value to the product or service over time and hopefully convince customers to buy it.” What Litan does point to are more specific applications of economic theory to price-setting, especially in the context of auction design. Clever economic ideas about auctions, of course, have built fortunes. Litan recounts the creation of Google’s algorithm for auctioning its online ads, in which the winner pays a penny more than the second-highest bid (to simplify somewhat) — an algorithm that has long powered the company’s financial engine. Based on its experience with ad auctions, Google also used a novel auction process to sell its shares when it went public in 2004. The travel booking company Priceline patented an auction-like process, the “name your price” conditional offer, and employed it to help hotels and airlines sell unsold space to the most price-sensitive travelers. Litan notes that the concept of price discrimination, charging consumers different prices for the same product based on their price sensitivity, was itself the work of a  University of Cambridge economist, Frank Ramsey, as well as others who built on his research. Other areas of business innovation — and policy innovations beneficial to businesses — that Litan credits in whole or in part to economists range from index-based mutual funds to the design of more efficient dating “markets” for dating websites to the deregulation of trucking, airlines, and energy. Looking ahead, he foresees the resurgence of prediction markets and expresses hope for the adoption of financial engineering to support medical discoveries (in particular, “research-backed securities” to fund testing of drugs and earn royalties on the successful ones, an idea proposed by Andrew Lo of the Massachusetts Institute of Technology). A mildly disorienting aspect of Trillion Dollar Economists arises from Litan’s benignly imperialistic view of economics. Although his stated mission, in part, is to see that economists get due credit for their “largely overlooked” contributions, many of the innovations he describes did not actually come from economists — as he makes clear. For instance, although the method behind Google’s ad auctions was previously the subject of Nobel Prize-winning work, Google engineers developed it independently. Some other areas on which he reports, such as big-data analytics, are mainly the provinces of statisticians. Presumably Litan’s view is that these individuals fit his argument because they have been practicing economics even though they aren’t economists themselves. But then his argument seems to become almost tautological: If one views anybody whose ideas influence business as a practitioner of economics, it isn’t surprising that one would conclude practitioners of economics have influenced business. Regardless of whether Litan’s examples necessarily support his claims for economics, his thesis is accurate: Economists and their ideas are influential in business. Certainly they’re well represented. According to a 2006 paper by Patricia Flynn and Michael Quinn of Bentley College, economics is the third most common major among large-company (S&P 500) CEOs, after business administration and engineering. The top 20 U.S. business schools in 2003-2004 had over 500 economists on their faculties teaching future business and finance leaders. A National Science Foundation survey in 2013 found that 19.3 percent of economics Ph.D.s work for companies directly. 
A number of prominent technology companies have economic researchers on staff, including Google, Microsoft, eBay, and Airbnb. Besides correcting what he sees as underappreciation of economists' roles, Litan notes that he has another agenda — namely, to set out a late-career statement of affection for the field and its people. Economists and non-economists alike will probably find his enthusiasm infectious. EF

DISTRICTDIGEST

Economic Trends Across the Region

Building the Aerospace Cluster in South Carolina
BY RICHARD KAGLIC

At the time of the Wright Brothers' first successful powered flight at Kitty Hawk, N.C., in 1903, few recognized just how big the industry would become or how transformative the location decisions of aircraft companies would be to regional economies. Today, aircraft manufacturing generates a tremendous amount of economic activity in clusters such as the Puget Sound area of Washington, Southern California, and St. Louis, Mo. — and, more recently, in South Carolina. State governments that recognize the tremendous economic value that aircraft manufacturing can bring to their communities are actively courting such plants to bolster their aerospace clusters.

Boeing's 2009 decision to locate a 787 final assembly plant in North Charleston made South Carolina one of only two states with a large civilian aircraft final assembly plant. (Alabama will make it three when Airbus completes its A320 family assembly plant in Mobile later this year.) It is just the third site worldwide that is capable of assembling and delivering twin-aisle aircraft. Boeing's two decisions — first, to pursue the 787 project, and second, to locate a final assembly plant in South Carolina — resulted in a "big bang" for aerospace manufacturing in the state, creating an industry cluster out of virtually nothing.

Inevitably, when a cluster grows so rapidly in such a short period of time, there are bound to be growing pains. The area around North Charleston, where the 787 assembly plant is located, is already suffering from shortages of skilled labor. And a Chamber of Commerce-sponsored report on the outlook for skills gaps in the region paints a challenging picture. How quickly South Carolina is able to build up its human and capital infrastructure will go a long way toward determining how much bang the state will get from its incentive bucks. This article explores why aircraft manufacturing facilities are such attractive economic development targets, and how well positioned South Carolina is to maximize the return on its economic development investment in the aerospace manufacturing cluster.

Targeting Aerospace Clusters

Targeting industry clusters is a common regional development strategy, and for good cause. Economic theory suggests there are considerable benefits to having similar businesses agglomerate in a region. Most notable among the benefits are the synergies and efficiencies that clustered firms can derive from attracting labor with specialized skill sets to the region, as well as inputs common to the production process. Moreover, productivity within the cluster increases as knowledge "spills over" from one industry participant to another. An aircraft final assembly plant falls into a more narrowly defined industry cluster known as a traded, or exporting, cluster.
As opposed to a non-traded industry cluster, where the majority of the industry's output is consumed locally, traded industry clusters sell the majority of their output outside the region. State and local economic development entities have limited funds, so they strategically focus those resources toward industries, or firms within industries, that will provide the highest return on investment and limited risk. Two of the most important criteria in decisions to deploy economic development dollars are the potential for strong growth over the long run and the creation of high-paying, high-value-added jobs.

Growth Potential

With regard to the first investment criterion, potential for growth, the outlook for manufacturing of large civilian aircraft is quite favorable. The demand for these aircraft is a function of the demand for air transportation. As the global economy becomes ever more connected, and consumers and businesses in developing economies become more affluent, demand for air travel is expected to grow steadily for decades to come. The International Air Transport Association forecasts that the number of boarded passengers worldwide will increase from roughly 3.3 billion in 2014 to 7.3 billion by 2034. That is an average annual increase of 4.1 percent over the 20-year span (a back-of-the-envelope check of that arithmetic appears at the end of this section). Increasing air travel means stronger demand for civilian aircraft.

Moreover, with expectations that air transportation will be increasing in all regions, the demand for commercial jetliners is geographically diverse. The first 787 that rolled out of Boeing's North Charleston final assembly plant was destined for Air India, and the vast majority of that platform's orders are coming from foreign-owned and operated airlines. As of the first quarter of 2015, more than 70 percent of Boeing's 787 backlog was destined for foreign carriers. More geographic diversity in a company's orders limits its exposure to economic downturns in one region or another.

In addition, producing large civilian aircraft is a very complex undertaking that requires a highly specialized, high-tech set of inputs. Thus, civilian aircraft manufacturing is a subset of a larger and rapidly growing cluster of goods-producing and service-providing industries: aerospace. Components of the broader aerospace manufacturing cluster include, among others, aircraft and parts manufacturing (civil and defense related); search, detection, guidance, and instrument manufacturing; and guided missile and space vehicle manufacturing. All of these manufacturing pursuits have something in common: powered flight. As a result, the core components of aerial vehicles are made up of precision parts and specialized materials that are held to a higher standard of quality. This is because the movements are more complex, and the costs of component failure so much higher, for vehicles that leave the ground. Thus, many of the materials, parts, or components used in civilian aircraft can be adapted for use in other aerospace pursuits (military aircraft or unmanned aerial vehicles, for example) and vice versa. So in terms of economic development recruitment, Boeing South Carolina certainly offers high growth potential in a fast-growing manufacturing cluster. Moreover, given the level of investment the company has made in its facilities in the state, there is virtually no risk that the company will close the facility for at least a generation.
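Here is the back-of-the-envelope check promised above. The passenger counts are the figures quoted in the text; the calculation is simply the standard compound-growth formula:

```python
# Back-of-the-envelope check of the IATA passenger forecast cited above:
# roughly 3.3 billion boarded passengers in 2014 growing to 7.3 billion by 2034.
passengers_2014 = 3.3e9
passengers_2034 = 7.3e9
years = 2034 - 2014

# Compound average annual growth rate implied by the two endpoints.
cagr = (passengers_2034 / passengers_2014) ** (1 / years) - 1
print(f"Implied average annual growth: {cagr:.1%}")  # roughly 4.1 percent, as reported
```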
[Chart: South Carolina Average Annual Wages, in thousands of dollars, 2005-2013, showing the average aerospace manufacturing wage, the average manufacturing wage, and average wages overall. SOURCE: Bureau of Labor Statistics Quarterly Census of Employment and Wages]

Job Quality

The second key criterion for investing economic development dollars is the number and quality of jobs being created by the targeted cluster. On this score, the aerospace manufacturing cluster ranks high as well. Employment growth in aerospace product and parts manufacturing was a big boost to South Carolina's manufacturing sector, which was particularly hard hit during the Great Recession. The Bureau of Labor Statistics (BLS) estimates that there were only around 450 workers employed in the state by firms classified in the aerospace product and parts manufacturing industry in 2005. By 2013, that number had increased more than 14-fold, to roughly 6,500 workers. Employment growth in the state began to increase rapidly in 2008, when Boeing started to buy out some of the companies and joint ventures that were supporters of the 787 project in North Charleston and consolidated those operations. Those new jobs were particularly welcome during the first two years coming out of the trough of the jobs recession. Aerospace product and parts manufacturing was responsible for approximately 23 percent of all net new manufacturing jobs created in the state between 2010 and 2012, despite accounting for only 1.5 percent of the state's total manufacturing job base.

And the jobs created in aerospace manufacturing are well compensated. The average annual wage for workers in South Carolina's aerospace product and parts manufacturing industry was $80,757 in 2013, which is 52 percent higher than the average manufacturing wage in the state and more than twice the state's economy-wide average wage. Moreover, average wages are increasing faster in the industry than in manufacturing or across the state's economy (see chart).

Does South Carolina Have 'The Right Stuff'?

Landing the Boeing plant is more than just a success, however. It represents a tremendous opportunity for South Carolina. While the aerospace product and parts manufacturing industry has seen significant growth in the state between 2005 and 2013 as Boeing's 787 project advanced, there is considerable room to expand further as more firms concentrate in the state.

One of the ways in which analysts measure industry concentration in a region is by calculating employment location quotients. Location quotients, or LQs, are a measure of relative concentration that compare an area of interest to a base area (in this case, South Carolina relative to the United States). To calculate an LQ for South Carolina's aerospace product and parts manufacturing industry, one calculates the industry employment share for the state (aerospace employment divided by total employment) and then divides that result by the comparable measure for the nation (a short worked example appears below). An LQ of 1.0 indicates that the industry employment concentration in the state is the same as the national concentration. If the LQ is greater than 1.0, the region is said to have a heavier employment concentration in the industry; if the LQ is less than 1.0, it has a lighter industry employment concentration.

The chart below shows the aerospace product and parts manufacturing employment LQs for South Carolina between 2005 and 2013. There are two striking points to take away from these data. First, the industry concentration is growing quickly in South Carolina. Second, despite that rapid increase, the state's location quotient in 2013 was still just 0.948, indicating that aerospace product and parts manufacturing accounted for a smaller share of total employment in South Carolina than it did in the nation as a whole.

[Chart: South Carolina Aerospace Manufacturing Density, location quotient, 2005-2013, with a reference line at 1.0 labeled "Equal to U.S." NOTE: The location quotient is the industry's employment share for the state (industry's employment divided by total employment) divided by the equivalent figure for the nation. SOURCE: Bureau of Labor Statistics Quarterly Census of Employment and Wages]

With 787 production ramping up, and Boeing's footprint expanding in the state, that location quotient will increase. How much it changes depends on aerospace firms' location decisions in the future. In the near term, regions are not going to be competing for final assembly plants; those decisions are made very infrequently and with long lead times. But South Carolina's existing production facilities will be competing with those in other states for large component projects, especially as new variations on the existing 787 platform are developed. And the state will also compete for all of the firms that augment the aerospace product and parts manufacturing industry. There are myriad industries, both goods-producing and service-providing, that support the cluster. For example, there are firms that produce the lightweight, high-strength metals and composites used in aerospace applications, which may choose to locate or expand in the state, as well as firms that forge, machine, and mold those materials. Similarly, there are a host of services provided to aerospace product and parts manufacturing firms, such as engineering services and staffing services firms, which can help build out the cluster.

There are several factors that determine how competitive a region is in its pursuit of aerospace-related firms, whether goods-producing or service-providing. Two of the most important location considerations are incumbency and labor availability. Incumbency refers to a region's existing aerospace footprint. In that respect, having a final assembly plant in South Carolina provides the state with a sizable competitive advantage over most states as long as the plant is in operation, particularly when it comes to platform-related, large-scale components. Yet South Carolina is not the only state with such an advantage. Washington state, Kansas, Texas, and North Carolina, often mentioned in industry competitiveness assessments as the primary competitors to South Carolina for aircraft product and parts manufacturing firms, also have large and well-established aerospace clusters. Thus, the determining factors in those decisions may come down to labor factors: cost, labor-management relationships, and skills. South Carolina has some key competitive advantages in this regard — as well as some challenges.

Labor Costs and Relations

Average wage rates in South Carolina are lower than the nationwide averages, including those for the manufacturing industry broadly and the aerospace product and parts manufacturing industry specifically. Moreover, labor-related taxes such as those for unemployment insurance and workers' compensation are competitive.
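Before turning back to labor-market considerations, here is the short worked example of the location quotient flagged above. The 6,500-worker figure is the 2013 state aerospace employment level cited in the article; the state and national totals below are illustrative placeholders, not actual BLS values, chosen only to show how an LQ just below 1.0 can arise:

```python
# Minimal sketch of the location quotient (LQ) calculation described above.
# Only the 6,500 figure comes from the article; the other inputs are placeholders.
def location_quotient(industry_emp_region, total_emp_region,
                      industry_emp_nation, total_emp_nation):
    """Regional industry employment share divided by the national industry share."""
    region_share = industry_emp_region / total_emp_region
    nation_share = industry_emp_nation / total_emp_nation
    return region_share / nation_share

lq = location_quotient(
    industry_emp_region=6_500,        # SC aerospace product and parts employment, 2013 (from the article)
    total_emp_region=1_900_000,       # placeholder: SC total employment
    industry_emp_nation=500_000,      # placeholder: U.S. aerospace product and parts employment
    total_emp_nation=136_000_000,     # placeholder: U.S. total employment
)
print(round(lq, 3))  # an LQ below 1.0, in the neighborhood of the 0.948 reported in the text
```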
Beyond labor costs, worker-management relations can have a big influence on an aircraft manufacturer's production location decisions, as the industry has a recent history with disruptive labor strikes. In September 2008, a 57-day work stoppage against Boeing's manufacturing facilities in Everett, Wash., and elsewhere idled approximately 27,000 of the company's workers, according to the Bureau of Labor Statistics (BLS). The stoppage was costly: A 2009 aerospace industry competitiveness study prepared by Deloitte Consulting for the Economic Development Council of Snohomish County, which is home to Boeing's Everett operations, estimated that the 2008 strike cost the company about $6.5 billion in lost revenues and $1.3 billion in lost profits. Moreover, this stoppage was the second against the company in less than five years.

South Carolina is a "right to work" state with a very low unionization rate and a history of very few work stoppages. In fact, according to the BLS, South Carolina has one of the lowest percentages of union membership in the nation (see table). Regardless of the broader advantages and disadvantages of organizing labor, or of the responsibility for previous work stoppages, the prospect of such events is clearly material to siting decisions. Even though they are not common, the historically high costs associated with work stoppages make a strong argument, from a company's perspective, to minimize those risks whenever possible. This is an area in which South Carolina possesses a clear advantage over some of the other states competing for large aircraft manufacturing operations.

Labor Union Representation (percent of workforce)
        Union members    Union-represented workers
U.S.    11.3             12.4
KS      7.4              9.0
NC      1.9              3.2
SC      2.2              3.2
TX      4.8              6.2
WA      16.8             18.4
SOURCE: Bureau of Labor Statistics, 2014

Skills, Skills, Skills

But it is not enough to have a low-cost workforce that presents a low risk of walking off the job. The aircraft manufacturing industry, and aerospace more generally, requires a highly skilled labor force. Each aircraft flying the skies today is built from highly precise parts that took years of R&D, engineering, and systems integration before they were brought to the factory floor. And the aircraft produced today are manufactured with high-tech composite materials, advanced lightweight metal alloys, and precision parts for which there is little room for error. Thus, the jobs that are created to produce aircraft are well compensated because they require specialized skills, especially in science, technology, engineering, and math, the so-called STEM skills. Ensuring a pipeline of workers with those skills will help attract more of Boeing's work, as well as build out the supplier network.

There are a variety of ways to measure a state's workforce readiness. Among the most popular in aerospace competitiveness analyses are measures of educational attainment. This is an area in which South Carolina can improve if it is going to make the most of its aerospace cluster. The challenges to the state are evident on virtually all levels of education. In terms of the percentage of the population over the age of 25 with at least a high school education, South Carolina is below the
national average as well as lower than three of the four competitor states mentioned above (only Texas has a lower percentage). The comparisons grow less favorable for the state when bachelor's degrees are added into the mix. Here again, South Carolina (at 25.0 percent) trails the national average (29.1 percent) in terms of population 25 years and older with at least a bachelor's degree, and it is lower than all four of the competitor states (see table).

Workforce Preparedness
        Percent of population 25 and older with:    Percent of bachelor's
        HS diploma          Bachelor's degree        degrees in science        APS SERI
        or greater          or higher                and engineering           Index*
U.S.    86.2                29.1                     11.8                      2.82
KS      90.1                30.5                     9.7                       3.00
NC      85.2                27.6                     10.7                      2.34
SC      84.9                25.0                     10.0                      2.20
TX      81.5                26.9                     13.5                      2.45
WA      90.2                32.1                     14.1                      2.86
NOTE: Data for the APS SERI Index are from 2011; all other data are from the 2011-2013 American Community Survey 3-Year Estimate.
SOURCE: Bureau of the Census, American Community Survey; American Physical Society
*American Physical Society Science and Engineering Readiness Index for K-12

South Carolina not only lags the national average and most of the aerospace competitor states in the broad measures of educational attainment, it also trails in some important measures of STEM-specific readiness and educational attainment. The American Physical Society (APS) compiled a Science and Engineering Readiness Index, or SERI, to measure states' K-12 progress in preparing students for careers in the physical sciences and engineering using standardized eighth grade science and math test scores, as well as a teachers' qualification score and other measures. Once again, South Carolina fell below the national average, and its SERI score was lower than each of the four competitor states.

Thus, it should come as no surprise that the state's averages for STEM-related higher educational attainment measures fall short of the national average. According to the Census Bureau's 2011-2013 American Community Survey, the percentage of total degrees awarded by South Carolina for science and engineering is below the national average and below three of the four competitor states mentioned above. With the preponderance of data showing South Carolina lagging key states (and the national average) in important measures of educational attainment, this appears to be the obvious area where the state can focus its efforts to maximize the impact of Boeing's location decision.

Conclusion

Boeing's decision to locate its 787 final assembly plant and delivery center in South Carolina has been a boon to the state's economy and has presented it with a unique opportunity. Rarely do regions get the type of kick-start to an industry cluster that South Carolina received. For all practical purposes, the 787 program created an aerospace product and parts manufacturing cluster in South Carolina where none had existed previously.

Still, it is unlikely that the cluster will reach the concentration that it has attained in areas like Seattle because the industry's production process has changed dramatically over the past decade. Whereas once large civilian aircraft were built virtually from the ground up employing a very short supply chain, much of it sourced from within the region, the 787 is assembled from parts and subassemblies that have been produced around the globe, which has diluted the program's potential impact. So South Carolina is competing against regions near and far to bring more of the parts and subassemblies to the area.

By its mere presence, the final assembly plant puts the state in contention for more of the work associated with the program. Indeed, since the decision to locate the final assembly plant in North Charleston, Boeing announced that it would make further investments in the area, adding a new interiors parts manufacturing facility on its campus. But while Boeing continues to increase its investment in the state, and aerospace manufacturing employment has taken off, the number of firms in the industry has grown only slowly. The number of establishments in aerospace product and parts manufacturing increased threefold between 2005 and 2009, but it has been flat since, suggesting that virtually all of the jobs in the cluster are being created by very few firms (see chart).

[Chart: Aerospace Manufacturing Establishments in South Carolina, 2005-2013. SOURCE: Bureau of Labor Statistics]

Diversifying the aerospace manufacturing cluster's employment base, building out the supply chain, and enticing ancillary firms to locate or expand in the area will require a highly skilled workforce. South Carolina would do well to build on its current competitive advantages by focusing more attention on closing the skills gaps with its primary competitor states. EF

State Data, Q2:14
                                                  DC        MD        NC        SC        VA        WV
Nonfarm Employment (000s)                         752.2     2,617.8   4,133.3   1,944.5   3,775.1   766.1
  Q/Q Percent Change                              0.3       0.6       0.9       0.8       0.5       0.7
  Y/Y Percent Change                              0.7       1.0       2.2       2.7       0.5       0.0
Manufacturing Employment (000s)                   1.0       103.6     447.4     230.1     231.8     47.9
  Q/Q Percent Change                              0.0       -0.2      0.4       1.4       0.2       0.0
  Y/Y Percent Change                              0.0       -2.4      1.0       2.6       0.5       -0.9
Professional/Business Services Employment (000s)  157.1     424.0     569.4     254.8     680.1     66.3
  Q/Q Percent Change                              0.8       1.1       2.3       3.6       0.9       1.9
  Y/Y Percent Change                              1.2       2.0       4.5       5.7       -0.1      2.3
Government Employment (000s)                      235.0     504.0     716.5     355.7     705.4     156.1
  Q/Q Percent Change                              -0.2      0.3       0.3       0.4       0.2       2.8
  Y/Y Percent Change                              -2.7      -0.2      0.3       0.5       -0.7      1.5
Civilian Labor Force (000s)                       374.1     3,101.5   4,631.3   2,179.7   4,241.7   790.7
  Q/Q Percent Change                              0.6       0.0       -0.1      0.4       -0.1      -0.5
  Y/Y Percent Change                              0.1       -0.6      -0.7      0.3       0.4       -0.9
Unemployment Rate (%)                             7.8       5.8       6.3       6.2       5.2       6.7
  Q1:14                                           7.8       6.0       6.5       6.2       5.3       6.8
  Q2:13                                           8.6       6.6       8.1       7.8       5.6       6.6
Real Personal Income ($Bil)                       46.3      302.3     360.3     163.7     379.5     62.2
  Q/Q Percent Change                              0.7       0.8       0.6       1.4       0.5       1.0
  Y/Y Percent Change                              1.7       1.4       1.6       3.0       0.7       1.0
Building Permits                                  191       3,926     12,392    6,693     7,729     514
  Q/Q Percent Change                              -84.3     8.9       12.4      -3.5      23.0      36.3
  Y/Y Percent Change                              -78.2     -22.9     -9.6      6.9       -6.4      -34.6
House Price Index (1980=100)                      698.7     424.7     311.4     314.4     411.8     222.8
  Q/Q Percent Change                              3.2       1.8       1.6       2.1       2.1       0.9
  Y/Y Percent Change                              11.4      3.5       3.0       3.0       3.1       2.7

[Charts: Nonfarm Employment, Unemployment Rate, and Real Personal Income (change from prior year); Nonfarm Employment and Unemployment Rate for major metro areas (Charlotte, Baltimore, Washington); Building Permits and House Prices (change from prior year); FRB-Richmond Services Revenues Index; FRB-Richmond Manufacturing Composite Index. All series cover second quarter 2003 through second quarter 2014, Fifth District and United States where applicable.]
NOTES:
1) FRB-Richmond survey indexes are diffusion indexes representing the percentage of responding firms reporting increase minus the percentage reporting decrease. The manufacturing composite index is a weighted average of the shipments, new orders, and employment indexes.
2) Building permits and house prices are not seasonally adjusted; all other series are seasonally adjusted.

SOURCES:
Real Personal Income: Bureau of Economic Analysis/Haver Analytics.
Unemployment Rate: LAUS Program, Bureau of Labor Statistics, U.S. Department of Labor, http://stats.bls.gov.
Employment: CES Survey, Bureau of Labor Statistics, U.S. Department of Labor, http://stats.bls.gov.
Building Permits: U.S. Census Bureau, http://www.census.gov.
House Prices: Federal Housing Finance Agency, http://www.fhfa.gov.

For more information, contact Jamie Feik at (804) 697-8927 or e-mail Jamie.Feik@rich.frb.org

Metropolitan Area Data, Q2:14

                            Washington, DC    Baltimore, MD    Hagerstown-Martinsburg, MD-WV
Nonfarm Employment (000s)   2,541.1           1,351.3          103.0
  Q/Q Percent Change        1.9               2.9              1.2
  Y/Y Percent Change        0.6               1.1              0.2
Unemployment Rate (%)       4.9               6.0              6.5
  Q1:14                     4.8               6.0              6.4
  Q2:13                     5.5               7.0              7.2
Building Permits            5,340             1,727            207
  Q/Q Percent Change        -19.2             42.8             33.5
  Y/Y Percent Change        -21.8             -17.1            -16.5

                            Asheville, NC     Charlotte, NC    Durham, NC
Nonfarm Employment (000s)   177.5             1,065.9          291.4
  Q/Q Percent Change        2.7               2.2              1.6
  Y/Y Percent Change        1.4               3.7              2.5
Unemployment Rate (%)       4.9               6.3              5.0
  Q1:14                     4.8               6.5              5.0
  Q2:13                     6.5               8.4              6.4
Building Permits            397               3,591            816
  Q/Q Percent Change        36.4              -1.8             29.1
  Y/Y Percent Change        -7.0              -0.2             -19.3

                            Greensboro-High Point, NC    Raleigh, NC    Wilmington, NC
Nonfarm Employment (000s)   348.8             557.6            116.0
  Q/Q Percent Change        1.6               2.3              3.7
  Y/Y Percent Change        0.9               3.9              3.1
Unemployment Rate (%)       6.7               5.2              6.6
  Q1:14                     6.8               5.2              6.6
  Q2:13                     8.7               6.6              8.6
Building Permits            556               2,947            629
  Q/Q Percent Change        27.5              15.2             9.6
  Y/Y Percent Change        0.2               -15.2            -31.8

                            Winston-Salem, NC    Charleston, SC    Columbia, SC
Nonfarm Employment (000s)   253.4             323.5            373.6
  Q/Q Percent Change        1.8               3.1              1.4
  Y/Y Percent Change        1.5               3.4              2.9
Unemployment Rate (%)       6.0               4.7              5.0
  Q1:14                     6.1               5.1              5.4
  Q2:13                     7.7               6.5              6.9
Building Permits            548               1,282            1,028
  Q/Q Percent Change        84.5              -40.1            9.1
  Y/Y Percent Change        62.6              -0.2             -9.8

                            Greenville, SC    Richmond, VA     Roanoke, VA
Nonfarm Employment (000s)   389.6             632.8            160.8
  Q/Q Percent Change        2.1               2.2              1.8
  Y/Y Percent Change        2.9               1.7              1.3
Unemployment Rate (%)       4.5               5.4              5.3
  Q1:14                     4.8               5.3              5.2
  Q2:13                     6.5               6.0              5.9
Building Permits            1,456             1,429            139
  Q/Q Percent Change        57.9              82.0             117.2
  Y/Y Percent Change        88.8              -3.8             -49.8
                            Virginia Beach-Norfolk, VA    Charleston, WV    Huntington, WV
Nonfarm Employment (000s)   757.6             124.5            141.1
  Q/Q Percent Change        2.4               2.2              2.2
  Y/Y Percent Change        0.3               -0.5             0.5
Unemployment Rate (%)       5.6               5.9              6.3
  Q1:14                     5.5               5.7              6.5
  Q2:13                     6.1               6.0              7.1
Building Permits            1,596             5                37
  Q/Q Percent Change        60.7              150.0            -14.0
  Y/Y Percent Change        4.7               -89.4            184.6

For more information, contact Jamie Feik at (804) 697-8927 or e-mail Jamie.Feik@rich.frb.org

OPINION

A New Payments Role for the Fed?
BY JOHN A. WEINBERG

Some have argued that the U.S. payment system has not kept up to date with technology and that it is, as a result, too slow and too expensive. A white paper published by the Fed in January advocates adding a new option to the payment system: a real-time electronic payment mechanism that would enable consumers and businesses to make payments instantly, cheaply, and safely. The white paper suggests a collaborative effort to create such an option, with a leadership role for the Fed itself.

The potential role described for the Fed — initially that of "leader/catalyst," possibly later "service provider" — raises interesting questions. Historically, the adoption of new payment technology in the United States has been driven primarily by market forces. The role of the Fed in payments has been focused on payments among banks and, to a lesser degree, other financial institutions. Since both economic theory and our own experience tell us that private competition usually brings about the most efficient provision of goods and services, what is the basis for the Fed to take the lead in establishing a real-time payment network? What is the market failure that weighs against relying on market forces in this setting?

One possible objection to a purely market-based approach is the potential for monopoly, which arises because of economies of scale or the network aspects of a payment service. But if monopoly power is a concern, regulatory actions to maintain a competitive market are a more modest form of intervention. And with the rise of smartphones and other such devices, barriers to entry into the payments market may be declining. Still, much of the entry we've seen has been on the "front end," bringing new ways of interfacing at the point of sale. By contrast, the processes of clearing and settlement may continue to have the network characteristics that tend to favor small numbers of large providers.

An alternative objection, one raised by proponents of a greater Fed role, is just the opposite — namely, in the words of the white paper, the risk of "further fragmentation of payment services." But fragmentation is what we would expect to see in the early years of a relatively new market, such as electronic real-time payments. Even in the longer term, some degree of fragmentation may be desirable both to maintain competitive pressure and to avoid a payments monoculture that would render the entire payment system vulnerable if it were successfully hacked. To the extent that there is value in having a small number of platforms for payment services, it is reasonable to assume that competitive forces and network effects will lead to appropriate consolidation of the industry without the intervention of public policy.

These opposing concerns do suggest that there may be a tension in payment services between competition and cooperation.
Ultimately, the establishment of standards that can make for efficient, broadly available payment services can require some coordination among a range of market participants. And as a significant participant in payments clearing and settlement, the Fed has a role to play in this coordination. But I am skeptical of the existence of market failures that would justify the Fed creating and providing a new payment system; there is good reason to believe markets can efficiently provide this service. To be sure, the Fed has had a longtime role as a provider of payment services, for instance in the check-clearing system. But as Jeffrey Lacker, Jeffrey Walker, and I documented in a 1999 article in Economic Quarterly, the Fed’s entry into check clearing was primarily based on a desire to increase bank membership in the Fed — not on any insurmountable deficiency in the private, decentralized system of check clearing. In particular, the Fed used its legal privileges in the market for check clearing to reallocate common costs in a way that made Fed membership more attractive. Not only is it unnecessary for the Fed or another public institution to drive the development of a real-time payment system, there is a risk that it could lead to inefficient outcomes. Payment networks are, in large part, communication networks; as with other communication networks, much of the costs of payment networks are common costs that must be allocated among participants. In a payment network that is public or is effectively a public-private partnership, cost allocation can be driven as much by political concerns as by economic forces. Much as the Fed sought to use its power to allocate the costs of check clearing to induce banks to join, the leadership of a public or public-private payment network will have an incentive to allocate costs in the interest of one or more groups of constituents. Without a doubt, new technologies present an opportunity to improve the speed, cost, and security of our payment system. The Fed can play a valuable role by carrying out research, among other activities. But there does not seem to be a market failure for the Fed to solve by taking an organizing or operational role. Unless such a failure can be demonstrated, those roles are best left to private institutions and private consortia. As the Fed deliberates whether and how to proceed with a new electronic payments option in the interest of efficiency, it will be important to bear in mind the risk that in some circumstances, the politics of cost allocation may drive decisionmaking more than efficiency. EF John A. Weinberg is senior vice president and special advisor to the president at the Federal Reserve Bank of Richmond.  NEXTISSUE Does Marriage Matter?  Many policymakers are concerned about the growing “marriage gap”: Affluent, college-educated people are more likely to get married and less likely to get divorced, and their children are more likely to be affluent and college educated themselves. To what extent does marriage affect the well-being of American families? Are policies that support marriage a viable way to reduce income inequality?  The Secession Question  Last year’s referendum on Scottish independence from the United Kingdom garnered widespread attention, and the episode invigorated secessionist movements around the world. Why do some regions seek separation from their home nation? There are many reasons, but recently economic factors have been a key consideration.  
Crop Insurance  Subsidized crop insurance has grown to become the biggest assistance program in U.S. agriculture. Supporters argue that it helps farmers manage risk and reduces the need for disaster relief. But some economists say it is more of an income-transfer tool than an insurance policy.  Federal Reserve In the fall of 1910, a small group of prominent politicians and bankers held a secret meeting on Jekyll Island, off the coast of Georgia. After 10 long days, they emerged with a plan for a new U.S. central bank — the starting point of the Federal Reserve.  Economic History Today, many call for an overhaul of mortgage giants Fannie Mae and Freddie Mac. The last time Congress tried to reform them — in the early 1990s — may hold lessons.  Interview Campbell Harvey of Duke University on Bitcoin, the risk tolerance of CEOs, and the differences for financial economists between working in the private sector and academia.  Visit us online: www.richmondfed.org •	To view each issue’s articles and Web-exclusive content •	 To view related Web links of 	 additional readings and 	 references •	To subscribe to our magazine •	To request an email alert of our online issue postings  Federal Reserve Bank of Richmond P.O. Box 27622 Richmond, VA 23261  Change Service Requested  To subscribe or make subscription changes, please email us at research.publications@rich.frb.org or call 800-322-0565.  Don’t miss the latest issue of  Economic Quarterly  An online publication containing articles about monetary theory and policy, banking and finance, and the payment system This issue features the following articles:  “Flows To and From ‘Working Part Time for Economic Reasons’ and the Labor Market Aggregates During and After the 2007–09 Recession” by Maria E. Canon, Marianna Kudlyak, Guannan Luo, and Marisa Reed  “Large U.S. Bank Holding Companies During the 2007–09 Financial Crisis: An Overview of the Data” by Peter S. Debbaut and Huberto M. Ennis  “The Real Bills Views of the Founders of the Fed” by Robert L. Hetzel  For this and other issues of Economic Quarterly, visit www.richmondfed.org/publications/research/economic_quarterly/index.cfm