Economic Report of the President
Together with The Annual Report of the Council of Economic Advisers
March 2019

Contents

Economic Report of the President ..... 1
The Annual Report of the Council of Economic Advisers* ..... 7
Introduction ..... 13
Chapter 1. Evaluating the Effects of the Tax Cuts and Jobs Act ..... 35
Chapter 2. Deregulation: Reducing the Burden of Regulatory Costs ..... 77
Chapter 3. Expanding Labor Force Opportunities for Every American ..... 139
Chapter 4. Enabling Choice and Competition in Healthcare Markets ..... 195
Chapter 5. Unleashing the Power of American Energy ..... 247
Chapter 6. Ensuring a Balanced Financial Regulatory Landscape ..... 297
Chapter 7. Adapting to Technological Change with Artificial Intelligence while Mitigating Cyber Threats ..... 339
Chapter 8. Markets versus Socialism ..... 381
Chapter 9. Reducing Poverty and Increasing Self-Sufficiency in America ..... 427
Chapter 10. The Year in Review and the Years Ahead ..... 485
References ..... 533
Appendix A. Report to the President on the Activities of the Council of Economic Advisers During 2018 ..... 613
Appendix B. Statistical Tables Relating to Income, Employment, and Production ..... 625

*For a detailed table of contents of the Council’s Report, see page 23.

Economic Report of the President

To the Congress of the United States:

For the past two years, my Administration has been focused on strengthening the United States economy to enable greater opportunity and prosperity for all Americans. During my first year in office, we began by building a foundation of pro-growth policies. We initiated sweeping regulatory reform—issuing 22 deregulatory actions for every new one added—and signed into law the Tax Cuts and Jobs Act, the biggest package of tax cuts and tax reform in our country’s history. Consumer and business confidence skyrocketed as we reversed incentives that had driven away businesses, investment, and jobs for many years. With these cornerstones of a robust economy in place, we restored enthusiasm for doing business in America.

This has achieved enormously positive results for American workers and families. The United States economy has created 5.3 million jobs since I was elected to office. Wage growth continued in 2018, with the lowest-earning workers experiencing the strongest gains. By the fourth quarter of 2018, real disposable personal income per household was up more than $2,200 from the end of 2017. The national unemployment rate reached a nearly 50-year low of 3.7 percent in September 2018, hovering at or below 4 percent for 11 consecutive months—the longest streak in nearly five decades. Opportunity is expanding so fast that there are more job openings in our economy than there are current job seekers. These positions will be filled as more Americans join the labor force or rejoin it after years of discouragement and pessimism. In January 2019, more than 70 percent of workers entering employment were previously out of the labor force, and the labor force participation rate reached 63.2 percent—the highest since 2013.
For the second consecutive year, economic growth has either matched or surpassed my Administration’s forecast, and the economy has grown at a 3.1 percent rate over the last four quarters. This progress is remarkable. It is a victory for all Americans now benefiting from a strengthened economy. But the greatest triumph of all is this: we have created an era of opportunity in which Americans left behind by previous Administrations are finally catching up and even getting ahead.

An Economic Agenda for the Success of Every American

An economic agenda that enables struggling Americans to succeed begins with the creation of opportunities. Years of misguided policies, however, diminished opportunity, disregarded the importance of American workers for our country’s success, and turned millions of our hard-working citizens into collateral damage. On a massive scale, jobs were lost as unfair trade deals gutted American manufacturing and a backward tax code drove away businesses and investment. The American people suffered the consequences of past leaders’ unalloyed aspirations for global trade, which enriched other nations and impoverished our working families, as we increasingly imported goods formerly made here by American workers.

Those seeking hope from Washington received dismissive explanations. They were told that low growth and meager opportunity were the “new normal”—that nothing could be done to stop the damage. Meanwhile, economic hardship derailed families and communities: hopelessness deepened, and drug abuse and other maladies spread. Our country could not achieve its highest economic potential with a workforce hollowed out by the mistaken policies of the past—policies that treated our citizens as an afterthought, hurt our most vulnerable workers, and crippled our economy.
Over the past two years, my Administration has implemented a pro-growth policy agenda that puts Americans first and creates conditions that enable all our citizens to succeed. By strengthening the United States economy, we have empowered many groups that historically have had a harder time getting ahead. Unemployment among those without a high school degree is the lowest in nearly 30 years. In the past year, the unemployment rate among women fell to 3.3 percent, matching its lowest rate since 1953. Teenage unemployment reached its lowest rate in nearly 50 years. My Administration has presided over the lowest unemployment rates for people with disabilities on record. Poverty rates for both black Americans and Hispanic Americans reached record lows in 2017. Homelessness among veterans fell by 5.4 percent in the past year. The bottom 10 percent of earners are experiencing the highest wage growth on record, and we have lifted nearly 5 million Americans off food stamps since my election.

Revitalized American manufacturing—something once thought impossible—has restored opportunities for American blue-collar workers. In the first two years of my Administration, we have created manufacturing jobs at six times the pace of the previous Administration’s last two years, for a total of nearly half a million jobs. Blue-collar workers, on average, are on track to see almost $2,500 more in annual wages.

The success of America’s workers is essential to the success of our country. We will continue to prioritize workforce development in the years ahead, and we will keep fighting on behalf of all Americans seeking opportunities to contribute. In establishing the National Council for the American Worker, my Administration is emphasizing the importance of results-driven job training and reskilling programs; we must equip our students and workers with competitive skills adapted to our rapidly changing economy.
This initiative has already secured commitments from the private sector to invest in over 6.5 million retraining opportunities.

An economic agenda that lifts all Americans must also address the destructive effects that over-incarceration has on our families and our communities. With the enactment of the First Step Act of 2018, we have achieved a bipartisan victory for criminal justice reform. The First Step Act modifies sentencing for less serious crimes and prioritizes rehabilitation to enable former prisoners to reenter society as productive, law-abiding citizens. Well-designed prison programs that help bring families together and give reformed prisoners the tools to find work are crucial for reducing the costs of crime and our over-incarceration.

Finally, we remain committed to encouraging self-sufficiency and advocating for work as the best way to foster human dignity and escape poverty. In our strengthened economy, long-awaited job opportunities have become available to millions of Americans who are eager to support themselves. Although help must be accessible to those who are struggling, expanding work requirements can further reduce both poverty and dependency among those able to work. Over half of all nondisabled, working-age adults receiving food stamps are not working. By finding ways to put their talents to productive use, we would both enrich our society and help them live more fulfilling lives. My Administration values the capabilities of all Americans, and we will continue to implement a pro-growth, pro-opportunity agenda that puts self-sufficiency within reach.

Investing in Innovation and the Future of American Greatness

To maintain economic momentum and expand opportunity in our Nation, we will continue to champion American innovation and entrepreneurship. Smart deregulation and technological advances have unleashed American energy dominance and made American energy the way of the future.
The United States is now the world’s single largest producer of crude oil and natural gas. Our strength in the energy sector has invigorated our economy, created jobs, and reduced our dependence on energy from countries that do not share our values.

The instinct to invent and create has driven America forward since its founding and has enabled our country to export ideas that have rapidly improved the world. To do right by our researchers and inventors, we must hold foreign nations to account for stealing our intellectual property and forcing technology transfers. To do right by American taxpayers and consumers, we must continue fighting for lower pharmaceutical drug prices and end global free-riding on Americans’ transformative research. And to bolster growth, we must continue to unleash the power of possibility by revolutionizing our Nation’s technological capabilities within the industries of the future, including artificial intelligence, advanced manufacturing, and 5G technology.

By reducing the costs and confines of oppressive, growth-killing regulation, we have improved the ability of American entrepreneurs to start and expand their businesses. Many aspiring entrepreneurs, however, live in areas of our country that are starved of the capital that entices business investment and creates jobs. The Investing in Opportunity Act, part of our historic tax reform law, is addressing this problem. It is using tax incentives to draw investment into Opportunity Zones, areas struggling with higher unemployment and poverty. These areas are experiencing increases in commercial real estate transactions, as investors seize on the potential for Opportunity Zones to reignite the American Dream for those who have been left behind.

Our dedication to investing in a brighter future must be paired with a commitment to fixing past mistakes.
We have made significant strides to reverse the damage of trade policies that harmed our country for many years. We renegotiated the destructive North American Free Trade Agreement and reached a new agreement, the United States–Mexico–Canada Agreement. We also negotiated a revised United States–Korea Free Trade Agreement. At the time of this Report’s publication, we are conducting negotiations with China, the European Union, and Japan. In addition, we intend to begin negotiations with the United Kingdom as soon as it leaves the European Union. With these historic achievements, we have begun an era of trade policy that finally puts the interests of the United States and our hard-working families first.

To improve the welfare of our Nation and its citizens, we are redoubling our efforts to fix an immigration system that has been broken for decades. The chaos at our Southern Border comes at an intolerable cost to American citizens, who deserve peaceful, prosperous communities. We cannot tolerate the crime, drug smuggling, illegal entry, and human trafficking enabled by a porous border. The current system that allows dangerous gang members into our society, strains public services, and rewards those who ignore our laws over those who respect our citizenship process is simply unsustainable for our Nation. We must have an orderly immigration system that honors United States citizenship as the unrivaled privilege we all know it to be.

As shown in the Report that follows, we are ushering in an era of renewed dedication to our citizens. It is my great honor to champion the American people and to make their success and well-being my top priority. This pro-growth, pro-opportunity agenda celebrates the irreplaceable value of America’s working families and embraces the extraordinary possibilities for American ingenuity to improve the human condition. It is an economic agenda that lays the foundation for the future of American greatness.
The White House
March 2019

The Annual Report of the Council of Economic Advisers

Letter of Transmittal

Council of Economic Advisers
Washington, March 19, 2019

Mr. President:

The Council of Economic Advisers herewith submits its 2019 Annual Report in accordance with the Employment Act of 1946, as amended by the Full Employment and Balanced Growth Act of 1978.

Sincerely yours,

Kevin A. Hassett, Chairman
Richard V. Burkhauser, Member
Tomas J. Philipson, Member

Council of Economic Advisers
Washington, March 19, 2019

Mr. President:

In the 10 chapters that constitute this Report, the Council of Economic Advisers provides a detailed account of the U.S. economy in 2018 and offers analysis of the Administration’s economic policy agenda for the years ahead. In preparing the Economic Report of the President, the Council strives to incorporate the most recent data available at the time of the Report’s statutorily mandated transmittal to Congress, and to ensure through internal processes that our analysis of these data adheres to the strictest standards of verification and replication.

Because data releases were delayed by a partial government shutdown from December 22, 2018, to January 25, 2019, it was not possible for the Council to incorporate preliminary estimates of gross domestic product and of personal income and outlays for the fourth quarter of 2018 while upholding our replication procedures and the production schedule required to comply with the statute. However, I am pleased to report in this letter that the data confirm and reinforce the findings of this Report and do not materially alter its conclusions.

Sincerely yours,

Kevin A. Hassett, Chairman

Introduction

In accordance with the Employment Act of 1946, the purpose of this Report is to provide the U.S.
Congress with “timely and authoritative information concerning economic developments and economic trends” for the preceding year and, prospectively, for the years ahead. As required by the Employment Act, the Report also sets forth the Administration’s program for achieving the chartered purpose of:

Creating and maintaining, in a manner calculated to foster and promote free competitive enterprise and the general welfare, conditions under which there will be afforded useful employment opportunities, including self-employment, for those able, willing, and seeking to work, and to promote maximum employment, production, and purchasing power (79th U.S. Congress, 1946).

In the 10 chapters that constitute this Report, we present evidence that the Trump Administration’s policy actions and priorities are thus far delivering economic results consistent with the 1946 mandate. For the second consecutive year, the U.S. economy outperformed expectations and broke from recent trends by a substantial margin. In June 2017, the Congressional Budget Office projected that during the four quarters of 2018, real gross domestic product (GDP) would grow by 2.0 percent, the unemployment rate would decline by 0.1 percentage point, to 4.2 percent, and employment growth would average 107,000 jobs per month. Instead, real GDP in the first three quarters of 2018 grew at a compound annual rate of 3.2 percent—above the Trump Administration’s own fourth quarter–over–fourth quarter forecast for the second successive year—the unemployment rate declined by 0.4 percentage point, to a near-50-year low of 3.7 percent, and employment growth averaged 223,000 jobs per month. Growth in labor productivity, which averaged just 1.0 percent between 2009:Q3 and 2016:Q4, doubled to 2.0 percent in 2018. Capital expenditures by nonfinancial businesses rose 13.9 percent at a compound annual rate through 2018:Q3.
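The growth figures above are quoted at compound annual rates built from quarterly data. As a rough sketch of that arithmetic, a quarterly growth rate q annualizes to (1 + q)^4 - 1; the input value below is illustrative, not an official estimate:

```python
# Rough sketch of compound annual rate arithmetic, as used for the GDP
# statistics cited above. The input value is illustrative, not official data.

def annualize_quarterly(quarterly_growth: float) -> float:
    """Convert one quarter's growth rate to a compound annual rate."""
    return (1.0 + quarterly_growth) ** 4 - 1.0

def compound_annual_rate(start_level: float, end_level: float, years: float) -> float:
    """Compound annual growth rate implied by two levels `years` apart."""
    return (end_level / start_level) ** (1.0 / years) - 1.0

# A 0.79 percent quarterly gain compounds to about 3.2 percent at an annual rate.
print(f"{annualize_quarterly(0.0079):.1%}")
```

Compounding, rather than simply multiplying the quarterly rate by four, is the convention in the national accounts, which is why small quarterly gains translate into slightly larger annualized figures.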
Figures I-1 through I-4 show that the strong economic performance in 2017 and 2018 was not merely a continuation of trends already under way during the postrecession expansion, but rather constituted a distinct break from the previous pace of economic and employment growth since the start of the current expansion in 2009:Q3. The figures depict observed outcomes before (blue) and after (red) the election, with the dotted lines representing the projected trend estimated on the basis of preelection data. Consistent with conclusions in the 2018 Economic Report of the President, investment, manufacturing employment, worker compensation, and new startups have all risen sharply in the two years since the 2016 election.

[Figure I-1. Real Private Nonresidential Fixed Investment, 2012–18]
[Figure I-2. Durable Goods Manufacturing Employment, 2012–18]
[Figure I-3. New Business Applications, 2012–18]
[Figure I-4. Average Weekly Earnings of Goods-Producing Employees, 2012–18]
Sources: Bureau of Economic Analysis; Bureau of Labor Statistics; U.S. Census Bureau; CEA calculations.
Note: All trends are estimated over a sample period covering the entire preelection expansion, from 2009:Q3 (July 2009) to 2016:Q4 (November 2016). Figure I-4 represents the average weekly earnings of goods-producing production and nonsupervisory employees in nominal dollars. Trends are estimated as compound annual growth rates, with levels reconstructed from the projected rates.

In addition, overall economic output by the third quarter of 2018 was $250 billion, or 1.3 percent, larger than projected by the 2009:Q3–2016:Q4 trend, with the compound annual growth rate up 1.2 percentage points over trend. Higher output growth was driven by a marked rise in real private investment in fixed assets, which was 10.6 percent over the projected trend as of the third quarter. In the first three quarters of 2018, the contribution of real private nonresidential fixed investment to GDP growth rose from 0.6 percentage point, the average of the preceding expansion, to 1.0 percentage point, while investment as a share of GDP rose to its second-highest level for any calendar year since 2001. Real private nonresidential fixed investment by nonfinancial businesses rose 8.3 percent at a compound annual rate through 2018:Q3, climbing to a level 14.7 percent above that projected by the 2009:Q3–2016:Q4 trend. As of December 2018, average nominal weekly earnings of goods-producing production and nonsupervisory workers had risen $2,300 above trend on an annualized basis.

In the chapters that follow, we demonstrate that these departures from the recent trend are not accidental but rather reflect the Trump Administration’s deliberate measures to create and maintain conditions under which the U.S. economy can achieve maximum employment, production, and purchasing power.
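The trend methodology described in the Note, fitting a compound growth rate to the preelection sample and reconstructing projected levels from that rate, can be sketched as follows. All numbers here are hypothetical placeholders, not the report's actual series:

```python
# Hypothetical sketch of the trend extrapolation described in the Note above:
# fit a compound annual growth rate to the preelection sample, project the
# level forward, and measure the gap between observed and projected levels.
# All values below are illustrative placeholders, not the report's data.

def cagr(start_level: float, end_level: float, years: float) -> float:
    """Compound annual growth rate between two levels `years` apart."""
    return (end_level / start_level) ** (1.0 / years) - 1.0

def project(level: float, rate: float, years: float) -> float:
    """Level implied by compounding `rate` forward `years` from `level`."""
    return level * (1.0 + rate) ** years

# Preelection sample: 2009:Q3 to 2016:Q4 spans 7.25 years.
trend_rate = cagr(14_400.0, 16_900.0, 7.25)        # $billions, hypothetical
projected = project(16_900.0, trend_rate, 1.75)    # extrapolate to 2018:Q3
observed = 17_700.0                                # hypothetical outcome
gap = 100.0 * (observed - projected) / projected   # percent above trend
print(f"trend rate {trend_rate:.2%}, observed level {gap:.1f}% above trend")
```

The same recipe applies whether the series is output, investment, or earnings: the preelection sample pins down the rate, and the "above trend" figures compare realized levels against the compounded projection.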
Specifically, a unifying theme throughout this Report is that these conditions are generally achieved by providing maximum scope for the efficiency of free enterprise and competitive market mechanisms, and ensuring that these mechanisms are operative in both domestic and global markets.

Beginning with chapter 1, “Evaluating the Effects of the Tax Cuts and Jobs Act,” we use currently available data to examine the Tax Cuts and Jobs Act’s (TCJA’s) anticipated and observed effects, with particular attention to the relative velocities of adjustment along each economic margin. We find that by lowering the cost of capital, the TCJA had an instant and large effect on business expectations, with firms immediately responding to the TCJA by upwardly revising planned capital expenditures, employee compensation, and hiring. We also observe revised capital plans translating into higher capital expenditures and real private investment in fixed assets, with nonresidential investment in equipment, structures, and intellectual property products growing at a weighted average annual rate of about 8 percent from 2017:Q4 through 2018:Q3, climbing to a level $150 billion above the pre-TCJA expansion trend of 2009:Q3 through 2017:Q4. (Equipment investment trends are calculated through 2017:Q3, because the TCJA’s allowance of full expensing of new equipment investment was retroactive to September 2017.) In addition to tallying more than 6 million workers receiving bonuses directly attributed to the TCJA, with an average bonus size of $1,200, we also estimate that real disposable personal income per household rose $640 above trend by the third quarter of 2018, or 16 percent of the CEA’s estimated long-run effect of $4,000 per household. In real terms, median usual weekly earnings of all full-time wage and salary workers were $805 above trend on an annualized basis. We also report evidence of a reorientation of U.S.
investment from direct investment abroad to investment in the United States, as the TCJA attenuated incentives to shift productive assets and profits to lower-tax jurisdictions. Specifically, in the first three quarters after the TCJA’s enactment, U.S. direct investment abroad declined by $148 billion, while direct investment in eight identified tax havens declined by $200 billion. In the first three quarters of 2018, U.S. firms repatriated almost $600 billion in overseas earnings.

Based on extensive evidence from a large body of corporate finance literature, we conclude that shareholder distributions through share repurchases are an important margin of adjustment to a simultaneous positive shock to cash flow and investment, constituting the primary mechanism whereby efficient capital markets reallocate capital from mature, cash-abundant firms without profitable investment opportunities to emerging, cash-constrained firms with profitable investment opportunities.

In chapter 2, “Reducing the Burden of Regulatory Costs,” we examine the Administration’s important deregulatory efforts, which have also led to improved performance over the previous two years. We develop a framework to analyze the cumulative economic impact of regulatory actions on the U.S. economy. As the first Administration to use regulatory cost caps to reduce the cumulative burden of Federal regulation, the Trump Administration in 2017 and 2018 issued more deregulatory actions than regulatory actions and reversed the long-standing trend of rising regulatory costs. By raising the cost of conducting business, regulation can prevent valuable business and consumer activities. More important, however, we also stress that regulations in one industry affect not only the regulated industry or sector but also the economy as a whole.
We find that this implies that official measures understate regulatory costs, and therefore also understate the regulatory cost savings of the Trump Administration’s regulatory reforms, because they do not account for relevant opportunity costs, especially those accruing outside the regulated industry. The official data show that from 2000 through 2016, the annual trend was for regulatory costs to grow by an average of $8.2 billion each year. In contrast, in 2017 and 2018 Federal agencies took deregulatory actions that resulted in cost savings that more than offset the costs of new regulatory actions. The official data show that in fiscal year 2017, the deregulatory actions saved $0.6 billion in annualized regulatory costs (with a net present value of $8.1 billion); and in fiscal year 2018, the deregulatory actions saved $1.4 billion in annualized regulatory costs (with a net present value of $23 billion). Looking at just three important deregulatory case studies, the CEA calculates that the three actions will reduce annual regulatory costs by an additional $27 billion.

Chapter 3, “Expanding Labor Force Opportunities for Every American,” discusses the dramatic effect the revival of the economy has had on labor markets. Consistent with the robust pace of economic growth in the United States, the labor market is the strongest that it has been in decades, with an unemployment rate that remained under 4 percent for much of 2018. Employment is expanding, and wages are rising at their fastest pace since 2009. Whenever both quantity and price go up in a market, this must be partly driven by a rise in demand. This suggests that an important change in the labor market has been an increase in the demand for labor, potentially induced by a supply-side expansion enabled by tax reform and deregulation.
Although the low unemployment rate is a signal of a strong labor market, there is a question as to whether the rapid pace of hiring can continue and whether there are a sufficient number of remaining potential workers to support continued economic growth. This pessimistic view of the economy’s potential, however, overlooks the extent to which the share of prime-age adults who are in the labor market remains below its historical norm. As is explored in chapter 3, potential workers could be drawn back into the labor market through Administration policies designed to reduce past tax and regulatory distortions and to encourage additional people to engage in the labor market. Policies examined in this chapter that aim to increase labor force participation include reducing the costs of child care, working with the private sector to increase employer training and reskilling initiatives, and pursuing criminal justice reform to increase labor force engagement among affected communities. We also highlight the potential benefits of reducing occupational licensing, and of incentivizing investment in designated Opportunity Zones to improve economically distressed areas, as provided for in the TCJA.

In chapter 4, “Enabling Choice and Competition in Healthcare Markets,” we seek to address the 1946 mandate for this Report to analyze how to “foster and promote free and competitive enterprise” to a greater extent in the U.S. healthcare sector. We discuss the rationales commonly offered for government intervention in healthcare and explain why such interventions often, and unnecessarily, restrict choice and competition, demonstrating that the resulting government failures are frequently more costly than the market failures they attempt to correct. In light of recent public proposals to dramatically increase government intervention in healthcare markets, such as “Medicare for All,” we also analyze how these proposals eliminate or decrease choice and competition.
As a result, we find that these proposals would be inefficient and costly, and would likely reduce, rather than increase, the population’s health. Funding them would create large distortions in the economy, with the universal nature of “Medicare for All” constituting a particularly inefficient way to finance healthcare for lower- and middle-income people.

We contrast such proposals with the Trump Administration’s actions that are increasing choice and competition in healthcare. We focus on the elimination of the Affordable Care Act’s individual mandate penalty, which will enable consumers to decide for themselves what value they attach to purchasing insurance and which we project will generate $204 billion in value over 10 years. Expanding the availability of association health plans and short-term, limited-duration health plans will increase consumer choice and insurance affordability. We find that taken together, these three sets of actions will generate a value of $453 billion over the next decade. On the pharmaceutical front, the Food and Drug Administration is increasing price competition by streamlining the drug application and review process at the same time that record numbers of generic drugs are being approved, price growth is falling, and consumers have already saved $26 billion through the first year and a half of the Administration. In addition, the influx of new, brand-name drugs resulted in an estimated $43 billion in annual benefits to consumers in 2018.

Chapter 5, “Unleashing the Power of American Energy,” discusses the important role of energy markets in the new economic revival and the Administration’s goal of stimulating free market innovation to enable U.S. energy independence. Coal production stabilized in 2017 and 2018 after a period of contraction in 2015 and 2016.
The United States is now a net exporter of natural gas for the first time in 60 years, and petroleum exports are increasing at a pace that suggests positive net exports by 2020. Taking advantage of America’s abundant energy resources is a key tenet of the Trump Administration’s plan for long-term economic growth as well as national security. This is best achieved by recognizing that price incentives and the role of technological innovation—which is guided by the price incentive in a market economy like that of the United States—are critical for understanding the production of both renewable natural resources and nonrenewable natural resources like petroleum. By enabling domestic production, the Administration seeks to facilitate the evolution of the U.S. economy’s role in global markets.

Since the President took office, the U.S. fossil fuels sector has set production records. These records were driven by technological improvements, tax changes that lowered the cost of investing in mining structures, elevated global prices, and deregulatory actions that raised the expected returns of energy projects. Chapter 5 documents 65 deregulatory actions affecting the energy sector that were completed through the end of fiscal year 2018, with projected present value savings of over $5 billion.

In chapter 6, “Ensuring a Balanced Financial Regulatory Landscape,” we revisit the causes and consequences of, and responses to, the financial crisis of 2008. In particular, we identify that the absence of actuarially fair pricing of implicit government guarantees of financial institutions and markets was a major factor exacerbating the crisis.
Unfortunately, we also find that the salient legislative response to the crisis—the 2010 Dodd-Frank Act—not only failed to resolve this flaw but also excessively raised regulatory complexity, with the increased cost of compliance falling disproportionately on small and midsized financial institutions, which account for a disproportionate share of commercial and industrial lending to small and medium-sized enterprises. In addition to articulating the Administration’s approach to achieving the Seven Core Principles for financial regulation established by Executive Order 13772, chapter 6 demonstrates how the Economic Growth, Regulatory Relief, and Consumer Protection Act of 2018 released small and medium-sized banks from the more restrictive provisions of Dodd-Frank, while preserving heightened regulatory oversight of genuinely systemically important financial institutions.

Again reflecting the CEA’s 1946 mandate to evaluate “current and foreseeable trends in the levels of employment, production, and purchasing power,” chapter 7, “Adapting to Technological Change with Artificial Intelligence while Mitigating Cyber Threats,” analyzes how technological change in information technology is likely to affect future U.S. labor markets. We begin by reviewing the latest developments in artificial intelligence (AI) and automation, concluding that a narrow, static focus on possible job losses gives a misleading picture of the likely effects of AI on the Nation’s economic well-being. Technological advances may eliminate specific jobs, but they do not generally eliminate work, and over time they are likely to greatly increase real wages, national income, and prosperity. For example, technological change enabled agricultural economies that once devoted a majority of their resources to food production to feed their populations better with only a small share of the economy.
Automation can complement labor, adding to its value, and even when it substitutes for labor in certain areas, it can lead to higher employment in other types of work and raise overall economic welfare. That appears likely to be the case as AI applications diffuse through the economy, though important new challenges will arise concerning cybersecurity. Indeed, AI appears poised to automate or augment economic tasks that had long been assumed to be out of reach for automation.

Despite the economic resurgence of the past two years, there has been a rise in interest in vacating the free enterprise principles that have been instrumental to that recovery, and in turning instead to more socialized methods of production that have generally been abandoned by the countries that tried them. Consistent with the 1946 mandate for this Report, we therefore turn, in chapter 8, “Markets versus Socialism,” to reviewing the empirical evidence on the economic effects of varying degrees of socialization of productive assets and of the income those assets generate. Hayek (1945) argued that the essential role of a competitive market price mechanism is to communicate dispersed and often incomplete knowledge: firms expand and consumers contract activity when prices are high, and vice versa when prices are low, with both sides of the market thereby being guided by prices to equate demand with supply. We find that experiences of socialism that do not use prices to guide production and consumption in this way have generally been characterized by distorted incentives and failures of resource allocation—in some extreme instances, on a catastrophic scale. In addition to quantifying the human and economic costs of highly socialist systems, we also estimate the effects of more moderate degrees of socialization.
We find that even among market economies, average income and consumption are lower in those with relatively high levels of government taxes and transfers as shares of output—such as Denmark, Sweden, Norway, and Finland—than in the United States. This is because the relatively high average tax rates on middle incomes that finance this “Nordic model” also disincentivize generating income in the first place. Finally, we estimate that if recent U.S. proposals for socialized medicine in the form of “Medicare for All” were implemented and financed by higher taxes, GDP would decline by 9 percent, or about $7,000 per person, in 2022.

In chapter 9, “Reducing Poverty and Increasing Self-Sufficiency in America,” we discuss the impact of the economic revival on low-income households in particular, along with the Trump Administration’s approach to escaping poverty through economic growth and work-based public policies. President Lyndon B. Johnson declared a War on Poverty in January 1964. Using a full-income measure of poverty that is capable of capturing success in the War on Poverty, we find that poverty declined from 19.5 percent in 1963 to 2.3 percent in 2017. This far exceeds the decline from 19.5 to 12.3 percent according to the Official Poverty Measure. However, victory was not achieved by making people self-sufficient, as President Johnson envisioned, but rather through increased government transfers. A new war on poverty should seek to further reduce material hardship by modern standards, but should do so through incentives for work and self-sufficiency. We discuss the Trump Administration’s important actions along these lines: expanding work requirements for nondisabled, working-age welfare recipients in noncash welfare programs; increasing child care assistance for low-income families; and increasing the reward for working by doubling the Child Tax Credit and increasing its refundability.
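The two poverty series cited above differ starkly, and a quick back-of-the-envelope calculation makes the comparison explicit. The following is an illustrative sketch in Python; the only inputs are the four rates quoted in the text, and the derived percentage declines are our own arithmetic, not figures from the Report:

```python
# Decline in U.S. poverty, 1963-2017, under the Report's full-income
# measure versus the Official Poverty Measure (OPM).
# Rates (in percent) are taken directly from the text above.
full_income = {"1963": 19.5, "2017": 2.3}
official = {"1963": 19.5, "2017": 12.3}

for name, series in [("Full-income measure", full_income),
                     ("Official Poverty Measure", official)]:
    drop = series["1963"] - series["2017"]          # percentage-point decline
    pct = 100 * drop / series["1963"]               # proportional decline
    print(f"{name}: -{drop:.1f} points ({pct:.0f} percent decline)")
# -> Full-income measure: -17.2 points (88 percent decline)
# -> Official Poverty Measure: -7.2 points (37 percent decline)
```

On these inputs, the full-income measure implies that nearly nine-tenths of 1963 poverty was eliminated by 2017, versus just over a third under the official measure.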
Finally, in chapter 10, “The Year in Review and the Years Ahead,” we analyze important macroeconomic developments in 2018 and present the Trump Administration’s full, policy-inclusive economic forecast for the next 11 years, including risks to the forecast. Overall, assuming full implementation of the Trump Administration’s economic policy agenda, we project real U.S. economic output to grow at an average annual rate of 3.0 percent between 2018 and 2029. We expect growth to moderate from just over 3.0 percent in 2018 and 2019 as the capital-to-output ratio asymptotically approaches its new, post-business-tax-reform steady state and as the near-term effects of the individual provisions of the Tax Cuts and Jobs Act (TCJA) on the rate of growth dissipate into a permanent level effect. Partially offsetting this moderation are the expected contributions of the supply-side effects of the Trump Administration’s current and future deregulatory actions, as discussed in chapter 2; the permanent extension of the personal income tax provisions of the TCJA, as discussed in chapter 1; and the Administration’s infrastructure proposal, as analyzed in the 2018 Economic Report of the President. In chapter 10, we also explore potential downside risks to the forecast, including nonimplementation or repeal of the Trump Administration’s economic policy agenda, slowing economic growth in major economies outside the United States, and the possible adverse economic effects of recent public proposals for “Medicare for All” and a top marginal income tax rate of 70 percent.

Collectively, the 10 chapters that constitute this Report demonstrate that the strong economic performance in 2017 and 2018 constituted a sharp break from the previous pace of economic and employment growth during the present expansion, reflecting the Administration’s reprioritization of economic efficiency and growth over alternative policy aspirations that subordinated growth.
We further demonstrate that a unified agenda of tax, regulatory, labor, healthcare, financial, and energy market reforms that enhance the role of market prices is a more efficient and effective approach to unleashing the growth potential of the U.S. economy. The CEA’s mandate under the Employment Act of 1946 is to advise on how best to achieve “maximum employment, production, and purchasing power.” To this end, this Report provides evidence supporting the CEA’s endorsement of free, competitive enterprise relying on market prices to guide economic activity over alternatives demanding increased socialization of productive assets and a consequently diminished role for market prices.

Contents

Chapter 1: Evaluating the Effects of the Tax Cuts and Jobs Act . . . 35
  Output and Investment . . . 38
  Labor Market Effects . . . 54
  International Developments . . . 62
  The “Deemed Repatriation” of Accumulated Foreign Earnings . . . 63
  Share Repurchases and Capital Distributions . . . 67
  Conclusion . . . 75

Chapter 2: Deregulation: Reducing the Burden of Regulatory Costs . . . 77
  Principles of Regulation and Regulatory Impact Analysis . . . 81
  Public Goods and Private Markets . . . 81
  The Process of Doing Regulatory Impact Analysis . . . 82
  The Current Regulatory Landscape . . . 83
  Federal Regulatory and Deregulatory Actions . . . 83
  The Trump Administration’s Regulatory Cost Caps Are Reducing Costs . . . 92
  Why More Deregulation? . . . 97
  Estimates of the Aggregate Cost of Regulation . . . 97
  The Need to Level the Playing Field for Deregulation . . . 100
  The Cumulative Economic Impact of Regulation . . . 105
  The Effects of Regulation Are Transmitted through Markets . . . 105
  The Cumulative Burden I: Within Industry . . . 107
  The Cumulative Burden II: Costs along the Supply Chain . . . 109
  Lessons Learned: Strengthening the Economic Analysis of Deregulation . . . 113
  Diagnosing Market Failure . . . 113
  The Costs of Regulatory Actions That Are Correct on Average . . . 115
  Examples of the Excess Burdens of Regulatory Actions . . . 116
  The Burdens of Nudge Regulatory Actions . . . 119
  Expanding Use of Regulatory Impact Analysis . . . 123
  Case Studies of Deregulatory Actions and Their Benefits and Costs . . . 126
  Case Study 1: Association Health Plans . . . 126
  Case Study 2: Short-Term, Limited-Duration Insurance Plans . . . 130
  Case Study 3: Specifying the Joint Employer Standard . . . 134
  Conclusion . . . 137

Chapter 3: Expanding Labor Force Opportunities for Every American . . . 139
  Long-Run Trends in Adult Employment, Labor Force Participation, and Wage Earnings . . . 142
  Employment and Labor Force Participation . . . 142
  Wages and Labor Earnings . . . 145
  Prime-Age Employment by Gender . . . 151
  Barriers to Work from Child Care Expenses . . . 156
  Policies to Reduce Barriers to Work Resulting from Child Care Expenses . . . 162
  Prime-Age Employment by Race, Ethnicity, and Education . . . 167
  Increasing Workers’ Skills and Closing Skill Mismatches . . . 172
  Reforming Occupational Licensing . . . 180
  Employment Experiences in Rural Areas . . . 183
  Policies to Enhance Rural Communities . . . 188
  Conclusion . . . 192

Chapter 4: Enabling Choice and Competition in Healthcare Markets . . . 195
  Rationales for the Government’s Healthcare Interventions That Restrict Competition and Choice . . . 199
  Uncertainty, Third-Party Payments, and the Problem of Moral Hazard . . . 200
  Asymmetric Information . . . 202
  Barriers to Market Entry . . . 203
  The Inelastic Demand for Healthcare . . . 204
  Healthcare Is Not Exceptional . . . 207
  Redistribution and Merit Goods . . . 207
  Current Proposals That Decrease Choice and Competition . . . 209
  Implications for the Value of the Program and Health Outcomes . . . 210
  Economies of Scale and Administrative Costs in Insurance . . . 211
  Cross-Country Evidence on the Effects of Universal Healthcare on Health Outcomes and the Elderly . . . 214
  The Lower Quality of Universal Coverage, in Terms of Reduced Availability . . . 216
  A U.S. Single-Payer System Would Have Adverse Long-Run Effects on Global Health through Reduced Innovation . . . 219
  Financing “Medicare for All” . . . 221
  The Administration’s Actions to Increase Choice and Competition in Health Insurance . . . 223
  The Stability of the Nongroup Health Insurance Market . . . 224
  Setting the Individual Tax Mandate Penalty to Zero . . . 227
  A Cost-Benefit Analysis of Setting the Individual Mandate Tax Penalty to Zero . . . 229
  Improving Competition to Lower Prescription Drug Prices . . . 234
  Lowering Prices through Competition . . . 235
  The Administration’s Efforts to Enhance Generic and Innovator Competition . . . 238
  Estimated Reductions in Pharmaceutical Drug Costs from Generic Drug Entry . . . 241
  Estimates of the Value of Price Reductions from New Drugs . . . 243
  Conclusion . . . 245

Chapter 5: Unleashing the Power of American Energy . . . 247
  U.S. Fuel Production Reached Record Levels in 2018 . . . 249
  U.S. Oil Production Is At an All-Time High . . . 250
  The Natural Gas Revolution Rolls On . . . 257
  Coal Production Is Recovering after the 2015–16 Slump . . . 261
  U.S. Fuels in the Global Marketplace . . . 263
  U.S. Oil Exports Are At an Unprecedented High . . . 264
  After 60 Years, the U.S. Is Again a Net Exporter of Natural Gas . . . 267
  Coal Exports . . . 271
  Strategic Value . . . 274
  Energy Policy . . . 275
  Increasing Access to Production . . . 276
  Electricity Generation . . . 277
  Deregulation . . . 287
  Environmental Implications . . . 289
  Conclusion . . . 295

Chapter 6: Ensuring a Balanced Financial Regulatory Landscape . . . 297
  The Causes and Consequences of the 2008 Systemic Crisis . . . 298
  The Boom/Bust Cycle in Residential Real Estate . . . 299
  Implicit Government Support That Undermined Market Discipline . . . 303
  An Ineffective and Uncoordinated Regulatory Response . . . 307
  The Consequences of the Financial Crisis . . . 309
  The Consequences of the Dodd-Frank Act . . . 311
  Addressing Systemic Risk . . . 311
  Dodd-Frank’s Ill-Considered Approach . . . 312
  Dodd-Frank’s Consequences . . . 314
  A More Measured Approach to Financial Regulation . . . 318
  Core Principles for Regulating the U.S. Financial System . . . 318
  Recommendations for Meeting the Core Principles . . . 318
  Conclusion . . . 335

Chapter 7: Adapting to Technological Change with Artificial Intelligence while Mitigating Cyber Threats . . . 339
  What Is Artificial Intelligence? . . . 342
  Machine Learning . . . 344
  Applications of AI Technology . . . 346
  Technological Progress and the Demand for Labor . . . 347
  A Brief History of Technological Change and Work . . . 347
  Effects of Technological Progress on Investment and Wages . . . 349
  Trade between People and Machines . . . 352
  The Uneven Effects of Technological Change . . . 355
  Differential Effects by Occupation and Skill . . . 355
  The Scale and Factor-Substitution Effects of an Industry’s Technological Progress . . . 355
  When Will We See the Effects of AI on the Economy? . . . 358
  Cybersecurity Risks of Increased Reliance on Computer Technology . . . 361
  Assessing the Scope of the Cyber Threat . . . 361
  Potential Vulnerabilities by Industry . . . 365
  The Role of Policy . . . 368
  Policy Considerations as AI Advances: Preparing for a Reskilling Challenge . . . 368
  The Administration’s Policies to Promote Cybersecurity . . . 370
  The Administration’s Policies to Maintain American Leadership in Artificial Intelligence . . . 371
  The Administration’s Implementation of the National Cyber Strategy . . . 373
  Further Artificial Intelligence and Future of Work Policy Considerations . . . 375
  Conclusion . . . 378

Chapter 8: Markets versus Socialism . . . 381
  The Economics of Socialism . . . 386
  The Socialist Economic Narrative: Exploitation Corrected by Central Planning . . . 386
  The Role of Incentives in Raising and Spending Money . . . 388
  The Economic Consequences of “Free” Goods and Services . . . 391
  Socialism’s Track Record . . . 392
  State and Collective Farming . . . 394
  Unintended Consequences . . . 395
  Lessons Learned . . . 399
  Central Planning in Practice . . . 400
  The Case of Venezuela Today: An Industrialized Country with Socialist Policies . . . 401
  Economic Freedom and Living Standards in a Broad Cross Section of Countries . . . 403
  The Nordic Countries’ Policies and Incomes Compared with Those of the United States . . . 406
  Measuring Tax Policies in the Nordic Countries . . . 406
  Measuring Regulation in the Nordic Countries . . . 411
  Income and Work Comparisons with the United States . . . 412
  Returns to “Free” Higher Education in the Nordic Countries . . . 417
  Socialized Medicine: The Case of “Medicare for All” . . . 419
  “Medicare for All” from an International Perspective . . . 420
  Effects on Overall Economic Activity . . . 423
  Conclusion . . . 425

Chapter 9: Reducing Poverty and Increasing Self-Sufficiency in America . . . 427
  The Success of the War on Poverty . . . 432
  The Elements of a Poverty Measure . . . 433
  The Inability of Existing Poverty Measures to Assess the War on Poverty . . . 438
  The Full-Income Poverty Measure . . . 448
  The Failure to Promote Self-Sufficiency . . . 456
  Trends in Self-Sufficiency . . . 456
  Work among Nondisabled, Working-Age Recipients of Key Welfare Programs . . . 459
  A New War on Poverty . . . 465
  The Success of Welfare Reform . . . 466
  Lessons from Welfare Reform for Work Requirements in Noncash Programs . . . 468
  Complementing Work Requirements with Work Supports and Rewards . . . 477
  Benefits for Children . . . 482
  Conclusion . . . 483

Chapter 10: The Year in Review and the Years Ahead . . . 485
  Output . . . 489
  Consumer Spending . . . 490
  Government Purchases . . . 493
  Net Exports . . . 493
  The Trade Year in Review . . . 495
  U.S. Trade Policy in 2018 . . . 495
  Section 201: Solar Cells and Large Residential Washing Machines . . . 497
  Section 232: Steel and Aluminum . . . 499
  Section 301: China . . . 502
  Trade Agreements . . . 504
  Case Study: The Universal Postal Union . . . 507
  Policy Developments . . . 511
  Fiscal Policy . . . 511
  Monetary Policy . . . 512
  Productivity . . . 514
  Inflation . . . 515
  Financial Markets . . . 515
  Equity Markets . . . 516
  Interest Rates and Credit Spreads . . . 517
  The Global Macroeconomic Situation . . . 520
  Developments in 2018 . . . 520
  The Outlook . . . 526
  GDP Growth during the Next Three Years . . . 527
  GDP Growth over the Longer Term . . . 527
  Upside and Downside Forecast Risks . . . 531
  Conclusion . . . 532

References . . . 533

Appendixes
  A. Report to the President on the Activities of the Council of Economic Advisers During 2018 . . . 613
  B. Statistical Tables Relating to Income, Employment, and Production . . . 625

Figures
  1-1 Adjustment Dynamics to a New Steady-State Capital Output Ratio . . . 41
  1-2 Percentage of NFIB Survey Respondents Planning Capital Expenditures in the Next 3 to 6 Months, 2016–18 . . . 42
  1-3 Percentage of NFIB Survey Respondents Reporting That Now Is a Good Time to Expand, 2016–18 . . . 42
  1-4 Morgan Stanley’s Capex Plans Index, 2016–18 . . . 44
  1-5 Core Capital Goods Orders, 2012–18 . . . 44
  1-6 Growth in Real Nonresidential Fixed Investment, 2017:Q4–2018:Q3 . . . 45
  1-7 Forecast Errors for Equipment Investment and Price . . . 46
  1-8 Structural VAR and Narrative Estimates versus Actual . . . 49
  1-9 Growth in Real GDP, 2012:Q4–2018:Q3 . . . 50
  1-i 12-Month Percentage Change in National Real Housing Price Indices, 2015–18 . . . 52
  1-ii Four-Quarter Percentage Change in Regional Real Housing Price Indices, 2015–18 . . . 53
  1-iii National Home Ownership Rate, 2015–18 . . . 53
  1-iv Distribution of Announced Tax Reform Bonuses, 2018 . . . 58
  1-10 Net Percentage of NFIB Survey Respondents Planning to Raise Worker Compensation in the Next 3 Months, 2016–18 . . . 59
  1-11 Net Percentage of NFIB Survey Respondents Planning to Increase Employment, 2016–18 . . . 59
  1-12 Total Private Job Openings, 2014–18 . . . 60
  1-13 Above-Trend Real Labor Compensation and Wage Growth, 2018 . . . 61
  1-14 Real U.S. Repatriated Earnings and Share Repurchases, 2015–18 . . . 68
  1-15 Real Nonfinancial Corporate Share Repurchases, 2010–18 . . . 70
  1-16 Real Corporate Net Dividends, 2010–18 . . . 71
  1-17 Real Private Nonresidential Fixed Investment by Noncorporate Businesses, 2012–18 . . . 72
  1-18 Real Venture Capital Investment, 2012–18 . . . 72
  1-19 Gross Foreign Sales of U.S. Corporate Stocks, 2013–18 . . . 73
  1-v The Tax Cuts and Jobs Act: Farm Estates Exempted from Filing and Paying Estate Taxes, 2016 . . . 74
  2-1 Economically Significant Final Rules, Presidential Year 1990–2018 . . . 84
  2-2 OMB-Reviewed Final Rules, by Agency, 2000–2018 . . . 89
  2-3 Real Annual Costs of Major Rules, Fiscal Years 2000–2019 . . . 89
  2-4 Cumulative Costs of Major Rules, Fiscal Years 2000–2019 . . . 90
  2-i Small Business Optimism Index, 2000–2018 . . . 91
  2-5 Distorted Allocation of Resources Among Industries . . . 108
  2-6 How Industry Regulation Affects the Aggregate Factor Market . . . 111
  3-1 Labor Force Participation Rate and Employment-to-Population Ratio, 1950–2018 . . . 143
  3-2 Adult Population by Age (Years), 1950–2018 . . . 144
  3-3 Labor Force Participation Rate by Age (Years), 1950–2018 . . . 144
  3-4 Nominal Weekly Wage Growth Among All Adult Full-Time Wage and Salary Workers, 2010–18 . . . 147
  3-5 Nonfarm Business Sector Real Output per Hour, 1980–2018 . . . 149
  3-6 Share of Adults Starting Work Who Were Not in the Labor Force Rather Than Unemployed, 1990–2018 . . . 150
  3-7 Labor Force Participation Rate and Employment-to-Population Ratio for Prime-Age Adults by Gender, 1950–2018 . . . 152
  3-8 Labor Force Participation Rates Among Prime-Age Women by Marital and Parental Status, 1982–2018 . . . 157
  3-9 Child Care Costs as a Percentage of States’ Median Hourly Wage, 2017 . . . 161
  3-10 Number of States and Average Center-Based Child Care Cost by Minimum Staff-to-Child Ratio and Age Group . . . 165
  3-11 Employment-to-Population Ratio for Prime-Age Adults by Race, 1994–2018 . . . 168
  3-12 Labor Force Participation Rate for Prime-Age Adults by Race, 1972–2018 . . . 168
  3-i Employment-to-Population Ratio for Prime-Age Males by Race, 1994–2018 . . . 169
  3-ii Employment-to-Population Ratio for Prime-Age Females by Race, 1994–2018 . . . 170
  3-iii Rate of Imprisonment, by Gender and Race, 2016 . . . 171
  3-13 Employment-to-Population Ratio for Prime-Age Adults by Education Level, 1992–2018 . . . 173
  3-14 Employment Growth by Industry Relative to Total Adult Population Growth, 1979–2018 . . . 174
  3-15 Job Opening Rates by Industry, Q4:2018 . . . 175
  3-16 Public Expenditures on Active Labor Market Programs, 2016 . . . 177
  3-17 Expenditures on Education and Skills Training by Age and Source, 2017 . . . 177
  3-18 Workers with a Professional License or Certification, by Occupation and Education Level, 2017 . . . 182
  3-19 Employment-to-Population Ratio for Prime-Age Adults by Geography, 1976–2018 . . . 185
  3-20 Industry Employment by Geography, 2017 . . . 186
  3-21 Educational Attainment in Rural versus Urban Areas . . . 187
  3-22 Manufacturing Employment Growth, 1980–2018 . . . 189
  3-23 Goods Producing Employment Growth, 1980–2018 . . . 190
4-1 4-2 4-3 4-4 4-5 4-6 4-7 4-8 4-9 4-10 4-11 4-12 5-1 5-2 5-3 5-i 5-4 5-5 5-6 5-7 5-8 5-9 5-10 5-11 5-12 5-13 5-14 5-15 5-16 5-17 5-ii 5-iii 5-18
  Average Survival from a Cancer Diagnosis, 1983–99 . . . 215
  Seniors Who Waited at Least Four Weeks to See a Specialist during the Past Two Years, 2017 .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218 Effect of U.S. Drug Price Controls on Global Longevity, Among Those Age 55–59, 2010–60 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221 Premium Costs as a Function of Household Income, 2018. . . . . . . . . . . . . . . 226 Nominal Gross Premiums per Member per Year for Subsidized Enrollees, 2014–8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227 Benefits of Setting the Individual Mandate Penalty to Zero . . . . . . . . . . . . . . 230 Price of Prescription Drugs Relative to the PCE, 2013–18 . . . . . . . . . . . . . . . 236 Generic Drug Price Relative to Brand Name Price, 1999–2004. . . . . . . . . . . . 237 New Generic Drug Applications Approved, 2013–18 . . . . . . . . . . . . . . . . . . . . 239 New Drug Applications and Biologics License Applications Approved, 2013–17 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241 Price Decline Due to Generic Drug Entry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242 Price Reductions from Brand Name Entry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244 Energy Content of U.S. Fossil Fuels Production, 1980–2018 . . . . . . . . . . . . . 250 U.S. Monthly Crude Oil Production, 1981–2018. . . . . . . . . . . . . . . . . . . . . . . . . 252 Crude Oil Production in the United States, Russia, and Saudi Arabia, 2008–18 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252 OPEC Crude Oil Production vs. Production Targets, 2016–18 . . . . . . . . . . . . 255 U.S. Lower-48 Production versus Hubbert’s 1956 Peak Oil Prediction, 1920–2018 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257 U.S. 
Natural Gas Trade and Withdrawals, 1940–2018. . . . . . . . . . . . . . . . . . . . 260 U.S. Monthly Trade in Natural Gas, 2001–18 . . . . . . . . . . . . . . . . . . . . . . . . . . . 261 U.S. Quarterly Coal Disposition, 2002–18. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262 Changes in AEO Natural Gas Forecast, 2010 versus 2018 . . . . . . . . . . . . . . . . 263 U.S. Crude Oil and Finished Product Exports, 2000–2018. . . . . . . . . . . . . . . . 265 U.S. Crude and Petroleum Product Exports, 2000–2018. . . . . . . . . . . . . . . . . 266 U.S. Petroleum Trade Balance, 2000–2018. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266 Real U.S. Refiners’ Acquisition Costs and Recessions, 1974–2018 . . . . . . . . 268 Historic and Projected LNG Export Revenue, 2015–25 . . . . . . . . . . . . . . . . . . 271 Export Prices for U.S. LNG by Destination, 2017. . . . . . . . . . . . . . . . . . . . . . . . 272 U.S. Quarterly Coal Exports, 2007–18. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273 Iranian Crude Oil Exports, 2016–18 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275 OPEC Spare Production Capacity and Crude Oil Prices, 2001–20. . . . . . . . . 276 Northern Alaska, the Arctic National Wildlife Refuge, and the Coastal Plain 1002 Area. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279 Alaska Crude Oil Production and Arctic National Wildlife Refuge Production Forecasts, 1973–2050. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280 CEA Estimates of Federal Electricity Generation Subsidies by Fuel Type for Fiscal Year 2016. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
284 Economic Report of the President 5-19 5-20 5-21 5-iv 5-v 6-1 6-2 6-i 6-3 6-4 6-5 6-6 6-7 6-8 6-9 6-10 6-ii 6-iii 7-1 7-2 7-i 7-3 7-4 7-5 7-6 7-7 8-1 8-2 8-3 8-4 8-5 8-6 8-7 Average Total Cost for Investor-Owned Utilities by Fuel Type, 2007–17 . . . 286 Energy Intensity of GDP, 1980–2015 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289 Annual World Carbon Dioxide Emissions, 1990–2017 . . . . . . . . . . . . . . . . . . . 290 Sulfur Dioxide Emissions and Rainwater Acidity, 1990–2017 . . . . . . . . . . . . . 292 Global Marine Fuel Sulfur Limits, 2005–21 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294 S&P CoreLogic Case-Shiller Home Price Index, National Value, 1996–2011 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300 Increase in S&P CoreLogic Case-Shiller Home Price Index by City, 1996–2006 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300 The Rise in Subprime and Alt-A Loans Leading into the Financial Crisis, 2000–2007 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302 Decrease in S&P CoreLogic Case-Shiller Home Price Index by City, 2006–11 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304 Percentage of U.S. Conventional Subprime Mortgages 90 Days or More Past Due, 2002–16 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304 Share of U.S. Home Mortgage Debt Held by Financial Sectors, 1955–2017 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306 Gross Repurchase Agreement Funding to Banks and Broker-Dealers, 1990–2010 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
308 National and Long-Term Unemployment Rate, 2000–2018 . . . . . . . . . . . . . . 310 Tier 1 Capital Ratios of U.S. G-SIBs, 2001–18 . . . . . . . . . . . . . . . . . . . . . . . . . . . 313 Liquid Assets of U.S. G-SIBs, 2000–18 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313 Average Economic Growth by Expansion Period, 1983–2018 . . . . . . . . . . . . . 316 Consolidation in U.S. Banking and Thrift Industries, 1934–2018 . . . . . . . . . 331 Share of Industry Assets Held by U.S. Banking Organizations That Became the Four Largest by 2008, 1985–2018 . . . . . . . . . . . . . . . . . . . . . . . . . . 332 Error Rate of Image Classification by Artificial Intelligence and Humans, 2010–17 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343 The Effect of AI on the Amount of Capital and the Distribution of Factor Incomes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351 Precision Agriculture Use in Peanuts and Soybeans . . . . . . . . . . . . . . . . . . . . 356 Share of Respondents Reporting Income from Ride-Sharing Platforms in the Past Year, 2013–18 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359 Cybersecurity Breaches That Were Made Public, 2005–18 . . . . . . . . . . . . . . . 363 Industries That Are Most Lacking DMARC Protocol Among Fortune 500 Companies by Value Added, 2017 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367 DMARC Protocol Use Across Government Agencies, 2018 . . . . . . . . . . . . . . . 367 Supply-and-Demand Ratio for Cybersecurity Jobs, 2018 . . . . . . . . . . . . . . . . 370 Four Ways to Spend Money . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389 Annual Trend of Births and Deaths in Ukraine, by Gender, 1924–39 . . . . . . 398 Birthrates and Death Rates in China, 1949–78 . . . . . . . . . . . . . . . . . . 
. . . . . . . . 398 Total Production of Petroleum and Other Liquid Fuels, 1998–2018 . . . . . . . 403 The Progressivity of Personal Income Tax Structures in the Nordic Countries and the United States, 2017 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411 Average Annual Cost of a Honda Civic in the United States versus Denmark, 2018 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414 Real Income and GDP per Capita of People of Nordic Ancestry, by Place of Residence, 2015. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415 Economic Report of the President | 31 8-8 9-1 9-2 9-3 9-4 9-5 9-6 9-7 9-8 9-9 9-10 9-i 9-ii 9-11 9-12 10-1 10-2 10-3 10-4 10-5 10-6 10-7 10-8 10-9 10-10 10-11 10-12 10-13 10-14 32 | Net Lifetime Private Financial Returns from Attaining Tertiary Education, 2017 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419 Percentage of U.S. Population Enrolled in Noncash Welfare Programs, 1963–2017 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442 Price Indices of Various Inflation Measures, 1963–2017 . . . . . . . . . . . . . . . . . 444 Percentage of the Population Living in Poverty, Based on Various Measures, 1960–2017. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 448 Percent of the Population Living in Poverty, Based on Official Poverty Measure and the Full-Income Poverty Measure, 1963–2017. . . . . . . . . . . . . . 452 Percentage of the Population Living in Poverty, Crosswalk from the Official Poverty Measure to the Full-Income Poverty Measure, 1963–2017 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
453 Individual-Level, Posttax, Posttransfer Household Size-Adjusted Income Distribution, Including In-Kind Transfers and Market Value of Health Insurance, Using PCE Inflation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 454 Percentage of Nondisabled, Working-Age Adults Living in a Household That Receives Assistance during the Year, 1967–2017. . . . . . . . . . . . . . . . . . . 458 Percentage of Nondisabled, Working-Age Adults Employed, by Gender, 1968–2017. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 459 Hours Worked among December 2013 Nondisabled, Working-Age SNAP Recipients, January 2013 to December 2014. . . . . . . . . . . . . . . . . . . . . 463 Female Employment, by Marital Status and Youngest Child’s Age, and Number of States with Welfare Reform, 1985–2018. . . . . . . . . . . . . . . . . 468 States with Medicaid Waivers Implemented, Approved, and Pending, 2018. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 472 States Waiving SNAP Time Limit for Able-Bodied Adults (18-49) Without Dependents, Fourth Quarter of Fiscal Year 2018. . . . . . . . . . . . . . . . 474 Earned Income Tax Credit Schedule, by Number of Children and Filing Type, Tax Year 2018. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 479 Child Tax Credit for a Family with Two Children, Unmarried and Married Parents, Under Previous Law and Tax Cuts and Jobs Act (TCJA), 2018. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481 Evolution of Blue Chip Consensus Forecasts for Real GDP Growth during the Four Quarters of 2017, 2018, and 2019. . . . . . . . . . . . . . . . . . . . . . 488 Personal Saving Rate, 1990–2018. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 491 Consumer Sentiment, 1980–2018. . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . 492 Consumption and Wealth Relative to Disposable Personal Income, 1952–2018 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492 Government Purchases as a Share of Nominal GDP, 1948–2018. . . . . . . . . . 494 Contribution of Net Exports to U.S. Real GDP Growth, 2000–2018. . . . . . . . 494 The Federal Reserve’s Total Assets, 2000–2018. . . . . . . . . . . . . . . . . . . . . . . . . 514 Consumer Price Inflation, 2012–18. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 516 The CBOE’s Market Volatility Index, 2018. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 517 North American Investment Grade and High Yield Credit Default Swap Spreads, 2015–18. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 518 U.S. Treasuries’ Yield Curve, 2016–18 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519 Foreign Real GDP and U.S. Real Export Growth, 2000–2018. . . . . . . . . . . . . . 521 China’s Broad Credit and Activity Proxy, 2009–19. . . . . . . . . . . . . . . . . . . . . . . 524 Nonperforming Loans and Return on Assets for China’s Commercial Banks, 2011–18. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 525 Economic Report of the President 10-15 10-16 Emerging Market Credit to Nonfinancial Corporations, 2008–18 . . . . . . . . . 525 Forecast for Growth Rate of Real GDP, 2018–29 . . . . . . . . . . . . . . . . . . . . . . . . 529 3-1 3-2 Number of Prime-Age Females by Marital and Parental Status, 2018 . . . . . 158 EITC Benefits for a Married Couple with Two Children, Based on the Additional Earnings from a Second Full-Time Worker, 2018 . . . . . . . . . . . . . 164 Adult Waiting Times for Nonemergency or Elective Surgery, 2016. . . . . . . . 
217 Adult Waiting Times for Specialist Appointments during the Past Two Year, 2017. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217 IRS Reporting of Individual Mandate Payments, 2014–16 . . . . . . . . . . . . . . . 228 Agricultural Production in Cuba After the Nationalization of Farms . . . . . . 397 Tax Policies in the United States and the Nordic Countries, 2015–18 . . . . . 408 All-In Average Personal Income Tax Rate, Less Transfers, at the Average Wage, 2017. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412 Regulation Policies in the United States and the Nordic Countries, 2013. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413 Actual Individual Consumption per Head at Current Prices and Purchasing Power Parity, 2016 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416 Relative Income Inequality, 2015 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417 National Accounts with and without “Medicare for All,” 2022. . . . . . . . . . . . 425 Basic Elements of Poverty Measures. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440 Number of People by Welfare Receipt, Age and Disability Status, December 2013. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460 Percentage of Nondisabled, Working-Age Adults Working Various Weekly Average Hours by Welfare Receipt, December 2013. . . . . . . . . . . . . . 461 Number of Nondisabled, Working-Age Adults by Welfare Program Receipt, by Category, December 2013. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 470 Timeline of Trade Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
500 Year-over-Year Real GDP Growth for Selected Areas and Countries, 2017–19. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 522 Administration Economic Forecast, 2017–29. . . . . . . . . . . . . . . . . . . . . . . . . . . 528 Supply-Side Components of Actual and Potential Real Output Growth, 1953–2029 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 530 Tables 4-1 4-2 4-3 8-1 8-2 8-3 8-4 8-5 8-6 8-7 9-1 9-2 9-3 9-4 10-1 10-2 10-3 10-4 Boxes 1-1 1-2 1-3 1-4 2-1 2-2 2-3 2-4 2-5 The Mortgage Interest Deduction and the Tax Cuts and Jobs Act. . . . . . . . . . . 51 Corporate Bonuses, Wage Increases, and Investment since the TCJA’s Passage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57 The TCJA’s Provisions Shift the United States toward a Territorial System of Taxation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64 Estate Taxes and Family Farms. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74 The Terminology of Federal Regulatory Actions. . . . . . . . . . . . . . . . . . . . . . . . . . 85 Economic Regulation and Deregulation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86 Small Businesses and the Regulatory Burden. . . . . . . . . . . . . . . . . . . . . . . . . . . . 91 Notable Deregulatory Actions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93 Retrospective Regulatory Review. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95 Economic Report of the President | 33 2-6 3-1 3-2 3-3 3-4 4-1 5-1 5-2 5-3 5-4 5-5 6-1 6-2 6-3 6-4 6-5 7-1 7-2 7-3 7-4 8-1 9-1 9-2 9-3 9-4 10-1 10-2 10-3 34 | Opportunity Costs, Ride Sharing, and What Is Not Seen. . . . . . . . . . . . . . . . . 
104 The Opioid Epidemic and Its Labor Market Effects. . . . . . . . . . . . . . . . . . . . . . 154 Employment Rates among Black Men. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169 The President’s National Council for the American Worker. . . . . . . . . . . . . . 178 Strengthening Local Economies through Opportunity Zones. . . . . . . . . . . . 191 Additional Regulatory Reforms. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233 OPEC’s Oil Production Cuts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255 The Important Economic Effects of State Regulation on Energy Production. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259 The Federal Role in Promoting Domestic Fuels Production: The Case of Alaska. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278 Long-Term Improvements in Environmental Quality. . . . . . . . . . . . . . . . . . . . 292 International Environmental Standards and Liquid Fuels Markets: IMO 2020. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293 Defining Subprime, Alt-A, and Nontraditional Mortgages. . . . . . . . . . . . . . . . 302 Measuring Regulatory Burden on the Financial Sector. . . . . . . . . . . . . . . . . . 317 Evaluating the Costs and Benefits of Bank Regulations . . . . . . . . . . . . . . . . . 323 Restoring Market Discipline in Banking. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328 Factors Driving the Long-Term Consolidation in Banking. . . . . . . . . . . . . . . . 331 DARPA: Strategic Investments in Artificial Intelligence and Cybersecurity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354 Technological Change in Agriculture and Rural America . . . . . . . . . . 
. . . . . . 356 Educating the Cyber Workforce of Tomorrow . . . . . . . . . . . . . . . . . . . . . . . . . . 374 Estonia: A Case Study in Modern Cybersecurity Practices . . . . . . . . . . . . . . . 376 What Is “Medicare for All”?. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 421 The CEA’s Role at the Beginning of the War on Poverty. . . . . . . . . . . . . . . . . . 437 Obtaining Better Evidence through Better Data . . . . . . . . . . . . . . . . . . . . . . . . 450 Medicaid Community Engagement Demonstration Projects. . . . . . . . . . . . . 471 Addressing Problems with SNAP Work Requirement Waivers. . . . . . . . . . . . 473 Mitigating Trade Retaliation for Agricultural Producers . . . . . . . . . . . . . . . . . 503 USMCA and U.S. Auto Manufacturing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 505 USMCA and Canadian Dairy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 506 Economic Report of the President x Chapter 1 Evaluating the Effects of the Tax Cuts and Jobs Act The 2018 Economic Report of the President, citing an extensive literature of over 80 peer-reviewed studies, provided evidence that before the Tax Cuts and Jobs Act (TCJA), the U.S. economy and U.S. workers had been adversely affected by the conjunction of rising international capital mobility and increasingly uncompetitive U.S. business taxation relative to the rest of the world. The Report concluded that the results of the convergence of these two trends were deterred capital formation in the United States, an absence of capital deepening, and consequently stagnant wage growth. Considering the weight of evidence in support of these observations, the Report projected that the business and international provisions of the TCJA would raise the target U.S. capital stock, reorient U.S. 
capital away from direct investment abroad in low-tax jurisdictions and toward domestic investment, and raise worker compensation and household income through both a short-run bargaining channel and a long-run capital deepening channel. Finally, the Report noted that reductions in effective marginal personal income tax rates could be expected to induce positive labor supply responses. In this chapter, we evaluate each of these anticipated effects of the TCJA on the basis of currently available data, and with particular attention to the relevant time horizons of each margin of adjustment to the positive tax shock. We find that firms responded immediately to the TCJA by upwardly revising planned capital expenditures, employee compensation, and hiring. We further find that real private investment in fixed assets rose at an annual rate of about 8 percent from the fourth quarter of 2017 through the third quarter of 2018, to $150 billion (about 6 percent) above the level reconstructed from the projected trend of the preceding expansion, during which fixed investment grew at an annual rate of about 5 percent. In addition to reporting a tally of over 6 million workers receiving an average bonus of nearly $1,200, we also estimate that, as of the third quarter of 2018, real disposable personal income per household was up $640 over the trend. Expressed as a perpetual annuity, this corresponds to a lifetime pay raise of about $21,000 for the average household—a $2.5 trillion boost to total real disposable personal income across all households. Finally, we report that the flow of U.S. direct investment abroad declined by $148 billion, while U.S. direct investment in eight identified tax havens declined by $200 billion, as U.S. multinational enterprises redirected capital investment toward the domestic economy.
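The perpetuity arithmetic behind the lifetime-pay-raise figure can be checked in a few lines. The 3 percent real discount rate and the 120 million household count used below are illustrative assumptions for this sketch, not parameters published in the Report:

```python
# Back-of-the-envelope check: a permanent annual income gain expressed as
# a perpetuity has present value PV = payment / discount_rate.
# ASSUMPTIONS (not from the Report): a 3% real discount rate and roughly
# 120 million U.S. households.

def perpetuity_value(annual_gain: float, discount_rate: float) -> float:
    """Present value of a constant annual payment received forever."""
    return annual_gain / discount_rate

per_household = perpetuity_value(640, 0.03)  # about 21,333, near the ~$21,000 cited
aggregate = per_household * 120e6            # about $2.6 trillion, the order of the
                                             # $2.5 trillion figure cited in the text
print(round(per_household), round(aggregate / 1e12, 1))
```

A slightly lower discount rate or household count moves the result toward the Report's $21,000 and $2.5 trillion figures, so the cited numbers are internally consistent under conventional parameter choices.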
Applying insights from a large body of corporate finance literature, we then discuss channels—particularly shareholder distributions—through which we expect repatriations of past corporate earnings previously held abroad in low-tax jurisdictions to be efficiently reallocated by capital markets from cash-abundant to cash-constrained firms. On December 22, 2017, President Trump signed into law the Tax Cuts and Jobs Act (TCJA). With an estimated $5.5 trillion in gross tax cuts accompanied by $4 trillion in new revenue over 10 years, and with fundamental changes to itemization and a movement toward a territorial system of corporate income taxation, the TCJA arguably constituted the most significant combination of tax cuts and comprehensive tax reform in U.S. history. The TCJA was motivated by four principal objectives: tax relief for middle-income families, simplification of the personal income tax code, economic growth through business tax relief and increased domestic investment, and repatriation of overseas earnings. First, accordingly, in the personal income tax code, the standard deduction was approximately doubled by the TCJA, thereby exempting a greater share of middle-class incomes from Federal income tax liability altogether, and simplifying tax filing for millions of American taxpayers who would previously have had to itemize deductions. The law also lowered marginal personal income tax rates across nearly all brackets, and raised and expanded eligibility for the Child Tax Credit. Second, the law eliminated certain deductions that disproportionately benefited higher-income households, while capping others—such as the Mortgage Interest Deduction and State and Local Tax Deduction—that similarly skewed toward the highest-income tax filers. Third, to address the previous relative international uncompetitiveness of U.S.
business taxation, the TCJA lowered the top marginal Federal statutory corporate tax rate from 35 percent—the highest in the developed world—to 21 percent. In addition, the TCJA introduced a 20 percent deduction for most owners of pass-through entities and generally allowed for immediate full expensing of new equipment investment. Fourth, to encourage repatriation of past overseas earnings of U.S. multinational enterprises previously held abroad in low-tax jurisdictions, and to prevent future corporate profit shifting through the mispricing of intellectual property products and services, the TCJA applied a low 8 or 15.5 percent tax on previously untaxed deferred foreign income and introduced a trio of new mechanisms to deter artificial corporate profit shifting.

In the 2018 Economic Report of the President, the Council of Economic Advisers estimated that these provisions of the TCJA would:

1. Raise real capital investment by lowering the user cost of capital and thus raising the target steady-state flow of capital services.
2. Raise the growth rate of U.S. output—in the short run, through both supply- and demand-side channels; and in the long run, through a supply-side channel.
3. Raise worker compensation and household income, both through a short-run profit-sharing channel and a long-run capital deepening channel, raising the steady-state level of capital per worker.
4. Incentivize higher labor force participation.
5. Reorient U.S. capital investment away from direct investment abroad and toward domestic investment.
6. Induce large-scale repatriation of past overseas earnings of U.S. multinational enterprises previously held in low-tax jurisdictions.

In this chapter, we evaluate these estimates and projections utilizing data available since the TCJA became law, and with particular attention to the relevant time horizons of different margins of adjustment to a positive tax shock.
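The mechanics of the one-time tax on previously untaxed deferred foreign income can be sketched numerically: the 15.5 percent rate applies to the portion held in cash or cash equivalents and the 8 percent rate to the remainder. The firm and its earnings split below are hypothetical:

```python
# Sketch of the TCJA's one-time tax on previously untaxed deferred
# foreign income: 15.5% on the cash portion, 8% on the illiquid
# remainder, owed whether or not the earnings are actually repatriated.
# The earnings figures below are hypothetical, for illustration only.

def deemed_repatriation_tax(cash_earnings: float, illiquid_earnings: float) -> float:
    """One-time tax on deferred foreign earnings under the TCJA's rate split."""
    return 0.155 * cash_earnings + 0.08 * illiquid_earnings

# A hypothetical firm with $10 billion deferred abroad, 60% held in cash:
tax = deemed_repatriation_tax(6e9, 4e9)
print(tax / 1e9)  # tax owed, in billions of dollars
```

Because the tax is due regardless of whether the earnings return home, it removes the old incentive to leave profits parked abroad, which is the mechanism behind the repatriation flows discussed later in the chapter.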
Consistent with projections reported in the 2018 Economic Report of the President, we find that output and investment accelerated in response to the reduction in the user cost of capital, and more importantly rose substantially above the trend. Real gross domestic product (GDP) growth rose 1.0 percentage point above the recent trend, while capital expenditures by nonfinancial businesses were up 12.1 percent over the trend. We also find that real disposable personal income rose above the trend, especially as forward-looking firms raised near-term compensation to retain similarly forward-looking workers in a tightening labor market. As of 2018:Q3, we estimate that real disposable personal income per household was up about $640 over the trend, while real median usual earnings of full-time wage and salary workers were up $805 on an annualized basis. We furthermore report survey data indicating that these margins of adjustment were immediately anticipated by marked shifts in business expectations in response to the TCJA. In addition, we report that in the first three quarters of 2018 alone, $570 billion in overseas corporate dividends, including earnings previously reinvested abroad, were repatriated to the United States, out of an upper-bound estimated total stock of as much as $4.3 trillion, and that U.S. direct investment abroad declined by $148 billion as U.S. multinational enterprises redirected capital investment toward the domestic economy. We then discuss how repatriation affects the distribution of corporate earnings to shareholders, and how efficient capital markets utilize shareholder distributions to reallocate capital from established, cash-abundant firms without profitable investment opportunities to more dynamic, cash-constrained firms with profitable investment opportunities.
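The "above trend" comparisons in this chapter are deviations of realized levels from a pre-TCJA trend extrapolated forward. A minimal sketch of that computation, using made-up numbers rather than the CEA's underlying series:

```python
import math

# Illustrative sketch of an "above trend" comparison: extrapolate a
# constant (log-linear) pre-2018 growth rate forward and compare the
# realized level against that counterfactual path.
# ASSUMPTIONS: all numbers below are hypothetical, for illustration only.

def trend_level(base: float, annual_growth: float, years: float) -> float:
    """Extrapolate a constant-growth (log-linear) trend forward."""
    return base * math.exp(annual_growth * years)

base = 100.0                                 # index level at end of 2017
projected = trend_level(base, 0.05, 0.75)    # 5% trend, three quarters ahead
realized = 106.0                             # hypothetical realized level
gap_pct = 100 * (realized / projected - 1)   # percent above the trend path
print(round(projected, 2), round(gap_pct, 2))
```

The same arithmetic, applied to investment or income series, yields the dollar and percentage trend gaps quoted in the text.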
Finally, we also report the results of several simple simulations estimating the implied effects on long-run Federal government tax revenues of the higher economic growth that has thus far been observed since the TCJA's enactment. In summary, we find that the U.S. economy is responding auspiciously to the positive tax shock of the TCJA along multiple margins, and in patterns that are both broadly and specifically consistent with projections reported in the 2018 Economic Report of the President. Looking ahead, we suggest that making permanent the TCJA provisions that are currently scheduled to expire would improve the long-run potential growth of the U.S. economy.

Output and Investment

Changes in corporate income tax rates and depreciation allowances can induce large investment responses through their effect on the user cost of capital—as demonstrated by Cummins and Hassett (1992); Auerbach and Hassett (1992); Cummins, Hassett, and Hubbard (1994, 1996); Caballero, Engel, and Haltiwanger (1995); Djankov and others (2010); and Dwenger (2014). Essentially, the user cost of capital is the rental price of capital, corresponding to the minimum return on investment required to cover taxes, depreciation, and the opportunity cost of investing in physical capital accumulation rather than financial alternatives. By increasing (or decreasing) the after-tax rate of return on capital assets, a decrease (increase) in the tax rate on corporate profits decreases (increases) the before-tax rate of return required for the marginal product of new physical assets to exceed the cost of producing and using these assets, thereby raising (lowering) firms' demand for capital services.
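To make the mechanism concrete, the user cost can be computed from the standard Hall–Jorgenson formula, c = (r + δ)(1 − τz)/(1 − τ), where τ is the statutory corporate tax rate and z is the present value of depreciation deductions per dollar invested. The short sketch below illustrates the calculation; the parameter values for r, δ, and z are hypothetical assumptions chosen for illustration, not the CEA's actual calibration.

```python
# Illustrative Hall-Jorgenson user cost of capital: c = (r + delta)(1 - tau*z)/(1 - tau).
# Parameter values are hypothetical assumptions, not the CEA's calibration.

def user_cost(r, delta, tau, z):
    """Rental price of capital per dollar of asset.

    r     -- real required rate of return (opportunity cost of funds)
    delta -- economic depreciation rate
    tau   -- statutory corporate income tax rate
    z     -- present value of depreciation deductions per dollar invested
    """
    return (r + delta) * (1.0 - tau * z) / (1.0 - tau)

# Pre-TCJA: 35 percent rate, deductions spread over time (z < 1).
pre = user_cost(r=0.05, delta=0.10, tau=0.35, z=0.80)
# Post-TCJA: 21 percent rate, immediate full expensing of equipment (z = 1).
post = user_cost(r=0.05, delta=0.10, tau=0.21, z=1.00)

pct_change = 100 * (post - pre) / pre
print(f"user cost falls by {-pct_change:.1f} percent")
```

Note that with full expensing (z = 1) the tax terms cancel, so the tax system no longer distorts the marginal investment decision; and under a unit user-cost elasticity, a decline in the user cost of this magnitude would raise demand for capital services roughly one for one.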
As documented in the 2018 Economic Report of the President, early empirical estimates of the user-cost elasticity of investment (e.g., Eisner and Nadiri 1968) were much smaller than the neoclassical benchmark of unit elasticity (Jorgenson 1963; Hall and Jorgenson 1967), and were often outperformed by simple accelerator models of investment. However, subsequent studies (e.g., Goolsbee 1998, 2000, 2004; and Cummins, Hassett, and Oliner 2006) demonstrated that these estimates likely suffered from considerable omitted variable bias owing to (1) unobserved firm heterogeneity; (2) mismeasurement of investment fundamentals, resulting in attenuation bias; and (3) the correlation of statutory changes in corporate income tax rates, depreciation allowances, and tax credits with cyclical factors. Studies that successfully achieve identification—particularly by exploiting plausibly exogenous variation in the user cost of capital in the cross section of asset types (e.g., Cummins and Hassett 1992; Auerbach and Hassett 1992; Cummins, Hassett, and Hubbard 1994, 1996; and Zwick and Mahon 2017), or by utilizing micro-level panel data (e.g., Caballero, Engel, and Haltiwanger 1995; Dwenger 2014; and Zwick and Mahon 2017)—accordingly estimate much higher user-cost elasticities of investment. Indeed, Dwenger (2014) is unable to reject the hypothesis that the user-cost elasticity equals the neoclassical benchmark of –1.0. This implies that a tax change that lowers the user cost of capital by 10 percent would raise demand for capital services by up to 10 percent.
Following Devereux, Griffith, and Klemm (2002) and Bilicka and Devereux (2012), and assuming a consensus estimated user-cost elasticity of investment of –1.0, in the 2018 Economic Report of the President the CEA calculated that the corporate income tax provisions in the TCJA would, on average, lower the user cost of capital, and thus raise demand for capital services, by approximately 9 percent. Using the Multifactor Productivity Tables from the Bureau of Labor Statistics in a growth accounting framework to increment the Congressional Budget Office's June 2017 10-year GDP growth projections by the additional contribution to output from a larger target capital stock, and assuming constant capital income shares, the CEA then calculated that steady-state U.S. economic output would be between about 2 and 4 percent higher in the long run. More formally, DeLong and Summers (1992) derive the adjustment dynamics by beginning with this identity:

ΔY_t = (r + δ)ΔK_t

where Y is output, r is the social net rate of return, δ is the economic depreciation rate, and K is the capital stock. The gross increase in Y produced by an increase in K is the gross rate of return on capital multiplied by the increase in K. The capital stock of an economy initially in the steady state that receives a permanent boost, I, to its gross investment therefore evolves according to:

ΔK_t = I – δK_{t–1}

That is, the increase in the capital stock equals new gross investment minus depreciation of the preceding period's cumulative addition to the capital stock. In the first period, the entire increase in investment translates into an increase in the capital stock: ΔK_1 = I, such that ΔY_1 = (r + δ)I. In the second period, investment will still be higher by I, but because K_1 > K_0, depreciation will also be higher. The increase in the capital stock will therefore be smaller: ΔK_2 = I – δK_1 = I – δI = (1 – δ)I, and ΔY_2 = (r + δ)(1 – δ)I.
Successive increases in the capital stock will accordingly diminish, with the sum of changes gradually converging to a steady-state value ΔK*:

ΔK* = I/δ

And the cumulative change in output converges to a new steady-state level:

ΔY* = I(r + δ)/δ

An increase in investment equal to 1 percentage point of output can therefore induce up to a (r + δ)/δ percentage-point increase in the steady-state level of output, and up to a (r + δ)/(δt) percentage-point increase in the average growth rate of output over a period of t years. In the absence of capital adjustment costs, the standard neoclassical model therefore predicts an immediate jump in investment in the first period, though with no effect on the rate of growth of investment thereafter. The level effect, however, is permanent, such that the capital-to-output ratio and the ratio of the flow of new investment to the outstanding capital stock gradually approach their new, steady-state levels, as illustrated with a hypothetical example in figure 1-1. Economic research (e.g., Hartman 1972; Abel 1983; Caballero 1991; and Bar-Ilan and Strange 1996) suggests that the costs associated with adjusting capital stocks may result in short-run adjustment lags. Consequently, we would expect the first margin of adjustment to a positive tax shock to capital investment to be expectations, which, unlike capital and labor market contracts, are instantaneously flexible. Consistent with this anticipated effect, figure 1-2 reports the percentage of businesses in the National Federation of Independent Business's (NFIB's) monthly survey reporting plans to raise capital expenditures in the next 3 to 6 months, reported as a 3-month centered moving average to smooth out random noise. Figure 1-2 shows two marked upward shifts in the percentage of firms reporting planned increases in capital investment—first, at the moment of Donald Trump's election to the U.S. Presidency; and second, at the moment of the TCJA's passage.
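The convergence claimed above can be verified numerically. The sketch below iterates the accumulation identity ΔK_t = I − δK_{t−1} and confirms that the cumulative additions to capital and output approach I/δ and I(r + δ)/δ, using the same hypothetical parameter values as figure 1-1 (r = 0.05, δ = 0.15).

```python
# Numerical check of the DeLong-Summers adjustment dynamics: a permanent boost I
# to gross investment raises the capital increment by dK_t = I - delta*K_{t-1},
# with cumulative additions converging to K* = I/delta and Y* = I(r+delta)/delta.
# Parameters follow the hypothetical example in figure 1-1.

r, delta, I = 0.05, 0.15, 1.0   # I normalized to 1 percentage point of output

K = 0.0   # cumulative addition to the capital stock
Y = 0.0   # cumulative addition to output
for _ in range(500):
    dK = I - delta * K        # new investment net of depreciation on the increment
    K += dK
    Y += (r + delta) * dK     # output gain from the additional capital services

K_star = I / delta                  # steady-state capital addition
Y_star = I * (r + delta) / delta    # steady-state output addition
print(round(K, 4), round(K_star, 4))
print(round(Y, 4), round(Y_star, 4))
```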
These increases followed two years during which the percentage of firms reporting plans to raise capital expenditures was essentially flat. Reinforcing this pattern, figure 1-3 reports the percentage of NFIB respondents reporting that now is a good time to expand. Once again, the survey data reveal two marked spikes—first, after the election of President Trump; and second, after the TCJA's passage. After the TCJA's passage, the percentage of respondents reporting that now was a good time to expand broke the survey's previous 1984 record to set a new all-time high.

[Figure 1-1. Adjustment Dynamics to a New Steady-State Capital-Output Ratio. Note: Hypothetical example, assuming r = 0.05, δ = 0.15, and an initial capital-to-output ratio of 1.0.]
[Figure 1-2. Percentage of NFIB Survey Respondents Planning Capital Expenditures in the Next 3 to 6 Months, 2016–18. Sources: National Federation of Independent Business (NFIB); CEA calculations. Note: Data represent a centered 3-month moving average.]
[Figure 1-3. Percentage of NFIB Survey Respondents Reporting That Now Is a Good Time to Expand, 2016–18. Sources: National Federation of Independent Business (NFIB); CEA calculations. Note: Data represent a centered 3-month moving average; the previous all-time high was set in 1984.]

Meanwhile, in 2018:Q1, the Business Roundtable (2018) survey of CEOs reported record highs for their capital spending index and the percentage reporting rising capital spending in the next 6 months. Through 2018:Q3, both series remained higher than at any point since 2011:Q2.
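The survey series in figures 1-2 and 1-3 are smoothed with a centered 3-month moving average, a transformation simple enough to state in a few lines (the monthly values below are made-up placeholders, not the actual NFIB data):

```python
# Centered 3-month moving average, as used to smooth the NFIB survey series.
# Endpoint months have no centered window and are left as None.

def centered_ma3(series):
    out = [None] * len(series)
    for i in range(1, len(series) - 1):
        out[i] = (series[i - 1] + series[i] + series[i + 1]) / 3.0
    return out

raw = [24.0, 25.0, 23.0, 27.0, 30.0, 29.0]   # hypothetical monthly percentages
print(centered_ma3(raw))
```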
Also in 2018:Q1, the percentage of respondents to a National Association of Business Economists (2018) survey reporting rising capital expenditures on information and communication technology hit a record high, and it has remained well above the previous average since the question entered the survey. Broader survey results reflect the same pattern. Figure 1-4 reports the centered 3-month moving average of Morgan Stanley's Planned Capital Expenditures (Capex Plans) Index, which tracks what firms plan to spend in coming months. Again, after two years of decline, we observe two marked spikes after the election of President Trump and the TCJA's passage. Indeed, at the start of 2018, the index set its all-time high. Over time, as actual investment begins to reflect investment plans, we would expect these indices, as well as other survey responses, to edge back, as more respondents report plans to leave investment unchanged once the new, higher level of investment is attained. An additional, short-run margin of adjustment—succeeding the adjustment of expectations but preceding the adjustment of actual physical capital stocks—is new capital goods orders, as reported by purchasing managers. Figure 1-5 reports core capital goods orders, in billions of dollars, from January 2012 through November 2018. Once again, after two years of declines, we observe two sharp spikes in capital goods orders within months of investment-relevant events—first, after President Trump's election; and second, after the TCJA's passage. Despite expected adjustment costs and investment lags in the transition to a higher target capital stock, the first three quarters after the TCJA's passage saw a notable acceleration in investment.
Figure 1-6 reports growth in real private nonresidential fixed investment from the time of the TCJA's passage until the third quarter of 2018, both for nonresidential investment overall and for the major subcomponents of structures, equipment, and intellectual property products, expressed as compound annual growth rates to smooth substantial quarterly volatility, investment being the most volatile component of GDP. On a downward trend since 2014, we again observe a marked reversal, with private nonresidential fixed investment overall, as well as each subcomponent of investment, up over pre-election and pre-TCJA trends. Indeed, if we regress the compound annual growth rate of private nonresidential fixed investment on a linear time trend over the sample period 2009:Q3–2017:Q4 (2017:Q3 for equipment), project this trend into 2018, and reconstruct levels from forecasted growth rates, we find that as of 2018:Q3, overall private nonresidential fixed investment was up $150 billion (5.8 percent) over the trend. Among nonfinancial businesses, overall capital expenditures were up 12.1 percent over the trend.

[Figure 1-4. Morgan Stanley's Capex Plans Index, 2016–18. Sources: Bloomberg; CEA calculations. Note: The Planned Capital Expenditures (Capex Plans) Index tracks what firms plan to spend in coming months; data represent a centered 3-month moving average. TCJA = Tax Cuts and Jobs Act.]
[Figure 1-5. Core Capital Goods Orders, 2012–18. Sources: Census Bureau; CEA calculations. Note: Core goods include nondefense capital goods, excluding aircraft; data represent a centered 3-month moving average, truncating in November 2018.]
[Figure 1-6. Growth in Real Nonresidential Fixed Investment, 2017:Q4–2018:Q3. Sources: Bureau of Economic Analysis; CEA calculations.]
[Figure 1-6 note: The structures and intellectual property products pre-TCJA trends are calculated on the sample 2009:Q3–2017:Q4; the equipment pre-TCJA trend is calculated on the sample 2009:Q3–2017:Q3, because full expensing was retroactive to September 2017. Overall rates are calculated as weighted averages of the structures, equipment, and intellectual property products components.]

Equipment investment, in particular, exhibited a pronounced spike in the fourth quarter of 2017, as both the House and Senate versions of the TCJA bill, which were respectively introduced on November 2 and November 9, stipulated that full expensing for new equipment investment would be retroactive to September 2017. This created a strong financial incentive for companies to shift their equipment investment to the fourth quarter of 2017, so as to deduct new equipment investment at the old 35 percent statutory corporate income tax rate. After the initial spike in the rate of growth of fixed investment, standard neoclassical growth models would predict a return of the growth rate to its pre-TCJA trend, but from a higher, post-TCJA level, with the capital-to-output ratio thereby asymptotically approaching its new, higher steady-state level. More revealingly, considering higher-resolution data at the detailed asset level, we observe that asset types exhibiting larger residuals from an AR(n) step-ahead forecast of the user cost of capital also experienced larger forecast errors for real investment in 2018. Following Cummins, Hassett, and Hubbard (1994), figure 1-7 reports autoregressive forecast errors for each disaggregated equipment investment series against forecast errors for the detailed asset-level user cost of capital, assuming equity financing.
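The construction of the forecast errors plotted in figure 1-7 can be sketched as follows: fit an autoregression to each series' historical growth rates, form a one-step-ahead forecast, and take the residual (actual minus forecast). The sketch below uses an AR(1) fit by least squares and made-up data; the actual exercise uses detailed asset-level investment and user-cost series.

```python
# One-step-ahead forecast error from an AR(1) fit by ordinary least squares.
# The history below is a hypothetical growth-rate series, not actual BEA data.

def ar1_residual(history, actual_next):
    x, y = history[:-1], history[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
           sum((xi - mx) ** 2 for xi in x)
    alpha = my - beta * mx
    forecast = alpha + beta * history[-1]   # one-step-ahead forecast
    return actual_next - forecast           # forecast error (residual)

hist = [0.02, 0.03, 0.025, 0.035, 0.03, 0.04]
resid = ar1_residual(hist, actual_next=0.07)
print(round(resid, 4))
```

A positive residual for investment paired with a negative residual for the user cost, repeated across asset types, generates the downward-sloping scatter described in the text.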
As can be observed in the figure, there is a negative correlation between forecast errors for the user cost of capital and those for investment, consistent with larger declines in the user cost of capital inducing larger increases in demand for capital services.

[Figure 1-7. Forecast Errors for Equipment Investment and the User Cost of Capital Across Asset Types. Sources: Bureau of Economic Analysis; CEA calculations. Note: Residuals from autoregressive forecasts of growth rates of each disaggregated equipment investment series are plotted against residuals from autoregressive forecasts of the percent change in the simplified user cost of capital by asset type.]

Finally, though the projected increase in steady-state output is predominantly a long-run effect deriving from a higher flow of capital services as the economy transitions to a higher steady-state target capital stock, already in 2018 we observe the growth effects of higher investment demand following corporate tax reform and of robust consumer spending following the enactment of the TCJA's individual provisions. During the 34 quarters between the start of the current expansion in 2009:Q3 and the TCJA's enactment in 2017:Q4, the average contribution of real private nonresidential fixed investment to GDP growth was 0.6 percentage point. But in the first three quarters after the TCJA's passage, the contribution of real private nonresidential fixed investment to GDP growth rose to 1.0 percentage point. As a share of GDP, private nonresidential fixed investment in the first three quarters of 2018 attained its second-highest level since 2001. As documented in the 2018 Economic Report of the President, the principal challenge in estimating the effect of changes in corporate and personal income tax rates on economic growth is that the timing of tax changes tends to correlate with cyclical factors.
Specifically, legislators tend to lower tax rates during periods of economic contraction and raise rates during periods of economic expansion, which can negatively bias estimates of the effects of changes in marginal tax rates on investment and output. Two recent empirical approaches to addressing this threat to identification are structural vector autoregression (SVAR) and the use of narrative history to identify exogenous tax shocks; both approaches were reviewed in the 2018 Report, and estimates from this literature were applied to the TCJA. The SVAR approach, which was pioneered by Blanchard and Perotti (2002), identifies tax shocks by utilizing information about fiscal institutions to distinguish between discretionary and automatic or cyclical tax changes. Meanwhile, the narrative approach, which was initiated by Romer and Romer (2010), relies on a textual analysis of tax debates to identify exogenous tax changes with political or philosophical, rather than economic, motivations. More recently, Mertens and Ravn (2013) have developed a hybrid of both approaches that utilizes Romer and Romer's narrative tax shock series as an external instrument to identify structural tax shocks. Using the estimated revenue effects of the TCJA from the Joint Committee on Taxation (JCT 2017), Mertens (2018) applies estimated coefficients from the SVAR and narrative approaches to a tax cut of the TCJA's magnitude. He calculates that effects based on aggregate tax multiplier estimates—by Blanchard and Perotti (2002), Romer and Romer (2010), Favero and Giavazzi (2012), Mertens and Ravn (2012), Mertens and Ravn (2014), and Caldara and Kamps (2017)—imply a cumulative effect on GDP between 2018 and 2020 of 1.3 percent. Applying estimated impacts based on responses to individual marginal tax rates from Barro and Redlick (2011) and Mertens and Montiel Olea (2018), he calculates a cumulative effect by 2020 of 2.1 percent.
Finally, applying estimated effects of disaggregated individual and corporate tax multipliers from Mertens and Ravn (2013), he calculates the cumulative effect on GDP between 2018 and 2020 of individual tax reform to be 0.5 percent, and the cumulative effect of business tax reform to be 1.9 percent. As shown in figure 1-8, actual GDP growth in 2018 was consistent with these estimated effects. Between 2012:Q4 and 2016:Q4, the compound annual growth rate of real GDP averaged just 2.3 percent, slowing to 2.0 and 1.9 percent in 2015 and 2016, respectively. After increasing to 2.5 percent in 2017, GDP was on pace in the first three quarters of 2018 to grow by 3.2 percent over the four quarters of the calendar year, the first time four-quarter growth would have reached that pace since 2004. Moreover, this growth represented a sharp divergence from the trend. Regressing the compound annual growth rate of GDP on a time trend over the pre-TCJA expansion sample period 2009:Q3–2017:Q4, projecting this trend into 2018, and reconstructing levels from forecasted growth rates, we find that as of 2018:Q3, GDP growth in 2018 was up 1.0 percentage point over the trend. Although it is difficult to empirically disentangle the TCJA's effects on growth from the effects of the Trump Administration's other economic policy initiatives to date, particularly deregulatory actions, the estimates reported in chapter 2, "Deregulation That Frees the Economy," of the 2018 Economic Report of the President suggest that these actions likely contributed less than 0.1 percentage point to growth in 2018. We also estimate the TCJA's effect on 2018 growth by calculating the divergence of observed growth from a 2017:Q3 baseline forecast, as discussed in chapter 10 of this Report and chapter 8, "The Year in Review and the Years Ahead," of the 2018 Economic Report of the President. To construct this baseline, we treat the TCJA as an unanticipated shock arriving in the fourth quarter of 2017.
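The trend-divergence calculation described above (regress compound annual growth rates on a linear time trend over the pre-TCJA sample, project the trend forward, and reconstruct levels from projected growth rates) can be sketched as follows. The quarterly growth rates below are synthetic placeholders, not actual GDP data.

```python
# Sketch of the trend exercise: OLS of growth rates on a time trend, projection
# into the post-shock period, and level reconstruction by quarterly compounding.
# All growth rates below are synthetic placeholders.

def fit_linear_trend(y):
    """OLS intercept and slope of y on t = 0, 1, 2, ..."""
    n = len(y)
    t = list(range(n))
    mt, my = sum(t) / n, sum(y) / n
    slope = sum((ti - mt) * (yi - my) for ti, yi in zip(t, y)) / \
            sum((ti - mt) ** 2 for ti in t)
    return my - slope * mt, slope

growth = [1.8, 2.0, 2.1, 1.9, 2.2, 2.3, 2.1, 2.4]   # pre-shock annualized rates
a, b = fit_linear_trend(growth)

trend_level = actual_level = 100.0        # index level at the end of the sample
actual = [3.0, 3.4, 3.2]                  # post-shock annualized growth rates
for h, g in enumerate(actual, start=len(growth)):
    trend_g = a + b * h                             # projected trend growth
    trend_level *= (1 + trend_g / 100) ** 0.25      # quarterly compounding
    actual_level *= (1 + g / 100) ** 0.25

gap = 100 * (actual_level / trend_level - 1)
print(f"level gap versus trend after 3 quarters: {gap:.2f} percent")
```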
Adapting the approach of Fernald and others (2017), we then decompose pre-2017:Q4 growth rates into trend, cyclical, and higher-frequency components—using Okun's law and a partial linear regression model with a frequency filter—to estimate the long-run growth rate. We then estimate an unrestricted vector autoregressive model (VAR) on detrended growth rates through 2017:Q3 of real GDP, the unemployment gap, the labor force participation rate, real personal consumption expenditures, and the yield spread of 10-year over 3-month Treasuries. We determine the optimal lag length by satisfaction of the Akaike and Hannan-Quinn information criteria. After estimation and VAR forecasting, we then add back the estimated long-run trend. Relative to this baseline forecast, observed output growth was up 1.4 percentage points at a compound annual rate as of 2018:Q3. Figure 1-9 compares these two estimated effects of the TCJA with the SVAR and narrative estimates reported by Mertens (2018). Another approach to evaluating the TCJA's effect on growth is to compare the Congressional Budget Office's (CBO) final, pre-TCJA 10-year economic projection with the post-TCJA actuals. In June 2017, the CBO forecasted real GDP growth of 2.0 percent in 2018, with real private nonresidential fixed investment growing by just 3.0 percent. If GDP growth during the four quarters of 2018 was instead 3.2 percent, as the U.S. economy was on pace to achieve through 2018:Q3, and if it were then to immediately revert to the CBO's June 2017 forecast, in 2027 economic output would be 1.2 percent higher than projected. If GDP were instead to grow by 3.2 percent in 2018 and by the CBO's upwardly revised August 2018 forecast of 2.8 percent in 2019, and then revert to the pre-TCJA projection, in 2027 economic output would be 2.5 percent higher than projected, in line with the CEA's initial estimates.
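The forecasting machinery behind the 2017:Q3 baseline can be illustrated in stylized form: fit a VAR by least squares on detrended series through the pre-shock quarter, iterate the forecast forward, and add the long-run trend back. The sketch below uses a two-variable VAR(1) on synthetic data; the CEA's actual system uses five variables and information-criterion lag selection.

```python
import numpy as np

# Stylized VAR(1) baseline forecast on synthetic detrended data. This is an
# illustration of the mechanics only, not the CEA's actual five-variable model.

rng = np.random.default_rng(0)
T, k = 60, 2
A_true = np.array([[0.5, 0.1], [0.0, 0.6]])
X = np.zeros((T, k))
for t in range(1, T):
    X[t] = X[t - 1] @ A_true.T + rng.normal(scale=0.5, size=k)

# Least-squares VAR(1): regress X_t on X_{t-1} (no constant; data are detrended).
Y, Z = X[1:], X[:-1]
A_hat = np.linalg.lstsq(Z, Y, rcond=None)[0].T

trend_growth = 2.0   # assumed long-run trend growth, percent (placeholder)
x = X[-1].copy()
forecasts = []
for _ in range(4):                          # iterate four quarters ahead
    x = A_hat @ x
    forecasts.append(trend_growth + x[0])   # add trend back to the first variable

print([round(f, 2) for f in forecasts])
```

Because the estimated system is stationary, the detrended forecast decays toward zero and the baseline converges back to the assumed long-run trend, which is the intended behavior of the detrend-forecast-retrend design.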
Data available through 2018:Q3 therefore suggest that estimates from the Tax Policy Center (0.0 percent), the Penn-Wharton Budget Model (0.6–1.1 percent), the JCT (0.7 percent on average over 10 years, implying a 10-year level effect of 1.2 percent), and the Tax Foundation (1.7 percent) may constitute lower bounds. The preliminary evidence is, however, consistent with a more recent analysis by Lieberknecht and Wieland (2018), who employ a two-country dynamic stochastic general equilibrium model to estimate a long-run GDP effect of 2.6 percent. An important implication of higher-than-projected growth is higher Federal government revenue. The JCT estimated the TCJA's conventional revenue cost at $1.5 trillion over 10 years, and its dynamic estimate was $1.1 trillion after accounting for higher revenue due to economic growth, net of increased interest payments. If the TCJA's effect on economic growth exceeds the JCT's estimate, the actual long-run revenue cost may be lower.

[Figure 1-8. Growth in Real GDP, 2012:Q4–2018:Q3. Sources: Bureau of Economic Analysis; CEA calculations. Note: Data represent the compound annual growth rate over the given quarters; the 2016:Q4–2017:Q4 pre-election trend projection is calculated for 2009:Q3–2016:Q4, and the 2017:Q4–2018:Q3 pre-TCJA projection is calculated for 2009:Q3–2017:Q4.]

The cumulative effect of higher near-term growth on revenue can be illustrated by calculating the difference between the CBO's final, pre-TCJA (June 2017) 10-year projections of growth and revenue, advancing from 2017:Q4 actuals, and the CBO's final, pre-TCJA 10-year economic projections updated with 2018 actual GDP data and April 2018 CBO revenue projections. Fiscal year revenue-to-GDP projections are converted to calendar years by assigning 25 percent of the subsequent fiscal year to the current calendar year.
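The accounting in this illustration can be sketched in a few lines: grow nominal GDP along an "actual" path and a baseline path, apply the projected revenue-to-GDP ratio to each, and cumulate the difference. All baseline figures below are placeholders, not the CBO's actual projections, and the fiscal-to-calendar conversion is omitted for simplicity; only the 2018 (5.6 percent) and 2019 (5.3 percent) growth rates are taken from the simulation described in the text.

```python
# Sketch of the revenue-feedback accounting. Baseline growth, the revenue/GDP
# ratio, and starting GDP are hypothetical placeholders, not CBO projections.

years = list(range(2018, 2028))
baseline_growth = {y: 4.0 for y in years}    # placeholder nominal growth, percent
actual_growth = dict(baseline_growth)
actual_growth[2018] = 5.6                    # 2018:Q1-Q3 annualized pace
actual_growth[2019] = 5.3                    # Administration projection for 2019
revenue_share = {y: 0.17 for y in years}     # placeholder revenue-to-GDP ratio

def revenue_path(growth):
    gdp = 20_000.0                           # placeholder starting GDP, $ billions
    revenues = []
    for y in years:
        gdp *= 1 + growth[y] / 100
        revenues.append(revenue_share[y] * gdp)
    return revenues

extra = sum(a - b for a, b in zip(revenue_path(actual_growth),
                                  revenue_path(baseline_growth)))
print(f"cumulative extra revenue through 2027: ${extra:,.0f} billion")
```

Because growth reverts to the baseline path while the level of GDP stays permanently higher, the revenue gap persists and cumulates through 2027, which is why a two-year growth surprise can offset a meaningful share of a 10-year static cost.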
First, we assume that actual nominal GDP growth in the four quarters of 2018 achieved its 2018:Q1–2018:Q3 annualized pace of 5.6 percent. Second, we assume that actual nominal GDP growth in 2019 achieves the Administration's current projection of 5.3 percent. Third, we assume that, thereafter, growth reverts to the pre-TCJA trajectory projected by the CBO. Fourth, we assume that the ratio of revenue to GDP was as projected by the CBO in April 2018. In this simulation, Federal tax revenue would be about $500 billion higher over the 10 years through 2027. This macroeconomic feedback alone would thereby offset more than one-third of the conventional cost of the law. Because increased growth in calendar year 2018 was likely augmented by other legislative and Administration policies, as well as nonpolicy economic factors, we also estimate the likely macroeconomic feedback of higher growth by applying the estimated coefficients from Romer and Romer (2010) and Mertens (2018) to GDP growth in 2018, 2019, and 2020, and assuming April 2018 revenue-to-GDP projections. This approach yields an estimated cumulative revenue effect of between $140 and $190 billion over 3 years, or between $480 and $640 billion over 10 years if the level effect persists. Excluding Mertens's (2018) international estimations, which treat deemed repatriation—an effective reduction in the implicit tax liability of U.S. multinational enterprises—as a tax increase, the approach suggests a cumulative revenue feedback over 10 years of $810 billion.

[Figure 1-9. Structural and Narrative Estimates versus Actuals. Sources: Mertens (2018); CEA calculations. Note: VAR = vector autoregression. The 2009:Q3–2017:Q4 trend is estimated on compound annual growth rates and levels reconstructed from projected rates. The CEA's 2017:Q3 baseline is estimated using a VAR and statistical frequency filter, as described in chapter 10 of this Report. Mertens (2018) compiles references to 10 estimates from other papers, which are shown in this figure; Mertens and Montiel Olea (2018) provide two estimates from the same paper.]

Because these empirically estimated growth effects only extend for three years, whereas the increased flow of capital services as the economy transitions to a higher steady-state capital-to-labor ratio is a long-run effect, the corresponding revenue effects may constitute a lower bound (box 1-1). Because the TCJA was passed by Congress under the budget reconciliation process, the bill's conventional revenue cost, as estimated by its official scorer, the JCT, could not exceed $1.5 trillion over 10 years. As a result, several provisions of the TCJA are scheduled to expire by the end of fiscal year 2027. Specifically, many of the provisions affecting the personal income tax code are due to expire on December 31, 2025, whereas among corporate income tax provisions, bonus depreciation, particularly for equipment investment, is set to begin phasing out on January 1, 2023, and to fully phase out on December 31, 2026. Using a neoclassical growth model, Barro and Furman (2018) estimate that making the TCJA's temporary business provisions permanent would raise long-run GDP by 2.2 percentage points above their baseline, law-as-written estimate, and by 0.8 percentage point over 10 years. Using a more

Box 1-1.
The Mortgage Interest Deduction and the Tax Cuts and Jobs Act

Before the passage of the Tax Cuts and Jobs Act, discussions of potential changes in the mortgage interest deduction (MID) raised concerns about possible future effects on home values and homeownership (NAR 2017). The National Association of Realtors commissioned a study that forecasted a 10.2 percent decline in home prices in the short run resulting from proposals in the TCJA that included, at the time, changes to the MID (PwC 2017). The TCJA did not eliminate the MID, but it did reduce the maximum mortgage eligible for the deduction by $250,000 (CEA 2018). In addition, the TCJA doubled the standard deduction, which was projected to reduce the number of tax units claiming the MID and increase the number utilizing the standard deduction (CEA 2017b). The MID is a regressive subsidy, conferring a greater benefit on those with mortgages on more expensive homes, in part because individuals with higher incomes are more likely to itemize their deductions rather than opt for the standard deduction. The incentive the MID provides for more expensive homes has ramifications for the housing market. Earlier CEA analyses and reviews of the literature note that the MID is not associated with higher homeownership rates, even though that was a central goal for maintaining the policy (CEA 2017b). Furthermore, given the incentive for larger and/or more expensive home purchases, the MID inflates housing prices. The impact of the MID on housing prices is found to vary across housing markets, depending on the elasticity of housing supply. A market with a more inelastic supply would face greater downward pressure on housing prices from an elimination of the MID than would a market with elastic supply.
Furthermore, earlier CEA analyses comparing homeownership rates in the United States with those in Canada and other countries belonging to the Organization for Economic Cooperation and Development found the MID to be "neither necessary nor sufficient" for relatively higher homeownership rates (CEA 2017b, 7). The final TCJA legislation, which was signed into law in December 2017, did not eliminate the MID—though, as noted above, both the reduction in the amount of mortgage debt for which interest can be deducted and the doubling of the standard deduction would result in fewer tax filers utilizing itemized deductions and the MID. Given this policy change, examining the reaction of both homeownership rates and housing prices across the country and across different markets can provide insight into the predicted effects detailed above. In the first 11 months of 2018, though housing prices continued to increase, the pace of housing price growth ticked slightly down. In the first three quarters of 2018, homeownership rates slightly increased. Housing prices, measured by a number of housing price indices, have increased nationally since 2012. In the first 11 months of 2018, real house price indices continued to increase, though the pace of annual growth slowed slightly. The 12-month percentage change in three of the four real house price indices displayed in figure 1-i decreased in 2018, though it remained positive. At the city level, the reaction of housing prices varied in the first three quarters of 2018. As noted above, how housing prices respond to a change in use of the MID depends on the elasticity of housing supply. In markets where housing supply is less responsive, such as San Francisco, housing prices would be expected to react more to changes in use of the MID than in a housing market with a less-regulated supply, such as Dallas.
Though the real housing price indices in both San Francisco and Dallas continued to increase in the first three quarters of 2018, the annual change in Dallas's real housing price indices continued on the downward trend that was evident before the TCJA's passage. The pace of annual change in San Francisco, however, quickened in the first three quarters of 2018, after the TCJA's passage (figure 1-ii). Contrary to a report commissioned by the National Association of Realtors in May 2017, which predicted that MID reforms similar to those ultimately enacted by the TCJA would cause a short-run decline in national home prices of 10.2 percent, housing prices have increased in some markets (PwC 2017). Homeownership rates nationally had trended down for several years, though they saw a reversal in 2016, when rates began to move upward for the first time since 2004. After the TCJA's passage, homeownership continued to increase nationally through the first three quarters of 2018 (figure 1-iii).

[Figure 1-i. 12-Month Percentage Change in National Real Housing Price Indices, 2015–18. Sources: CoreLogic; Standard & Poor's; Zillow; Federal Housing Finance Agency (FHFA); Bureau of Economic Analysis; CEA calculations.]
[Figure 1-ii. Four-Quarter Percentage Change in Regional Real Housing Price Indices (San Francisco and Dallas), 2015–18. Sources: Federal Housing Finance Agency; Bureau of Economic Analysis; CEA calculations.]
[Figure 1-iii. National Homeownership Rate, 2015–18. Source: Census Bureau.]

Faster economic growth resulting from the TCJA would be expected to shift the demand curve for housing outward. U.S.
fiscal policy continues to implicitly subsidize owner-occupied housing by excluding imputed rental income from income taxation and through direct and indirect financial support of government-sponsored mortgage enterprises, as discussed in chapter 6 of this Report. User cost calculations reported by Poterba and Sinai (2008) suggest that the implicit subsidy of untaxed imputed rent is 1.5 times that of the MID, with the magnitude of the differential impact increasing in household income. Feldman (2002) and Passmore, Sherlund, and Burgess (2005), meanwhile, find that government sponsorship of the Federal National Mortgage Association and the Federal Home Loan Mortgage Corporation lowers mortgage rates by 7 to 50 basis points. In a richly specified, two-country dynamic stochastic general equilibrium model, Lieberknecht and Wieland (2018) find that making the temporary provisions permanent would raise the long-run growth effect from 2.6 to 5.7 percent. We can also estimate the effect on output of making permanent the TCJA’s provisions currently set to expire in 2025 by calculating the static budget impact in 2026 and 2027 and applying the estimated impact multipliers reported by Mertens (2018). Specifically, calculating the change from 2025 in the JCT’s (2017) static revenue estimate for 2026 and 2027, dividing by the Administration’s projection for GDP in 2026 and 2027, reversing the sign, and applying the estimated tax multipliers indicates a cumulative impact of up to 0.4 percentage point by the end of 2027.

Labor Market Effects

In the 2018 Economic Report of the President, the CEA demonstrated that, due to the high mobility of capital relative to labor, the incidence of corporate income taxation is increasingly borne by labor, though there is an important distinction between short- and long-run economic incidence.
In the short run, increases (or decreases) in corporate income taxation are largely borne by current owners of corporate capital, through a decline (rise) in asset values, and by investors, through lower (higher) after-tax rates of return. However, the CEA estimated that in the long run, labor bears a majority of the burden of corporate income taxation, as an increase (decrease) in the effective tax rate on capital income from marginal investment lowers (raises) steady-state demand for capital services. The consequent decline (rise) in the capital-to-labor ratio lowers (raises) labor productivity and thus depresses (lifts) labor compensation. Consistent with this investment channel, Giroud and Rauh (2018), employing Romer and Romer’s (2010) narrative approach to estimate the effects of State-level corporate income tax changes, find short-run statutory corporate tax elasticities of both employment and establishment counts of about –0.5, and elasticities of –1.2 over a 10-year horizon. Moreover, a broad survey of empirical studies of the incidence of corporate income taxation, reported in the 2018 Economic Report of the President, indicates that workers ultimately bear between 21 and 75 percent of the economic burden of corporate taxation, with more recent studies generally constituting the upper bound of this range, reflecting growing international capital mobility. The studies that were cited suggest a corporate income tax elasticity of wages of between –0.1 and –0.5, with estimated tax semielasticities from –0.4 to as large as –2.4. Applying these estimated elasticities to the TCJA, the CEA calculated that a permanent 14-percentage-point reduction in the Federal statutory corporate tax rate would raise average annual household income by between $2,400 and $12,000 in the long run, with an average estimate of $5,500. Dropping the two lowest and two highest estimates suggests a tighter range, between $3,400 and $9,900.
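As a rough illustration of the arithmetic involved (a stylized sketch, not the CEA’s full calculation, which incorporates additional adjustments and household income data), a tax semielasticity maps the TCJA’s 14-percentage-point rate cut into a percent wage change as follows:

```python
def wage_change_pct(semielasticity, rate_change_pp):
    """Percent change in wages implied by a tax semielasticity, defined
    here as the percent wage change per 1-percentage-point change in
    the statutory corporate tax rate."""
    return semielasticity * rate_change_pp

# The TCJA cut the Federal statutory corporate rate by 14 percentage points.
cut_pp = -14.0

# Semielasticity range cited in the text: -0.4 to -2.4.
low = wage_change_pct(-0.4, cut_pp)   # roughly a 5.6 percent wage gain
high = wage_change_pct(-2.4, cut_pp)  # roughly a 33.6 percent wage gain
```

Because both the semielasticity and the rate change are negative, the implied wage effect of the cut is unambiguously positive across the cited range.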
Although these are long-run, estimated wage effects resulting primarily from a gradual transition to a new steady state with a higher capital-to-labor ratio, even in the short term, we would expect to observe forward-looking firms revising their labor market expectations. Models of rent sharing indicate that, in the short run, workers stand to benefit from increased profits accruing to their parent employer through a bargaining channel. This model does not make any predictions about changes in employment levels. Arulampalam, Devereux, and Maffini (2012) present a model of rent sharing in which changes in the corporate tax rate, expensing provisions, and overall marginal tax rates (from various other tax provisions) all serve to affect the wage. The model supposes a single union representing all wage earners. How the model’s predictions would change under different bargaining arrangements is not clear, though in each case, the signs of the first derivative on corporate tax rates, longer depreciation schedules, and overall marginal tax rates are all negative, such that the TCJA is predicted to unambiguously increase workers’ wages through the bargaining channel. This theory accords with the empirical evidence, first noted by Krueger and Summers, that “more profitable industries tend to use some of their rents to hire better quality labor, and share some of their rents with their workers” (Krueger and Summers 1987, 17; also see Krueger and Summers 1988). More recent studies of intraindustry wage differentials confirm that rent sharing remains a feature of the U.S. labor market (Barth et al. 2016; Card et al. 2016; Song et al. 2019). In the results of the research by Arulampalam, Devereux, and Maffini (2012), the wage is roughly equal to the weighted average of the worker’s outside wage option and some share of the firm’s location-specific profit.
Changes in expensing provisions affect the profits over which employers and employees bargain, even in the absence of changes in the target capital stock—as do other adjustments outside the corporate income tax rate that serve to affect the firm’s tax liability. Arulampalam and her colleagues note that if cost reductions induced by the tax law are fully passed on to consumers in the output market, the profits over which to bargain are unchanged. Finally, Arulampalam and colleagues’ result highlights the role of the corporate tax rate itself, τ, in the wage bargain. Higher values of τ raise the value of the firm’s outside option (here, relocation to another tax jurisdiction) and lower bargained wages. Lowering τ reduces the value of the firm’s outside option (in this case, another tax jurisdiction) and thus increases worker wages. Each of these effects is “immediate,” manifesting in higher worker wages as soon as the impact of changes in corporate taxes on firm profits is known with some certainty. Thus, the spate of bonus and increased wage announcements immediately after the TCJA’s enactment, reported in box 1-2, is consistent with the rent-sharing model of worker wages. It is also consistent with survey data that were gathered immediately after the TCJA’s passage. Figures 1-10 and 1-11 report the net percentage of NFIB survey respondents reporting plans to raise worker compensation and increase employment over the next three months, expressed as a three-month centered moving average to smooth random monthly volatility. As with planned capital expenditures, the survey results indicate two marked upward shifts in compensation and hiring plans—the first after the election of President Trump, and the second after the TCJA’s passage.
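The three-month centered moving average used to smooth the survey series can be computed in a few lines; the readings below are hypothetical, not actual NFIB data:

```python
def centered_ma3(series):
    """Three-month centered moving average of a monthly series; the first
    and last observations are dropped because they lack a full window."""
    return [sum(series[i - 1 : i + 2]) / 3.0 for i in range(1, len(series) - 1)]

# Hypothetical net-percentage survey readings for six consecutive months.
smoothed = centered_ma3([18, 21, 24, 22, 25, 27])
```

A centered average of this kind trades two months of coverage at the endpoints for a smoother series in the interior, which is why the plotted figures truncate at the final full window.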
In August 2018, the net share of independent businesses reporting plans to increase employment in the next three months set a new all-time record, whereas in October 2018, the net share of independent businesses reporting plans to raise worker compensation in the next three months broke a 28-year record to set a new all-time high. Reinforcing the private survey data, and consistent with the research of Giroud and Rauh (2018), data from the Bureau of Labor Statistics’ Job Openings and Labor Turnover Survey also show a sharp uptick in labor demand after the TCJA’s passage. Figure 1-12 reports total private job openings from 2014 through 2018. After leveling off in 2016 at between about 5 and 5.5 million, private job openings surged after the TCJA’s passage, topping 6.5 million by August 2018. In addition, during the entire pre-TCJA expansion, real nonproduction bonuses per hour grew at a compound annual rate of 5.4 percent. Since the TCJA came into effect, they have risen $150 per worker on an annual basis, or by 9.3 percent. Available labor earnings data are also consistent with the CEA’s projections. Relative to a time trend estimated over the entire pre-TCJA expansion sample period (2009:Q3–2017:Q4), as of 2018:Q3, real disposable income per household was up $640 over the trend. Expressed as a perpetual annuity, this corresponds to a lifetime pay raise of about $21,000 for the average household, assuming the real discount rate currently implied by Shiller’s cyclically adjusted earnings-to-price ratio for the Standard & Poor’s (S&P) 500 of 3.1 percent. Across all households, this constitutes a $2.5 trillion boost to household income.

Box 1-2. Corporate Bonuses, Wage Increases, and Investment since the TCJA’s Passage

In a dynamic, competitive economy like that of the United States, firms compete for workers. And a robust academic literature, pioneered by one of President Obama’s CEA chairs, Alan Krueger, shows that more profitable employers pay higher wages.
Why? Because a firm that attempts to pay a worker less than he or she is worth will quickly lose the worker to a competitor. In a tight labor market, wage bargaining models predict that firms will respond to a profits windfall by raising wages and bonuses to attract and retain talent. The CEA has already tallied 645 companies that have offered bonuses and/or increased retirement contributions since the TCJA was enacted. The total number of workers receiving a bonus or increased retirement contribution now stands at over 6 million, with an average bonus size of $1,154 (figure 1-iv). Additional workers are seeing higher take-home pay, given that nearly 200 companies have announced increases in wages, with 102 of these firms announcing minimum wage increases. Walmart, the Nation’s largest private employer, has announced an increase in the starting wage of its workers of $2 an hour for the first six months and $1 thereafter. For a full-time employee working 40 hours a week, this means up to $3,040 a year in additional pay. These pay increases are for those earning Walmart’s minimum wage, so, as a share of income, the gains are substantial—at least 16 percent. Many other employers have done the same as Walmart—including BB&T, the 11th-largest bank by assets in the United States, where full-time workers who are paid the bank’s minimum wage will see a $6,000 increase in their annual income. Nearly 15 percent of firms announcing minimum wage hikes have provided increases of at least $4,000. Hard-working Americans are also seeing savings in their electricity bills thanks to the TCJA. More than 130 companies have pledged to pass tax savings on to their customers in the form of reduced rates—savings that will reach millions of customers.
The President’s promise to lower corporate taxes and reduce red tape has led to a surge in investment by American businesses, and since the TCJA became law, the CEA has tallied over $220 billion in new corporate investment announcements attributable to it. Likewise, the March 2018 Morgan Stanley composite Planned Capital Expenditures (Capex Plans) Index marked a record high in a series that began 13 years ago. As discussed earlier in this chapter, the official investment statistics show that this investment boom is already taking hold. This is welcome news; according to the CEA’s calculations, a return to the historical rate of capital deepening in the United States would give households a boost of $4,000 in annual wage and salary income by 2026. The bottom line is that the TCJA’s enactment in December 2017 gave a much-needed boost to American workers, who in recent years have endured chronic underinvestment due to a corporate tax code that discouraged domestic capital formation. With investment growth now accelerating in response to the corporate tax cuts, we should consider the recent spate of bonus and wage hike announcements as merely a down payment on a long-overdue raise for American households.

[Figure 1-iv. Distribution of Announced Tax Reform Bonuses, 2018 (number of companies by bonus size). Sources: Americans for Tax Reform; CEA calculations. Note: This figure does not encompass all companies that have announced tax reform benefits; benefits related to retirement, wage increases, and ambiguous bonus announcements are excluded.]

As discussed above, this effect is expected to grow over time through increased capital deepening, raising capital per worker, labor productivity, and wages.
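The perpetual-annuity conversion behind the lifetime pay figure cited above can be reproduced directly; the household count used to aggregate is an illustrative assumption, not a number given in the text:

```python
# Over-trend gain in real disposable income per household and the
# Shiller CAPE-implied real discount rate, both as cited in the text.
over_trend_gain = 640.0   # dollars per household, 2018:Q3
discount_rate = 0.031     # real discount rate

# A perpetuity paying `gain` each year is worth gain / rate today.
lifetime_raise = over_trend_gain / discount_rate  # about $21,000

# Aggregating across an assumed (illustrative) 122 million households
# yields a total on the order of $2.5 trillion.
aggregate_boost = lifetime_raise * 122e6
```

The perpetuity formula is the limiting case of a discounted annuity as the horizon grows, which is why a modest annual gain translates into a large lifetime equivalent at a 3.1 percent real rate.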
Though long-run capital deepening is expected to further raise real disposable personal income, this effect will be partially offset if the personal income tax cuts currently scheduled to expire after 2025 are not extended or made permanent through new legislation. Figure 1-13 reports compound annual growth rates in real median weekly earnings of full-time wage and salary workers and real average weekly earnings of production and nonsupervisory employees in manufacturing since the TCJA’s enactment, relative to the recent trend. On an annualized basis, real median usual earnings for full-time wage and salary workers were up $805 over the trend, while real average earnings for production and nonsupervisory employees in the manufacturing sector specifically were up $493 (box 1-2). In the longer run, as articulated by the CEA (2017a) and in the 2018 Economic Report of the President, we expect wage gains to be driven primarily by increased investment raising the target capital stock, and thus the steady-state level of capital per worker and, consequently, labor productivity.

[Figure 1-10. Net Percentage of Survey Respondents Planning to Raise Worker Compensation in the Next 3 Months, 2016–18. Sources: National Federation of Independent Business (NFIB); CEA calculations. Note: Data represent a centered 3-month moving average, truncating in December 2018.]

[Figure 1-11. Net Percentage of Survey Respondents Planning to Increase Employment, 2016–18. Sources: National Federation of Independent Business (NFIB); CEA calculations. Note: Data represent a centered 3-month moving average, truncating in December 2018.]

[Figure 1-12. Total Private Job Openings, 2014–18. Sources: Bureau of Labor Statistics; CEA calculations. Note: Data represent a centered 3-month moving average, truncating in December 2018.]
Already in 2018, we observe evidence of this mechanism operating. During the pre-TCJA expansion in 2009:Q3–2017:Q4, growth in business sector labor productivity averaged 1.0 percent, compared with a pre-2008 postwar average of 2.5 percent. Growth in nonfarm business sector labor productivity averaged 1.1 percent during the pre-TCJA expansion, compared with a pre-2008 postwar average of 2.3 percent. In contrast, in the first three quarters of 2018, business sector labor productivity grew at an annual rate of 2.0 percent—double the rate of the pre-TCJA expansion. Labor productivity in the nonfarm business sector grew at an annual rate of 1.8 percent. Finally, as noted in the 2018 Economic Report of the President, Keane and Rogerson (2012, 2015) demonstrate that because incremental human capital acquired through employment raises expected future earnings—the net present value of which varies inversely with age—older and relatively more experienced workers can be expected to have larger labor supply responses to changes in marginal personal income tax rates than younger, less experienced workers. Indeed, we observe this effect in the data. Regressing the employment-to-population ratio of over-55-year-olds on a linear time trend fully interacted with a binary variable for post-TCJA over the July 2009–December 2018 sample period, we estimate a positive coefficient on the interaction term, and we can reject the null hypothesis of no slope change with 95 percent confidence.

[Figure 1-13. Over-Trend Real Labor Compensation Growth, 2018 (annualized 2012 dollars): real median usual weekly earnings of full-time wage and salary workers, $804.7; real average weekly earnings of manufacturing production and nonsupervisory employees, $492.6. Sources: Bureau of Labor Statistics; Bureau of Economic Analysis; CEA calculations. Note: The trend is calculated for 2009:Q3 through 2017:Q4; annualized 2012 dollars assume a 52-work-week year.]

In
contrast, we cannot reject the null hypothesis with a similar level of confidence for other age cohorts, which suggests that the TCJA may have had a specific, positive effect on labor force participation among near-retirement and retirement-age workers at the margin. Although there is some evidence (e.g., Blau and Robins 1989; Whittington 1992; and Haan and Wrohlich 2011) that expansion of the Child Tax Credit may positively affect the long-run potential labor supply through the fertility channel, the data that are currently available do not permit evaluation of this hypothesis. However, there is also evidence (e.g., Blau and Robins 1989; Whittington 1992; Averett, Peters, and Waldman 1997; and Haan and Wrohlich 2011) of positive labor supply responses among females to decreases in the effective cost of child care through public subsidies. Consistent with this literature, female labor force participation among those age 25–34 years rose 0.9 percentage point in 2018—2.1 percentage points above the trend during the period 2009:Q3–2017:Q4. In contrast, overall female labor force participation rose 0.5 percentage point (1.3 percentage points over the trend), while male labor force participation among those age 25–34 rose just 0.3 percentage point (0.7 percentage point above the trend). The elimination of personal exemptions may have partially offset any maternal-specific labor supply effects of the Child Tax Credit’s expansion, though this offsetting effect would have been mitigated by the near doubling of the standard deduction.
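The trend-break test described above (a linear time trend fully interacted with a post-TCJA indicator) can be sketched with ordinary least squares; the monthly series below is simulated purely for illustration and is not actual employment-to-population data:

```python
import numpy as np

rng = np.random.default_rng(0)

# 114 simulated months spanning July 2009-December 2018, with the
# post-TCJA indicator switching on for the final 12 months.
n = 114
t = np.arange(n, dtype=float)
post = (t >= 102).astype(float)

# Assumed data-generating process: a mild upward trend that steepens
# by 0.02 point per month after the break, plus noise. Synthetic only.
y = 38.0 + 0.01 * t + 0.02 * (t - 102) * post + rng.normal(0, 0.05, n)

# Design matrix: intercept, trend, post dummy, and trend x post interaction.
X = np.column_stack([np.ones(n), t, post, t * post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

interaction_coef = beta[3]  # estimated post-TCJA change in the trend slope
```

In this specification, the coefficient on the trend-by-dummy interaction captures the change in slope after the break, which is the quantity the text tests against a null of no slope change.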
International Developments

In the 2018 Economic Report of the President, the CEA reported that an additional margin along which changes in corporate income tax rates can affect economic growth is through the propensity for multinational enterprises to engage in profit shifting across international tax jurisdictions. One technique for effecting such profit shifts is the use of international transfer pricing of intellectual property assets between U.S. multinational enterprises and their subsidiaries in lower-tax jurisdictions. Though transfer pricing is intended by tax authorities to be conducted on an “arm’s length,” transactional basis, in practice the pricing of relatively untraded or otherwise illiquid proprietary intellectual property is often opaque, with the result that firms may systematically underprice the value of the transferred asset. Guvenen and others (2017) estimate that such profit shifting by multinational enterprises results in substantial U.S. economic activity being imputed to overseas affiliates, and therefore has been understating the United States’ GDP, particularly since the 1990s. These researchers correct for this mismeasurement by reweighting the consolidated firm profits that should be attributed to the United States by apportioning profits according to the locations of labor compensation and sales to unaffiliated parties. Applying these weights to all U.S.-based multinational enterprises and aggregating to the national level, the authors calculate that in 2012, about $280 billion in official foreign profits could have been properly attributed to the United States. Importantly, the 2018 Economic Report of the President documented that the propensity to engage in international profit shifting is highly responsive to effective marginal corporate income tax rate differentials.
For example, Hines and Rice (1994) estimate a tax semielasticity of profit shifting of –2.25, indicating that a 1-percentage-point decrease in a country’s corporate tax rate is associated with an increase of 2.25 percent in reported corporate income. Before the TCJA, the United States had one of the highest statutory corporate income tax rates among the countries that belong to the Organization for Economic Cooperation and Development, and U.S. multinational enterprises therefore faced strong incentives to report profits in lower-tax jurisdictions. Hines (2010), Phillips and others (2017), and Zucman (2018) each rank the top 10 jurisdictions they quantitatively identify as tax havens. In these rankings, 8 economies—Bermuda, Hong Kong, Ireland, Luxembourg, the Netherlands, Singapore, Switzerland, and the U.K. Caribbean islands—appear on all three lists. As of 2017, these 8 jurisdictions, with a combined population of just 0.6 percent (44 million) of the world’s population and 3.2 percent of global output, accounted for 43 percent of the United States’ direct investment abroad position, on a historical cost basis. After the TCJA’s passage, in the first two quarters of 2018, U.S. direct investment in these 8 jurisdictions declined by $200 billion (box 1-3).

The “Deemed Repatriation” of Accumulated Foreign Earnings

In addition to reduced incentives to shift corporate earnings on a flow basis, the TCJA also included provisions designed to incentivize the repatriation of past earnings previously held abroad. In particular, the TCJA imposed a one-time tax, which it termed “deemed repatriation,” on past, post-1986 earnings that were being held abroad, regardless of whether these earnings are repatriated.
With a tax of 15.5 percent on earnings representing liquid assets such as cash and 8 percent on earnings representing illiquid, noncash assets, payable over eight years, deemed repatriation was intended to incentivize the reallocation of past corporate earnings from investment in low-yield assets in low-tax jurisdictions to real investment in U.S.-based fixed assets. Indeed, on a directional basis, outbound U.S. direct investment consequently declined by $148 billion in the first three quarters of 2018, as U.S. multinational companies redirected investment toward the domestic economy. Although the precise volume of total accumulated U.S. corporate earnings held abroad is difficult to estimate, we can calculate an approximation by summing the net flow of earnings reinvested abroad since 1986—as reported in table 6.1 of the Bureau of Economic Analysis’ International Transactions Accounts—through 2017. This calculation suggests that a maximum cumulative total of $4.3 trillion was held abroad by U.S. multinational enterprises as of 2017:Q4. Of this sum, $571 billion, or 13 percent, was repatriated in the first three quarters of 2018 alone, including both the flow of current earnings and the distribution of past earnings. The trend in the volume of quarterly repatriations through 2018:Q3 suggests that this pace can be expected to abate in 2019. Although the distribution of past earnings between cash and noncash investments abroad is similarly difficult to assess, Credit Suisse (2015) recently estimated that 37 percent of overseas earnings of nonfinancial S&P 500 companies were held in the form of cash. The 43 percent share of the U.S. direct investment position accounted for by the eight small jurisdictions identified by Hines (2010), Phillips and others (2017), and Zucman (2018) as tax havens is therefore consistent with the Credit Suisse estimate.
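As a check on these magnitudes, the upper-bound revenue implied by the rates and stocks just described can be computed directly (a back-of-the-envelope sketch using only figures from the text):

```python
# Accumulated foreign earnings and the estimated cash share, as in the text.
stock = 4300.0     # $ billions held abroad as of 2017:Q4
cash_share = 0.37  # Credit Suisse (2015) estimate

# Deemed repatriation rates: 15.5 percent on cash, 8 percent on noncash.
cash_tax = stock * cash_share * 0.155
noncash_tax = stock * (1 - cash_share) * 0.08

# Upper-bound revenue before credits for foreign taxes paid;
# on the order of $460 billion.
upper_bound = cash_tax + noncash_tax
```

Because the $4.3 trillion stock is itself a maximum cumulative total, the result should be read as an extreme upper bound rather than a revenue projection.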
Assuming a 37 percent cash share of a $4.3 trillion stock, deemed repatriation could raise as much as $460 billion in additional tax revenue by 2026, before accounting for credits for foreign taxes paid. This constitutes an extreme upper-bound estimate of potential revenue from deemed repatriation, because the cumulated flow of reinvested earnings may include defunct firms and/or firms that have since been acquired by other foreign-based firms. But there are also reasons to expect that the JCT’s and the Bureau of Economic Analysis’s (BEA’s) estimates of $340 billion and $250 billion, respectively, may be conservative. First, data revisions since the JCT and BEA estimations, as well as the inclusion of reinvested earnings in 2017:Q4, yield a substantially larger tax base for the deemed repatriation tax. Second, private sector estimates (Credit Suisse 2015) suggest calculations based on the

Box 1-3. The TCJA’s Provisions Shift the United States toward a Territorial System of Taxation

Accompanying the substantial reduction in the U.S. corporate tax rate as part of the Tax Cuts and Jobs Act were provisions that shifted the United States away from a worldwide system of taxation and toward a territorial system. The provisions of the Global Intangible Low-Taxed Income (GILTI), the Foreign Derived Intangible Income (FDII), and the Base Erosion and Anti-Abuse Tax (BEAT) aim to address the incentives for U.S. firms to shift profits abroad. Profit shifting has become increasingly costly in recent decades, with the estimated revenue loss increasing 2.5 times between 2005 and 2015, rising by an estimated $93 to $114 billion, or 27 to 33 percent of the U.S. corporate income tax base (Clausing 2018). A total of 80 percent of the profit shifted abroad by U.S. firms in 2015 was to tax haven countries. The previous worldwide system taxed U.S. firms on their global profits, though most profits earned abroad by U.S.
firms were only taxed once they were repatriated to the United States. Evidence from surveyed U.S. tax executives indicated that U.S. firms exposed themselves to nontax costs to avoid taxes on repatriated income (Graham, Hanlon, and Shevlin 2010). The United States was one of just 6 nations among 35 countries belonging to the Organization for Economic Cooperation and Development with a worldwide tax system before the TCJA’s passage. As a result, U.S. firms were left at a potential competitive disadvantage to other OECD-country firms competing in overseas markets that were generally not subject to home-country taxes on profits earned abroad (Pomerleau 2018). The inclusion of the GILTI, FDII, and BEAT in the TCJA shifted the United States toward a hybrid territorial system, lowering incentives for U.S.-based firms to shift profits out of the country. The GILTI and FDII are complementary provisions that address the tax system’s treatment of intangible income. The GILTI is a tax at a reduced rate on the foreign profits of a U.S. firm earned with respect to activity of its controlled foreign corporations in excess of a 10 percent return, where 10 percent is the rate of return attributable to depreciable tangible assets in a competitive market. A rate of return in excess of 10 percent is attributed to mobile income from intellectual property or other intangible assets. The FDII also addresses profits from intangible assets, including intellectual property, but with respect to U.S. firms’ excess returns related to foreign income earned directly. The FDII provides for a reduced tax rate on foreign-derived U.S. income in excess of the 10 percent rate of return associated with tangible assets (Pomerleau 2018). Together, the GILTI and FDII are intended to neutralize the role that tax considerations play in choosing the location of intangible income attributable to foreign market activity. The BEAT establishes a tax on U.S. 
firms with revenue of $500 million or more and base erosion payments generally in excess of 3 percent of total deductions. Base erosion payments are generally certain deductible payments that a U.S. firm makes to related and controlled foreign corporations. The BEAT discourages firms from profit shifting to lower-tax foreign jurisdictions by applying the 10 percent BEAT tax rate generally to both taxable income and base erosion payments made by the firm (Pomerleau 2018). The 10 percent rate phased in from 5 percent in 2018 and will rise to 12.5 percent in 2025. The BEAT, GILTI, and FDII contribute to reshaping the incentives firms face in determining the location of assets, as well as of new investment, when considering after-tax income. When coupled with the notable reduction in the corporate tax rate, this shift toward a territorial system of taxation may contribute to the TCJA’s supply-side effect on the growth rate of U.S. output. The growth in the intellectual property component of real nonresidential business fixed investment is above the recent trend (see figure 1-6 in the main text). Investment in real intellectual property products grew at its fastest compound annual rate since 1999 in the first three quarters after the TCJA’s passage. Further, by disincentivizing profit shifting, the provisions could have a positive impact on the corporate income tax base. The GILTI, modeled with the reduction of both the corporate income tax rate and the rate for repatriated income, is estimated to increase the corporate tax base by $95 billion, resulting in $19 billion in additional U.S. revenues (Clausing 2018). cash share of total assets less equity of U.S.-majority-owned foreign affiliates, as reported in the BEA’s Activities of U.S. Multinational Enterprises accounts, may substantially underestimate the share of cumulated reinvested earnings liable for the deemed repatriation taxation at the 15.5 percent rate.
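In stylized form, the GILTI and BEAT mechanics described in box 1-3 can be sketched as follows; this is a simplified illustration of the excess-return and minimum-tax logic, not the statutory computation, and the dollar figures are hypothetical:

```python
def gilti_base(cfc_profits, tangible_assets):
    """Stylized GILTI base: a U.S. firm's controlled-foreign-corporation
    profits in excess of a 10 percent routine return on depreciable
    tangible assets."""
    return max(0.0, cfc_profits - 0.10 * tangible_assets)

def beat_liability(taxable_income, base_erosion_payments, regular_tax, rate=0.10):
    """Stylized BEAT: the BEAT rate applied to taxable income plus base
    erosion payments, owed to the extent it exceeds the regular tax
    liability. The 10 percent rate phased in from 5 percent in 2018."""
    return max(0.0, rate * (taxable_income + base_erosion_payments) - regular_tax)

# Hypothetical firm: $500M of CFC profits on $2B of tangible assets
# leaves $300M of excess-return (GILTI) income.
excess_return = gilti_base(500.0, 2000.0)

# Hypothetical firm: $1B of taxable income, $200M of base erosion
# payments, and a $90M regular tax bill owes the $30M excess.
beat = beat_liability(1000.0, 200.0, 90.0)
```

The design choice both provisions share is to leave routine activity untaxed (the 10 percent return, the regular liability) and apply the special rate only to the excess, which is what targets mobile intangible income rather than ordinary operations.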
During the temporary two-year repatriation holiday introduced by the Homeland Investment Act (HIA) of 2004, U.S. multinational firms repatriated $400 billion, of which about $300 billion, or 27 percent of the roughly $1.1 trillion in then-accumulated overseas earnings, is attributed to the HIA (Redmiles 2008; Herrick 2018). However, though many authors have attempted to draw comparisons between the HIA and the TCJA (e.g., Gale et al. 2018; and Herrick 2018), aside from introducing an incentive to repatriate, the two laws are generally incommensurable. Most importantly, the comparison is invalid because the TCJA, in addition to deemed repatriation, also permanently lowered the user cost of capital, whereas the HIA, a temporary tax cut on past earnings, did not. Though the Jobs and Growth Tax Relief Reconciliation Act of 2003 had expanded first-year depreciation allowances for certain properties, increased Section 179 expensing, and cut the dividend tax rate for individual shareholders, these provisions were all temporary, expiring, respectively, in December 2004, December 2005, and December 2008. Thus, the bonus depreciation introduced in 2003 expired before the HIA came into effect, while Section 179 expensing applied for only half the duration of the repatriation holiday, and the dividend tax cut applied to no more than three or four years of the lives of assets newly installed during the HIA repatriation holiday. In addition, under the “new view” of dividend taxation, the tax advantage of financing marginal investment out of retained earnings or low-risk debt exactly offsets the double taxation of subsequent dividends.
As a result, among firms financing marginal investment out of retentions and paying dividends out of residual cash flows, taxes on dividends have no impact on investment incentives (King 1977; Auerbach 1979; Bradford 1981; Auerbach and Hassett 2002; Desai and Goolsbee 2004; Chetty and Saez 2005; Yagan 2015). This contrasts with the “traditional view,” in which marginal investment is financed through variations in the level of new shares. Under the “new view” of dividend taxation, we would therefore expect the impact of the HIA on U.S. domestic investment to have been limited to cash-constrained firms. Consistent with the “new view,” Dharmapala, Foley, and Forbes (2011) find that the HIA had no significant effect on domestic investment, employment, or research and development, in part because most U.S. multinationals were not financially constrained at the time, and because repatriated earnings were generally distributed to shareholders through share repurchases, particularly among firms with stronger corporate governance. Among firms with low investment opportunities and high residual cash flows, stronger corporate governance would indeed predict higher shareholder distributions, given that weakly governed managers may face incentives to raise executive compensation or embark on risky or otherwise low-return acquisitions. Blouin and Krull (2009) also find that, on average, firms that repatriated in response to the HIA had lower investment opportunities and higher free cash flows than nonrepatriating firms, and increased share repurchases by about $60 billion relative to nonrepatriating firms, though this had no significant effect on dividend payments. In contrast to Dharmapala, Foley, and Forbes (2011), but consistent with the “new view,” Faulkender and Petersen (2012) find that the HIA had a large, positive effect on domestic investment by previously capital-constrained firms, though unconstrained firms accounted for the majority of repatriations.
Faulkender and Petersen's findings suggest that domestic and foreign internal funds are not perfectly fungible, and that lowering the cost of repatriating foreign income reduces the cost of financing marginal investment with internal foreign funds. Consistent with the imperfect fungibility of domestic and foreign internal funds, Desai, Foley, and Hines (2016) find that high corporate tax rates encourage borrowing through trade accounts, with U.S. multinational firms employing trade credit to reallocate capital between locations with differing tax rates. These researchers conclude that the additional corporate borrowing through trade accounts is comparable in magnitude to the additional borrowing through bank loans and debt issuance associated with higher corporate tax rates. Reinforcing Faulkender and Petersen's results, and in contrast to Dharmapala, Foley, and Forbes (2011), Dyreng and Hills (2018) find that employment increased in the geographic region surrounding the headquarters of repatriating multinational enterprises in the three years immediately after the HIA's inception, and that the effect of repatriation on employment was increasing in the amount repatriated. Dyreng and Hills observe that the positive employment effect was strongest when the geographic region is defined as a 20-mile radius around the headquarters of repatriating firms, with estimates indicating that employment rose by more than three employees for every $1 million repatriated in response to the HIA.

Share Repurchases and Capital Distributions

Research conducted by the Federal Reserve shows that, coinciding with repatriated earnings in the first quarter of 2018, there was a substantial increase in share repurchases by U.S. multinational firms (Smolyansky, Suarez, and Tabova 2018). This analysis further shows that the increase in share repurchases was concentrated in the top 15 firms in terms of total cash held abroad.
Figure 1-14 shows the elevated level of real repatriated earnings by U.S. firms coinciding with an increase in real share repurchases relative to total assets. The large positive shock to share repurchases after the TCJA's enactment, concentrated among the U.S. firms with the most cash held abroad, has generated extensive discussion of the impact of share repurchases. As noted in recent research, "a common critique is that each dollar used to buy back a share is a dollar that is not spent on business activities that would otherwise stimulate economic growth," though "people seem to forget some of the very basic lessons of financial economics when it comes to share repurchases" (Asness, Hazelkorn, and Richardson 2018, 2). Jensen's (1986, 323) free cash flow hypothesis outlined the agency conflicts that arise between shareholders and corporate managers when firms have substantial "cash flow in excess of that required to fund all projects that have positive net present values when discounted at the relevant cost of capital." Jensen notes that managers of a firm with large free cash flows may use those excess flows to pursue low-return acquisitions rather than distributing residual cash to shareholders. He further suggests that agency conflicts between managers and shareholders are greater within firms with larger free cash flows, as "the problem is how to motivate managers to disgorge the cash rather than investing it at below the cost of capital or wasting it on organization inefficiencies" (Jensen 1986, 323). Jensen's seminal hypothesis informs the later literature by underscoring how excess or free cash flows, if unable to be invested in projects with a positive net present value, may incur economic costs and lead to agency conflicts. Dittmar and Mahrt-Smith (2007) find evidence in support of Jensen's hypothesis.

[Figure 1-14. Real U.S. Repatriated Earnings and Share Repurchases, 2015–18. Sources: Federal Reserve Board; Bureau of Economic Analysis; CEA calculations.]

Consistent with Dharmapala, Foley, and Forbes's (2011) observation that share repurchases in response to the HIA were particularly pronounced among repatriating firms with stronger corporate governance, Dittmar and Mahrt-Smith estimate that investors value $1.00 in cash in a poorly governed firm at only $0.42 to $0.88. Contrary to popular myth, this is the primary mechanism whereby share repurchases may raise share prices; repurchases otherwise have no mechanical effect on share price. For example, following Cochrane (2018), suppose a company with $100 in cash and a factory worth $100, and with two outstanding shares, each valued at $100, uses that $100 in cash to repurchase one of the two outstanding shares. The company now has one asset (a factory worth $100) and one outstanding share, worth $100. There has been no change in share price or shareholder wealth. However, if investors had previously worried that there was a 40 percent chance that corporate management would squander the $100 in cash on excessive executive compensation or on loss-making investment projects or acquisitions, then the two shares would have been valued at $80 each. If the company then repurchased one of the two outstanding shares at that price, it would have $20 in cash, a factory worth $100, and one outstanding share valued at $112, assuming that investors still attach a 40 percent probability to mismanagement.
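Cochrane's arithmetic can be reproduced in a few lines (a hedged sketch; the `share_price` helper and the assumption that investors value corporate cash at a fraction 1 − p of face value are ours, added for illustration):

```python
# Stylized buyback example following Cochrane (2018), as described in the text.
# Assumption: investors discount corporate cash by the probability p_waste
# that management squanders it; the factory is fully valued.

def share_price(cash: float, factory: float, shares: int, p_waste: float) -> float:
    """Market value per share when each dollar of cash is valued at (1 - p_waste)."""
    return ((1.0 - p_waste) * cash + factory) / shares

# Case 1: no agency concerns. Two shares, $100 cash, $100 factory.
before = share_price(cash=100.0, factory=100.0, shares=2, p_waste=0.0)      # $100
# Repurchasing one share at $100 leaves no cash and one share, still $100.
after = share_price(cash=0.0, factory=100.0, shares=1, p_waste=0.0)         # $100

# Case 2: investors see a 40 percent chance the cash is squandered.
before_agency = share_price(cash=100.0, factory=100.0, shares=2, p_waste=0.4)  # $80
# Buying back one share at the $80 market price leaves $20 in cash.
after_agency = share_price(cash=100.0 - before_agency, factory=100.0,
                           shares=1, p_waste=0.4)                           # $112

print(before, after, before_agency, after_agency)
```

With no agency problem, the buyback leaves the share price unchanged; with a 40 percent mismanagement probability, returning cash to shareholders raises the remaining share's value from $80 to $112, exactly as in the text.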
Grullon and Michaely (2004) also provide empirical evidence that supports Jensen's free cash flow hypothesis, finding, among other results, that the market reaction to firms announcing share repurchases is more robust if the firm is more likely to overinvest, and that repurchasing firms experience substantial reductions in systematic risk and the cost of capital relative to nonrepurchasing firms. Their findings support Jensen's hypothesis that share repurchases are a firm's value-maximizing response when it does not have positive net present value investments to make. Grullon and Michaely (2004, 652) further note that "repurchases may be associated with a firm's transition from a higher growth phase to a lower growth phase. As firms become more mature, their investment opportunity set becomes smaller. These firms have fewer options to grow, and their assets in place play a bigger role in determining their value, which leads to a decline in systematic risk." Though share repurchases and dividend payments constitute alternative mechanisms for distributing earnings, they are imperfect substitutes. First, dividends are subject to personal income tax when received, but capital gains are not taxed until realized; many investors therefore prefer share repurchases over dividends because repurchases allow the shareholder to determine when he or she incurs the tax liability. Second, in open market repurchases, firms do not have to commit to repurchase. Third, there is no expectation that distributions through share repurchases will recur on a regular basis, in contrast to dividends (Dittmar 2000). In practice, market participants view changes in the amount of dividends paid as a signal of management's view of the firm's prospects. Because dividend decreases are viewed negatively, firms tend not to raise dividend payments unless management believes they can be maintained.
Dividends thus tend to exhibit "stickiness," increasing when management believes the firm's prospects are sustainably good and decreasing only when absolutely necessary (Brav et al. 2005). Brennan and Thakor (1990), Guay and Harford (2000), and Jagannathan, Stephens, and Weisbach (2000) accordingly find that since the Securities and Exchange Commission legalized share repurchases in 1982, they have become firms' preferred method for distributing "transient," nonoperating residual cash flows, whereas dividend payments are the preferred method for distributing "permanent," operating residual cash flows. Thus, theory and empirical evidence suggest that among cash-unconstrained firms, a large, positive shock to cash flow, such as from a lowered cost of accessing the accumulated stock of past residual cash flows held abroad, is likely to be distributed via share repurchases. Among previously cash-constrained firms, any profit windfall in excess of positive expected-return investment opportunities is also likely to be distributed via share repurchases. Figure 1-15 reports a pronounced increase in corporate share repurchases after the TCJA's passage, with repurchases rising above the recent trend by $200 billion as of 2018:Q3. In contrast, figure 1-16 reports that though corporate net dividend payments rose after the TCJA's passage, the increase was modest, with net dividends only $15 billion above the recent trend.

[Figure 1-15. Real Nonfinancial Corporate Share Repurchases, 2010–18. Sources: Federal Reserve Board; Bureau of Economic Analysis; CEA calculations. Note: NSA = not seasonally adjusted; TCJA = Tax Cuts and Jobs Act.]

[Figure 1-16. Real Corporate Net Dividends, 2010–18. Sources: Federal Reserve Board; Bureau of Economic Analysis; CEA calculations. Note: SAAR = seasonally adjusted annual rate; TCJA = Tax Cuts and Jobs Act.]

Observed share repurchases may be substantially smaller in volume relative to repatriations because, under the "new view" of dividend taxation, a simultaneous positive shock to cash flow and investment generates an ambiguous effect on shareholder distributions, depending on the relative magnitudes of the coincident shocks. Though the TCJA has created a positive financial windfall, both for past residual earnings and for future cash flow, it has also substantially and permanently lowered the break-even rate of return on marginal investment. Auerbach and Hassett (2002) find that though the probability of share repurchases is higher among firms with greater cash flow, the probability of repurchase activity is lower among firms with more investment, and the estimated coefficients on cash flow and investment are of the same absolute magnitude. Indeed, a Wald test that the sum of the estimated coefficients on two lags of investment equals (in absolute value) the sum of the estimated coefficients on two lags of cash flow is accepted at all standard levels of significance and for every specification estimated, and the simple correlation is very close to -1.0. Auerbach and Hassett (2002) further observe that the probability of repurchase activity is highest among large firms with strong capital market access, as indicated by high bond ratings and coverage by multiple analysts. Consistent with these results, Hanlon, Hoopes, and Slemrod (2018), analyzing corporate actions in response to the TCJA, find that observed increases in share repurchases after the TCJA's passage were extremely concentrated among a very small subset of cash-abundant firms, particularly Apple, Amgen, Bank of America, Pfizer, and JPMorgan Chase. Excluding the already-cash-unconstrained Apple alone from the sample, these researchers find that the value of shares repurchased in 2018:Q1 was no higher than the value of shares repurchased in 2016:Q1. The concentration of the increase in repurchase activity among such a small subset of firms suggests that though these firms may have been cash-unconstrained, many other firms faced binding financing constraints.

The corporate finance literature therefore strongly suggests that repurchase activity is an integral margin of adjustment to a positive cash flow-cum-investment shock, constituting the primary mechanism whereby efficient capital markets reallocate capital from mature, cash-abundant firms without profitable investment opportunities to emerging, cash-constrained firms with profitable investment opportunities. For example, Alstadsaeter, Jacob, and Michaely (2017) find that a 10-percentage-point cut in Sweden's dividend tax rate in 2006 improved efficiency by inducing capital reallocation from established, cash-rich firms to cash-constrained firms. Similarly, Fried and Wang (2018) find that non-S&P 500 public firms, which are generally younger and faster growing than S&P 500 firms, were net importers of equity capital in every year between 2007 and 2016, with net shareholder inflows into these firms equal to 11 percent of net shareholder distributions by S&P 500 firms. These researchers further observe that a substantial fraction of net shareholder distributions by all public companies is reinvested in initial public offerings by newly listing companies, as well as in nonpublic firms through venture capital and private equity vehicles. They additionally note that these firms account for more than 50 percent of private nonresidential fixed investment, employ nearly 70 percent of U.S. workers, and generate almost half of corporate profits.

As shown in figures 1-17 and 1-18, real private investment by noncorporate businesses and private equity firms rose sharply in 2018. Among noncorporate firms, in the first three quarters of 2018, real nonresidential fixed investment rose 16.0 percent at a compound annual rate, which would constitute the fastest calendar-year growth in noncorporate business investment since 1993 if sustained through the fourth quarter (see box 1-4 for a discussion of the TCJA and family farms).

[Figure 1-17. Real Private Nonresidential Fixed Investment by Noncorporate Businesses, 2012–18. Sources: Federal Reserve Board; Bureau of Economic Analysis; CEA calculations. Note: SAAR = seasonally adjusted annual rate; the trend is calculated for 2009:Q3–2017:Q4; nominal values are deflated using the chain price index for private nonresidential fixed investment.]

[Figure 1-18. Real Venture Capital Investment, 2012–18. Sources: National Venture Capital Association; Bureau of Economic Analysis; CEA calculations. Note: the trend is calculated for 2009:Q3–2017:Q4; the chain price index is assumed to grow at the same compound annual rate in 2018:Q4 as in 2018:Q1–Q3.]

[Figure 1-19. Gross Foreign Sales of U.S. Corporate Stocks, 2013–18. Sources: Federal Reserve Board; CEA calculations. Note: data represent a centered 3-month moving average, truncating in November 2018.]

Asness, Hazelkorn, and Richardson (2018, 4) echo Fried and Wang's (2018) findings. In particular, they address the "myth" that "share repurchases have come at the expense of profitable investment." They note that funds obtained by the shareholder after a repurchase are often invested elsewhere. This "redirection of available capital" ensures that capital flows to new investment opportunities.
They do note that "there is always the possibility for agency issues to create incentives for corporate managers to engage in suboptimal share repurchase decisions," though the literature on agency theory finds as much evidence of positive value in paying out free cash flows as of negative value.

Box 1-4. Estate Taxes and Family Farms

A total of 98 percent of U.S. farms are family businesses. Succession planning, that is, successfully passing the farm to the next generation, is a critically important issue for farm families. The Tax Cuts and Jobs Act reduced the effective tax rate for family farm households by 3.3 percentage points. Williamson and Bawa (2018), researchers at the Department of Agriculture, estimate that if the TCJA's estate tax provisions had been in place in 2016, family farm households would have faced an average effective tax rate of 13.9 percent that year instead of 17.2 percent. The TCJA also doubled the estate value that can be excluded from an individual's estate taxes, to $11.18 million. A large portion of a farm's assets are illiquid, with land most often the largest category, often amounting to millions of dollars. Without a significant estate tax exemption, farms would sometimes need to be liquidated to meet estate tax liabilities. President Trump was clear that he wanted to spare farm families from the punitive effects of estate taxes when passing the farm to the next generation. The TCJA achieves this objective by virtually eliminating the need for farms to pay estate taxes. Williamson and Bawa (2018) estimate that if the TCJA's estate tax provisions had been in place in 2016, only 0.11 percent of all farm estates would have had to pay estate taxes, and only 0.58 percent would have had to file an estate tax return.
Williamson and Bawa also estimate that the aggregate tax liability of all farm estates in 2016 would have been reduced from $496 million under the previous estate tax rules to $104 million under the TCJA (figure 1-v).

[Figure 1-v. The Tax Cuts and Jobs Act: Farm Estates Exempted from Filing and Paying Estate Taxes, 2016. Not required to file: 99.4 percent; required to file but no tax liability: 0.5 percent; required to file and incur a tax liability: 0.1 percent. Source: U.S. Department of Agriculture. Note: data are based on 2016 farm estates.]

By doubling the estate tax threshold, introducing a 20 percent deduction for pass-through income, and extending and expanding bonus depreciation for equipment investment, the TCJA may also positively affect investment by independent farms. Poterba (1997) demonstrates that the estate tax is effectively a tax on capital income and thus lowers after-tax investment returns, particularly among older proprietors, because mortality risk increases with age. Kotlikoff and Summers (1981, 1988) and Gale and Scholz (1994) also highlight the substantial contribution of intergenerational transfers to aggregate capital formation. Especially if the TCJA's provisions that are currently scheduled to expire are made permanent, the TCJA can therefore be expected to incentivize new capital formation among independent farms, thereby raising productivity and steady-state output.

Finally, a second-order effect of increased repurchase activity in response to repatriation is the impact of share repurchases on measured foreign direct investment. The BEA (2018) defines foreign direct investment as the ownership or control, directly or indirectly, by a single foreign individual or entity, of "10 percent or more of the voting securities of an incorporated U.S. business enterprise, or an equivalent interest in an unincorporated U.S. business enterprise." Consequently, given that U.S.
multinational enterprises employ some fraction of repatriated funds to repurchase outstanding shares, some of these shares may have been previously held by foreign entities. Accordingly, figure 1-19 reports the three-month centered moving average of gross foreign sales of U.S. corporate stocks. Consistent with repatriating firms repurchasing shares, including shares previously held by foreign entities, we observe a substantial spike in gross foreign sales immediately after the TCJA's enactment.

Conclusion

In the 2018 Economic Report of the President, the Council of Economic Advisers demonstrated that before the TCJA's enactment, the U.S. economy and labor market were adversely affected by the conjunction of rising international capital mobility and increasingly internationally uncompetitive U.S. business taxation, with adverse consequences for domestic capital formation, capital deepening, and wages. Drawing on an extensive academic literature, the Report concluded that the TCJA's business and international provisions would raise the target U.S. capital stock, reorient U.S. capital away from direct investment abroad in low-tax jurisdictions and toward investment in the United States, and raise household income through both a short-run bargaining channel and a long-run capital deepening channel. The Report also documented that the TCJA's reductions in effective marginal personal income tax rates were expected to induce positive labor supply responses. In this chapter, we have used the available data to examine each of these anticipated effects of the TCJA, with particular attention to the relative velocities of adjustment along each margin. We find that the TCJA had an immediate and large effect on business expectations, with firms responding to the TCJA by upwardly revising planned capital expenditures, employee compensation, and hiring.
We also observe revised capital plans translating into higher private investment in real fixed assets, with nonresidential fixed investment growing at an annual rate of about 8 percent in the period 2017:Q4–2018:Q3, to a level $150 billion above the recent trend. In addition to tallying more than 6 million workers who received bonuses that could be directly attributed to the TCJA, with an average bonus of $1,200, we also estimate that as of September 2018, real disposable personal income per household had risen $640 above trend during calendar year 2018 thus far. As a perpetual annuity, this increase in compensation corresponds to a lifetime pay raise of about $21,000 for the average household, or $2.5 trillion across all households. Finally, we also report evidence of a reorientation of U.S. investment from direct investment abroad, particularly in low-tax jurisdictions, to investment in fixed assets in the United States. Specifically, in the first three quarters after the TCJA's enactment, U.S. direct investment abroad declined by $148 billion, while the U.S. direct investment position in eight identified tax havens declined by $200 billion. Citing a large body of corporate finance literature, we conclude that shareholder distributions through share repurchases are an important margin of adjustment to a simultaneous positive shock to cash flow and investment, constituting the primary mechanism whereby efficient capital markets reallocate capital from mature, cash-abundant firms without profitable investment opportunities to emerging, cash-constrained firms with profitable investment opportunities.

Chapter 2

Deregulation: Reducing the Burden of Regulatory Costs

When appropriate, well-designed regulatory actions promote important social purposes, including the protection of workers, public health, safety, and the environment.
At the same time, complying with regulations increases the cost of doing business and results in opportunity costs: business and consumer activities that are forgone due to regulation. For decades, the regulatory state has expanded and imposed an ever-growing burden of regulatory costs on the U.S. economy. The Trump Administration has taken major steps to reverse this long-standing trend. In 2017 and 2018, Federal agencies issued many times more deregulatory actions than new regulatory actions. From 2000 through 2016, regulatory costs grew by $8.2 billion each year on trend. In contrast, in 2017 and 2018 Federal agencies took deregulatory actions whose cost savings more than offset the costs of new regulatory actions; in fiscal year 2017, deregulatory actions saved $8.1 billion in regulatory costs (in net present value), and in fiscal year 2018, they saved $23 billion. In this chapter, we develop a framework to analyze the cumulative economic impact of regulatory actions on the U.S. economy. Regulation affects productivity, wages, and profits in the regulated industry and in the economy as a whole. Economics tells us that the regulatory whole is greater than the sum of its parts. However, Federal regulations have traditionally been considered on a stand-alone basis. The Trump Administration's reform agenda uses regulatory cost caps to reduce the cumulative burden of Federal regulation. In addition to regulation-specific cost-benefit tests, the cost caps induce agencies to view all their regulations as a portfolio, which is more congruent with the experiences of the households and businesses subject to them. Small business owners, consumers, and workers gain when less regulation means lower business costs, lower consumer prices, more consumer choice, and higher worker productivity and wages.
The chapter discusses a number of notable deregulatory actions taken during the Trump Administration, and gives detailed information about the association health plan rule; the short-term, limited-duration insurance rule; and the joint employer standard.

Government regulation is ubiquitous in modern economies. When appropriate, well-designed regulatory actions promote important social purposes, including the protection of workers, public health, safety, and the environment. As business owners and managers are aware, complying with regulations often increases the cost of doing business. Moreover, regulatory actions also result in opportunity costs: business and consumer activities forgone due to regulation. Ultimately, consumers and workers bear much of the burden, because business-entry barriers, higher costs, and lower productivity are reflected in higher prices, limited consumer choice, and lower real wages. For decades, the regulatory state has expanded and imposed an ever-growing burden of regulatory costs on the U.S. economy. In 2017 and 2018, the Trump Administration took major steps to reverse the long-standing trend of rising regulatory costs. In fiscal year 2017, there were 15 significant deregulatory actions and 3 new significant regulatory actions, saving $8.1 billion in regulatory costs (in net present value), according to official measures (OMB 2017a). In fiscal year 2018, there were 57 significant deregulatory actions and 14 new significant regulatory actions, saving $23 billion (OMB 2018). The Trump Administration's regulatory reform agenda uses regulatory cost caps to reduce the cumulative burden of Federal regulation. Economics tells us that the regulatory whole is different from the sum of its parts. Households and businesses are required to comply with new regulations along with old ones. Nevertheless, Federal regulations have traditionally been considered on a stand-alone basis.
Under the Trump Administration, agencies are now also given regulatory cost caps for the upcoming year. In addition to regulation-specific cost-benefit tests, the cost caps induce agencies to view all their regulations as a portfolio, which is more congruent with the experience of the households and businesses subject to them. As agencies pursue their agency-specific missions (for example, the Environmental Protection Agency's mission to protect human health and the environment), the regulatory cost caps provide the framework for them to evaluate regulatory costs, to consider deregulatory actions, and to set priorities among new regulatory actions. Moreover, when the executive branch sets the regulatory cost caps across all Federal agencies, the caps reflect the priorities and trade-offs imposed by the cumulative regulatory burden on the U.S. economy. The Trump Administration has sought to lift the burden of unnecessary regulatory costs while encouraging Federal agencies to preserve important protections of workers, public health, safety, and the environment. The regulatory reform agenda is guided by cost-benefit analysis, a systematic way to balance the benefits of regulatory actions, including the value of these important protections, against the costs. The regulatory cost caps require prioritization among costly rules. An agency cannot meet its cost cap simply by eliminating costly regulatory actions; it eliminates regulatory actions when the benefits do not justify the costs. Last year, we discussed the impact of deregulation on aggregate economic growth (CEA 2018). Based on the evidence reviewed, we concluded that if the United States adopted product market regulatory reforms, gross domestic product (GDP) could be 1.0 to 2.2 percent higher over the next decade (CEA 2018). In this chapter, we report on progress and dig deeper into the economic effects of regulation and deregulation.
We develop a framework to analyze the cumulative economic impact of regulatory actions on the U.S. economy. Regulation affects the regulated industry and the economy as a whole. Consider the effects of a regulation, such as the expansive joint employer standard featured at the end of this chapter, that discourages specialization and encourages centralized decisionmaking along an industry's supply chain. Productivity and competition are often greater when separate businesses can specialize in the various tasks required to produce the final consumer good (Becker and Murphy 1992). For example, some businesses specialize in handling raw materials, others in branding and intellectual property, others in performing clerical work, and still others in regional retail. But the regulation incentivizes a number of these supply-linked businesses to act as a single large business and, as a result, to forgo many of the productivity gains from specialization and decentralized decisionmaking (see also chapter 8 of this Report). Productivity is further sacrificed as capital moves out of the industry. In certain circumstances (discussed below), one result can be lower pay for workers, even workers outside this sector, because the regulation makes the work done in the sector less productive and because fewer employers are competing for workers in the sector. Consumers also will pay higher prices due to the regulation's effect on costs and diminished competition in the retail market. Although estimating the benefits and costs of Federal regulatory and deregulatory actions might appear to be a technocratic exercise, the principles that underlie the exercise are democratic. Completing an evidence-based cost-benefit analysis requires expertise not only in economic analysis but often also in scientific areas relevant to the regulated industry.
Career public servants in the agencies provide the needed expertise; career public servants in the Office of Information and Regulatory Affairs (OIRA) within the Office of Management and Budget (OMB) review the completed analyses. A previous OIRA Administrator, Cass Sunstein (2018), described the process as the "triumph of the technocrats." However, the goal of economic analysis is to estimate benefits and costs based on the preferences of the people affected by the regulatory actions. Cost-benefit analysis is "an attempt to replicate for the public sector the decisions that would be made if private markets worked satisfactorily" (Haveman and Weisbrod 1975, 171). It uses the information revealed in market transactions to guide public sector decisions. For example, a regulatory cost-benefit analysis places a high value on improving health and safety based on empirical evidence that people are willing to pay a great deal to reduce the risks of injury and death. The empirical evidence captures the public's preferences for health and safety, not the analyst's. The Trump Administration recognizes that workers, consumers, and small business owners are key stakeholders in deregulation, and it actively seeks their feedback on proposed regulatory and deregulatory actions. In the next section, we use our economic framework to discuss different types of regulatory actions and when they are needed to improve the economy. We then survey the current regulatory landscape, providing information on the number and costs of Federal regulatory actions and on how the regulatory cost caps are reducing the regulatory burden on the U.S. economy. Following that, we use our framework to analyze the cumulative economic impact of regulatory actions. We then discuss lessons from our framework. The chapter concludes with a set of three case studies that illustrate the value of meaningful regulatory reform.
The case studies explore different aspects of how Federal deregulatory actions improve productivity and reduce costs for small businesses and their workers. The first case study is about a rule that allows more small businesses to form association health plans to provide lower-cost group health coverage to their workers. The second case study is about a rule that expands consumer options to purchase short-term health coverage. And the third case study is about the reform of the joint employer standard. Regulatory costs, and therefore the regulatory cost savings of the Trump Administration’s regulatory reforms, are understated by the official measures in all three cases because the official measures did not include all the relevant opportunity costs, especially those accruing outside the regulated industry. The case studies provide guidance on how to strengthen the regulatory analysis of deregulatory actions.1

Principles of Regulation and Regulatory Impact Analysis

Although there are tens of thousands of regulatory actions, a fairly simple economic framework helps organize their effects. Regulation affects productivity, wages, and profits in the regulated industry. Then, as capital and labor move in response to the compliance costs and incentive effects of the regulation, regulation affects productivity, wages, and profits in the economy as a whole. The effects of regulatory actions, taxes, and other market distortions accumulate multiplicatively within the industry and along that industry’s supply chain, through what economists call “convex deadweight costs.” The concept of convex deadweight costs is a well-established result in the economic analysis of taxation (Auerbach and Hines 2002). Taxes impose a burden on the economy in excess of the tax revenues collected; the excess burden is also known as the deadweight cost, the deadweight loss, or the welfare loss due to taxation.
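The convexity of deadweight costs can be illustrated with the standard Harberger approximation from public finance (a textbook sketch, not a formula taken from this Report):

```latex
% Approximate deadweight loss of a small tax at rate t in a market of
% size pq with (compensated) elasticity \varepsilon:
\[
  \mathrm{DWL}(t) \;\approx\; \tfrac{1}{2}\,\varepsilon\, t^{2}\, p q,
  \qquad
  \frac{\mathrm{DWL}(1.1\,t)}{\mathrm{DWL}(t)} \;=\; (1.1)^{2} \;=\; 1.21 .
\]
```

Because the loss grows with the square of the rate, a 10 percent increase in the tax rate raises the deadweight cost by roughly 21 percent, which is the sense in which the cost function is convex.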
The deadweight cost function is convex; if the tax is increased by 10 percent, the deadweight costs of the tax increase by more than 10 percent. As we discuss in detail below, the regulatory deadweight cost function is also convex. A new regulatory action that increases regulatory costs by 10 percent increases the cumulative regulatory cost burden by more than 10 percent. As we discuss below, even though in many cases most of the burden of a regulatory action is outside the regulated industry, the burden can be quantified, primarily with information about the regulated industry alone.

Public Goods and Private Markets

The economic framework distinguishes public goods (and services), such as clean air, from private goods, such as automobiles and health insurance. The economic term “public good” refers to a good of which one person’s consumption does not reduce the availability of the good for other consumers and of which it is difficult or impossible to exclude those consumers who do not pay for the good from using it. Due to these properties, households and businesses have insufficient incentives to purchase and produce public goods in private markets. For example, consumers tend to free-ride on other people’s purchases rather than purchase the good for themselves. Although private goods are not necessarily free from market failures, individual households and businesses have significant incentives to engage in these activities, and they are situated in a chain of economic activity that is critical for understanding the cumulative effect of regulatory actions.

1. The CEA previously released research on topics covered in this chapter. The text that follows builds on the following research paper produced by the CEA: “Deregulating Health Insurance Markets: Value to Market Participants” (CEA 2019).
A number of regulatory actions are designed to enhance public goods, even while the opportunity cost of such actions includes the loss of the output from private goods. Other regulatory actions are designed to increase the total value of private good production by correcting failures in the markets for private goods. Regulatory actions sometimes combine both these elements; but even in these cases, it helps to examine the economic functions separately. Environmental regulatory actions are an important type of regulatory action that trades private goods for public goods, where environmental quality is the public good. A number of employment regulatory actions are also examples, when they restrict employers’ practices in order to promote, say, fairness. Regulations of public goods typically, although not always, involve a loss of private good output, usually when these regulations reduce productivity in the process of producing these goods.2 The productivity loss is not by itself an argument against regulations of public goods, because the value of the public goods needs to be part of the cost-benefit analysis; but of course the amounts of losses and gains need to be accurately assessed. Regulations to enhance productivity are assessed on the basis of their productivity effects; they may reduce productivity in some activities so as to increase it overall. For example, regulations designed to prevent a financial crisis enhance productivity. Chapter 6 discusses the Dodd-Frank Act, which established a wide range of regulatory mandates to reduce the likelihood and severity of future systemic financial crises. The Trump Administration’s financial regulatory approach balances the benefits of preventing financial crises and the regulatory costs that Dodd-Frank imposed on the banking industry, on other financial providers, and on the public.

The Process of Doing Regulatory Impact Analysis

Regulatory actions promote important societal goals, but not without opportunity costs.
Since President Reagan’s Executive Order 12291 was issued in 1981, most Federal agencies have been required to use cost-benefit analysis to strike an appropriate balance in rulemaking (White House 1981). Early in their first terms, Presidents Clinton, Obama, and Trump each signed Executive Orders that continued to require most Federal agencies to conduct Regulatory Impact Analyses (RIAs) of new and existing rules. Each RIA includes a cost-benefit analysis. Federal independent regulatory agencies—such as the Consumer Financial Protection Bureau, the Securities and Exchange Commission, and the Federal Reserve—are not required to conduct RIAs (OMB 2017b). Federal regulatory cost-benefit analyses are grounded in welfare economics, the branch of economics that studies questions about the well-being of a society’s members. In principle, regulatory cost-benefit analyses should help guide Federal agencies to adopt the set of regulatory actions that net the largest societal benefits over regulatory costs. Key concepts in estimating benefits and costs are willingness to pay and opportunity costs. Federal agencies draw on extensive bodies of economic research that provide estimates of societal willingness to pay for beneficial regulatory outcomes, including improvements in health, safety, and the environment. The agencies also develop estimates of the opportunity costs of regulatory actions. Cost-benefit analyses of deregulatory actions are guided by the same principles of applied welfare economics that guide cost-benefit analyses of regulatory actions.

2. Sometimes a regulation prohibits certain types of labor from engaging in private-good production, in which case the output effect would come from fewer production inputs rather than less productivity. Some measures of productivity could even be enhanced if the prohibitions apply to the less productive inputs, but in this chapter we refer to productivity in the more specific multifactor sense (BLS 2018b).
In particular, opportunity costs and willingness to pay continue to be the key concepts. The economic concept of sunk costs can also play an important role in analyzing a deregulatory action. Some firms’ and consumers’ responses to regulatory actions involve sunk costs that cannot be recovered, even if the action is subsequently modified or eliminated. As a result, the cost savings from a deregulatory action might be less than the costs of the original regulatory action. However, the existence of large sunk costs might point to an important source of opportunity cost savings from deregulatory actions. Sunk costs to comply with a regulatory action can serve as a barrier to entry that gives market power to established firms (Aldy 2014). Although these firms cannot recover their sunk costs, a deregulatory action that removes costly requirements can promote the entry of new firms, increase competition, and decrease prices.

The Current Regulatory Landscape

This section examines the current regulatory landscape. First, it explores current Federal regulatory and deregulatory actions. Then it explains how the Trump Administration’s regulatory cost caps are reducing costs.

Federal Regulatory and Deregulatory Actions

Last year, we discussed the various approaches that researchers have taken to the difficult task of quantifying the extent of Federal regulation (CEA 2018). One approach is to count the number of pages in the Federal Register or the Code of Federal Regulations. Another approach is to use an index based on textual analysis of keywords in these publications, like “shall” or “must,” that indicate restrictions on the economy. In this subsection, we review evidence on the number of rules and estimates of the regulatory costs. From 2000 through 2018, Federal agencies published over 70,000 final rules in the Federal Register—an average of more than 10 a day. OMB reviews those rules that are considered significant.
Figure 2-1 shows the number of economically significant rules—including both regulatory and deregulatory actions—that OMB reviewed in each presidential year (February of the given year through January of the next year).

[Figure 2-1. Economically Significant Final Rules, Presidential Year 1990–2018. Sources: Office of Information and Regulatory Affairs; George Washington University Regulatory Studies Center. Note: A presidential year begins in February and ends in January of the subsequent year. The final rule count includes all interim final rules and final rules.]

Throughout this chapter, we use “regulatory and deregulatory actions” as umbrella terms, but we use more precise terms when needed (see box 2-1). Federal regulatory and deregulatory actions cover a wide range of economic activity. Above, we make the distinction between regulations to enhance productivity and regulations of public goods. Earlier discussions made a similar distinction between economic and social regulations (Joskow and Rose 1989). With the deregulation movement of the 1970s, Federal efforts shifted away from economic regulatory actions that restricted entry and regulated prices (see box 2-2). State and local economic regulation of sectors such as electricity remains common. Currently, many Federal agencies issue regulatory actions designed to promote social purposes, including the protection of workers, public health, safety, and the environment. Other Federal regulatory actions are designed to improve the functioning of specific sectors of the economy. This Report discusses the economics of sector-specific developments and policies, including regulatory and deregulatory actions, in its other chapters; chapter 1 discusses taxes, chapter 3 discusses the labor market, chapter 4 discusses healthcare, chapter 5 discusses energy, and chapter 6 discusses banking.
In this chapter, we focus on crosscutting issues in regulatory and deregulatory actions that are independent of the specific industry being regulated.

Box 2-1. The Terminology of Federal Regulatory Actions

Agencies in the executive branch issue regulatory actions, also called rules, to implement Federal legislation passed by Congress. Executive Order 12866 established the process for the Office of Information and Regulatory Affairs (OIRA) within the Office of Management and Budget (OMB) to review proposed and final rules. Under this Executive Order, rules may be categorized as “significant” or “economically significant.” OIRA coordinates the reviews of all the rules that it deems significant, which are specifically defined as rules that are anticipated to

1. “Have an annual effect on the economy of $100 million or more or adversely affect in a material way the economy, a sector of the economy, productivity, competition, jobs, the environment, public health or safety, or State, local, or tribal governments or communities;
2. Create a serious inconsistency or otherwise interfere with an action taken or planned by another agency;
3. Materially alter the budgetary impact of entitlements, grants, user fees, or loan programs or the rights and obligations of recipients thereof; or
4. Raise novel legal or policy issues arising out of legal mandates, the President’s priorities, or the principles set forth in this Executive Order.”

Economically significant rules are a subcategory of significant rules that meet requirement 1 above of having an annual effect on the economy of $100 million or more or having other adverse effects. If a rule is deemed economically significant, an assessment of its economic benefits and costs is typically required before it is finalized. The Congressional Review Act (1996) introduced the term “major rule” to the U.S. Code to categorize certain rules regulated by congressional action.
A major rule is essentially an economically significant rule—one that is determined by OIRA to likely result in significant adverse economic effects or an annual effect on the economy of $100 million or more (U.S.C. Section 804[2]). However, not all economically significant rules are deemed to be major. OIRA formally defined the terms “regulatory action” and “deregulatory action” when describing rules to better implement and track the Trump Administration’s regulatory reform agenda under Executive Order 13771, which requires Federal agencies to issue two deregulatory actions for each new regulatory action. Under this Executive Order, a “regulatory action” is a finalized significant rule or guidance document that imposes total costs greater than zero. A “deregulatory action” can include any agency action that has been finalized and has total costs less than zero (including significant and nonsignificant rulemaking; guidance documents; some actions related to international regulatory cooperation; and information collection requests that repeal or streamline recordkeeping, reporting, or disclosure requirements).

Box 2-2. Economic Regulation and Deregulation

Economic regulation refers to the regulation of prices and entry into specific industries. Economic regulation has been used in industries with economies of scale, including electricity, telephone service, and cable television (Joskow and Rose 1989). In industries such as these, in theory it can make sense to restrict entry to a single firm to take advantage of economies of scale and lower production costs. To prevent the single firm from exploiting its market power and charging higher prices, prices are regulated so the firm earns a normal return. Economic regulation has also been used in multifirm industries, including airlines, banking, and trucking.
Depending on the industry, economic regulations are implemented at the local, State, and national levels. Although the principles of economic regulation are grounded in economic theory, in practice economic regulation has not always led to good economic results. In 1970, the Council of Economic Advisers described the “disappointing” performance of economic regulation: “Entry is often blocked, prices are kept from falling, and the industry becomes inflexible and insensitive to new techniques and opportunities for progress” (CEA 1970, 107). Amid other economic and political developments in the 1970s, the failures of economic regulation helped lead to the deregulation movement. Perhaps the most dramatic success story is the deregulation of the airline industry. Rose (2012, 376) refers to it as “one of the greatest microeconomic policy accomplishments of the past fifty years” and credits deregulation as generating “lower average fares; greater numbers of flights, non-stop destinations, and passengers; dramatically different network structures; and increased productivity.” Borenstein and Rose (2014) provide a brief history. In 1925, the U.S. government began regulating the airline industry with the Air Mail Act (43 Stat. 805). This legislation (and its amendments) allowed the Post Office to award contracts and created subsidies for mail delivery by private airlines. After mismanagement by the Postmaster General, and in response to a desire to bring order to a chaotic marketplace, Congress passed legislation, including the Civil Aeronautics Act of 1938, that established the precursor to the Civil Aeronautics Board (hereafter the “Board”) to oversee economic regulation of the nascent industry (52 Stat. 977). With the Board setting airfare and routes, airlines competed on in-flight quality, schedule convenience, and seat availability. The lack of price competition encouraged airlines to offer more frequent flights with fewer passengers and more amenities.
Regulation also encouraged airlines to purchase new aircraft regularly to offer the latest technology, rather than allow assets to depreciate, because the Board did not allow airlines to charge lower prices for flights on older aircraft (Borenstein and Rose 2014). The ratio of passengers to seats available declined with the number of route competitors and route distance (Douglas and Miller 1974). The Board tried to maintain the industry’s profitability by raising airfares, but the airlines responded by increasing flight frequency, which further decreased passengers per available seat and raised costs closer to the price set by the Board. President Carter appointed the economist Alfred Kahn as chair of the Board in 1977, with a mandate to deregulate the airline industry. With rising airfares in regulated markets, the Airline Deregulation Act of 1978 dismantled the Board and eliminated price controls, entry restrictions, and regulated networks. After 1978, load factors soared and profit yields fell as the airlines began to compete on price. Instead of comparing prederegulation and postderegulation loads, profits, and prices, ideally researchers would compare the outcomes under deregulation to outcomes in a hypothetical counterfactual world where airline deregulation never occurred. Borenstein and Rose (2014) suggest that the Standard Industry Fare Level (SIFL)—created by the Board to determine airfares prior to deregulation and updated based on input cost and productivity changes—provides a useful counterfactual. Compared with the SIFL, in 2011 actual airfares were 26 percent lower. Using the SIFL counterfactual, in 2011 airline deregulation created $31 billion (in 2011 dollars) in benefits for consumers (Borenstein and Rose 2014).
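The SIFL comparison reduces to simple per-ticket arithmetic. A minimal sketch, using an invented $400 formula fare; only the 26 percent gap comes from the Borenstein and Rose (2014) figures cited above:

```python
# Illustrative SIFL counterfactual comparison. The $400 formula fare is
# hypothetical; the 26 percent gap is the aggregate 2011 figure reported
# by Borenstein and Rose (2014).

sifl_fare = 400.00                      # fare implied by the regulatory formula
actual_fare = sifl_fare * (1 - 0.26)    # actual 2011 fares averaged 26% lower

saving_per_ticket = sifl_fare - actual_fare
print(f"actual fare ${actual_fare:.2f}, saving ${saving_per_ticket:.2f} per ticket")
# prints: actual fare $296.00, saving $104.00 per ticket
```

Summed over all tickets sold in 2011, per-ticket differences of this kind are what aggregate to the $31 billion consumer benefit cited above.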
In addition to the Airline Deregulation Act of 1978, the deregulation movement under President Ford and President Carter included the Railroad Revitalization and Regulatory Reform Act of 1976, the Motor Carrier Act of 1980, and the Depository Institutions Deregulation and Monetary Control Act of 1980. Alfred Kahn (1988) argued that airline deregulation helped make possible the deregulation of these other major industries. Most of the Federal regulatory actions tabulated in the figures in this chapter are not economic regulations but instead are social regulatory actions designed to protect workers, public health, safety, and/or the environment, or to promote other social goals. OMB (2003) advises the Federal agencies issuing these regulatory actions that, in competitive markets, there should be a presumption against price controls, production or sales quotas, mandatory uniform quality standards, or controls on entry into employment or production. In this way, the lessons learned in the deregulation movement of the 1970s continue to shape current Federal regulatory practices. Federal regulatory actions range from simple housekeeping to actions that change manufacturing processes, business practices, and ultimately the prices and availability of consumer goods and services. Between January 2000 and November 2018, OMB reviewed over 4,000 significant final rules. The Department of Health and Human Services accounted for 16 percent of the final rules reviewed by OMB, followed by the Environmental Protection Agency, with 11 percent, and the Department of Agriculture, with 8 percent. The Department of Transportation and Department of Commerce round out the top five agencies with the most final rules since 2000. Together, these top five rulemaking agencies accounted for almost half the significant rules reviewed this century, while 44 other Federal agencies issued the remainder of the final rules (figure 2-2).
Note that an OMB review is not currently required for actions issued by Federal independent regulatory agencies. Until 2018, an OMB review was also not generally required for tax regulatory actions taken by the Department of the Treasury. In its annual Reports to Congress, OMB provides an accounting of the benefits and costs of selected major rules published in the preceding fiscal year. Figure 2-3 shows the regulatory costs created by the new rules included in OMB’s Reports each year from 2000 through 2018, and the planned costs from the OMB Regulatory Budget for 2019. Figure 2-4 shows the simple accumulation—ignoring interactions—of the costs of Federal regulatory actions. Regulatory costs are measured in constant, inflation-adjusted 2017 dollars and are on an annualized basis to show the ongoing costs that the rules will continue to impose on the economy. We report the midpoints of OMB’s ranges of estimated costs. From 2000 through 2016, the annual trend was for regulatory costs to grow by $8.2 billion each year. If regulatory costs continued to grow at that rate, cumulative costs would reach over $163 billion by 2019 (figure 2-4). However, the regulatory landscape changed in 2017 and 2018. From 2000 to 2018, the simple accumulation of regulatory costs totaled $138 billion, which is just over 11 percent lower than what would have been predicted based on the trend from 2000 to 2016 (figure 2-4; also see box 2-3 on small businesses’ perspectives on regulatory costs). The growth in regulatory costs did not just slow down; it reversed. In fiscal years 2017 and 2018, deregulatory actions resulted in regulatory cost savings that more than offset the costs of new regulatory actions. Since 1981, Federal agencies have used a systematic general framework to estimate the costs of new regulatory actions, but over time there have been differences in methodologies and assumptions (OMB 2006).
With this caveat in mind, from 1981 through 2016, cost savings from deregulatory actions more than offset new regulatory costs in only three years—in 1981 and 1982, which were the first two years of the Reagan Administration; and in 2001, when the Congressional Review Act was used to repeal a costly rule about workplace repetitive-motion injuries (OMB 2006).3 In this chapter, we define “deregulation” as any action by the government that reduces its control over business and consumer decisions. There are several ways to deregulate. Federal agencies’ deregulatory actions account for most of the cost savings shown in figure 2-3. Deregulatory actions include revising regulatory processes, modifying existing rules, and eliminating existing rules. Deregulatory actions also include periodic updates of rules, such as fishing quotas or medical reimbursement rates, that save regulatory costs. For example, the National Oceanic and Atmospheric Administration is required by law to periodically review designations and protections of essential fish habitats. The 2018 revision of essential fish habitat designations opened large areas off the coast of New England to commercial sea scallop harvesting, resulting in a net economic benefit of $654 million. Congress can deregulate by passing legislation that alters the statutory regulatory requirements. The economic deregulation movement of the 1970s involved major legislative actions to deregulate the trucking and airline industries (see box 2-2). More recently, the Tax Cuts and Jobs Act of 2017 included a provision that removed the tax penalty that enforced the Affordable Care Act’s (ACA) mandate that individuals had to purchase health insurance (see chapter 4). The 2018 Economic Growth, Regulatory Relief, and Consumer Protection Act modified regulation of the banking industry (see chapter 6). Congress can also use its authority under the 1996 Congressional Review Act to eliminate Federal regulatory actions.

3. Because the rule about repetitive-motion injuries was repealed, later OMB reports do not include the rule’s estimated costs in 2000 or the cost savings from its repeal in 2001. OMB also revises its estimates when needed. In figures 2-3 and 2-4, we use the later reports, which do not show a net cost savings in 2001.

[Figure 2-2. OMB-Reviewed Final Rules, by Agency, 2000–2018. HHS, 16%; EPA, 11%; DOT, 8%; USDA, 8%; DOC, 6%; remaining agencies, 51%. Sources: Office of Management and Budget (OMB); CEA calculations. Note: HHS = Department of Health and Human Services; EPA = Environmental Protection Agency; USDA = Department of Agriculture; DOT = Department of Transportation; DOC = Department of Commerce. The percentage calculation includes all the final rules reviewed by OMB per agency from January 1, 2000, to October 31, 2018.]

[Figure 2-3. Real Annual Costs of Major Rules, Fiscal Years 2000–2019. Dollars (billions, 2017). Sources: Office of Information and Regulatory Affairs (OIRA); CEA calculations. Note: The cost estimates for years 2000–2016 are taken from the most recent OIRA Report to Congress with an estimate for that year. The real cost estimate for 2019 is a projected estimate from the OIRA Regulatory Budget for fiscal year 2019. Annual cost estimates include all major rules for which both benefits and costs have been estimated.]

[Figure 2-4. Cumulative Costs of Major Rules, Fiscal Years 2000–2019. Dollars (billions, 2017). Sources: Office of Information and Regulatory Affairs; CEA calculations. Note: Cumulative costs begin in 2000, assuming there are no costs from before fiscal year 2000. Data from figure 2-3 were used to determine the yearly cumulative costs. The trend is calculated for 2002 through 2016.]
From 1996 through 2016, the Congressional Review Act had only been used once, in 2001 (mentioned above). In 2017, Congress used the act to overturn 15 rules, including the Fair Pay and Safe Workplaces rule and the Stream Protection rule. The deregulatory action for those two rules alone resulted in total cost savings of about $500 million. In 2018, Congress used the act to overturn guidance issued in 2013 by the Bureau of Consumer Financial Protection.4 Finally, deregulation can also result from litigation.

Box 2-3. Small Businesses and the Regulatory Burden

Owners of small businesses have their own perspective on regulatory costs. The National Federation of Independent Business (NFIB 2001) regularly conducts monthly surveys of small business owners. One monthly NFIB survey question asks small businesses to identify the “single most important problem facing [their] business.” They are given a list of common small business burdens and allowed to write in responses. Between 2012 and the election of President Trump, the NFIB reported that government regulation was the most frequently cited top concern for small businesses about 45 percent of the time. (The last report before the election was in October 2016. Survey responses do not distinguish between concerns about Federal regulations versus State or local regulations.) Since the election, regulation has never been the most frequently cited top concern of small businesses. NFIB also conducts monthly surveys assessing small business optimism. Figure 2-i shows an upward recent trend in the NFIB index of small business optimism. Small business optimism began to increase sharply after the November 2016 election and has now reached record highs.

[Figure 2-i. Small Business Optimism Index, 2000–2018. Index (1986 = 100). Sources: National Federation of Independent Business; CEA calculations. Note: Data represent a 12-month moving average; shading denotes a recession.]

The Trump Administration’s Regulatory Cost Caps Are Reducing Costs

The turnaround in the growth of regulatory costs is the direct result of the regulatory cost caps that were established early in the Trump Administration. In fiscal year 2017, there were 67 deregulatory actions and 3 new significant regulatory actions (22-for-1), saving $8.1 billion in regulatory costs in net present value. Of the deregulatory actions in fiscal year 2017, 15 were significant (5-for-1; see box 2-1 for a definition of significant actions). In fiscal year 2018, there were 176 deregulatory actions and 14 new significant regulatory actions (12-for-1), saving $23 billion in regulatory costs in net present value. Of the deregulatory actions in fiscal year 2018, 57 were significant (4-for-1). This turnaround reflects President Trump’s January 30, 2017, Executive Order 13771, “Reducing Regulation and Controlling Regulatory Costs.” This Executive Order requires Federal agencies to eliminate, on average, two regulatory actions for each new regulatory action and, for the first time, to meet a regulatory cost cap. In fiscal year 2017, the cost cap was set at zero; the regulatory costs created by any new regulatory actions had to be at least offset by deregulatory actions. In 2018, across all agencies, the cap was set at a $9.8 billion (present value) reduction in regulatory costs. In the first two years under Executive Order 13771, Federal agencies have more than met both the two-for-one requirement and the regulatory cost caps. See box 2-4 for more discussion of notable deregulatory actions. Deregulation has been faster than many experts thought possible.
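The ratios quoted above follow directly from the action counts. A quick back-of-the-envelope check, using the counts reported in the text (our arithmetic, not an official OMB calculation):

```python
# Ratio of deregulatory actions to new significant regulatory actions
# under Executive Order 13771, using the counts reported in the text.

counts = {
    # fiscal year: (all deregulatory actions, significant deregulatory
    #               actions, new significant regulatory actions)
    2017: (67, 15, 3),
    2018: (176, 57, 14),
}

for year, (dereg, dereg_sig, reg_sig) in counts.items():
    overall = dereg // reg_sig          # FY2017: 67 // 3 = 22, i.e., "22-for-1"
    significant = dereg_sig // reg_sig  # FY2017: 15 // 3 = 5, i.e., "5-for-1"
    print(f"FY{year}: {overall}-for-1 overall, "
          f"{significant}-for-1 among significant actions")
```

Both years comfortably exceed the Executive Order’s two-for-one requirement, whether one counts all deregulatory actions or only the significant ones.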
The notice-and-comment requirements build a lot of inertia into the Federal rulemaking process for regulatory and deregulatory actions. Shortly after Executive Order 13771 was issued, Potter (2017) cautioned that to undo existing regulatory actions could take “many, many years.” The record of deregulatory actions in 2017 and 2018 allays this concern. Looking to the future, for 2019 Federal agencies have adopted caps that, when met, will save another $18 billion in projected regulatory costs (net present value). In addition, in 2019 the Department of Transportation and the EPA expect to finalize a proposed rule regarding corporate average fuel economy. The $18 billion in regulatory cost savings in 2019 (net present value) do not include the potential regulatory cost savings from this rule. The Administration notes the impact separately in order to highlight ongoing reform across all agencies; the cost savings from this onetime deregulatory action are expected to be an order of magnitude larger than other deregulatory actions to date.

4. The act states that Congress has 60 days after the rule is submitted to overturn it; because the 2013 policy guidance had not been submitted to Congress in 2013 for review, the 2018 Congress could overturn it.

Box 2-4. Notable Deregulatory Actions

Previous administrations have issued costly regulatory actions affecting markets for labor, energy, insurance, education, and credit—to name a few. These regulatory actions were imposing a large cumulative cost, and they reduced economic growth for the reasons examined in this chapter. Many of these actions have been overturned during the Trump Administration. And some of these overturned regulations were also notable, even when viewed in isolation.
In the labor area, the National Labor Relations Board (NLRB) had expanded the definitions of joint employer and independent contractor in ways that would have reduced competition and productivity in labor markets, as discussed at the end of this chapter. The NLRB had also permitted “microunions,” meaning that subsets of employees could organize even if the majority of employees did not want to be represented by a labor union. Several notable regulations from the previous Administration substantially added to employers’ compliance costs. Its Overtime Rule required employers to track hours worked by a wider range of employees, including a number of white-collar workers, even though the rule would not substantially increase workers’ pay, as shown by basic economics (Trejo 1991) and verified empirically by an economist at the Department of Labor (Barkume 2010). Furchtgott-Roth (2018) details the Trump Administration’s changes to these rules, as well as changes to other notable rules affecting employers, such as the Persuader Rule, the Fiduciary Rule, and the Fair Pay and Safe Workplaces Executive Order. The Federal Communications Commission’s Open Internet Order (commonly called the Net Neutrality rule) restricted pricing practices by Internet service providers. Like price controls more generally, the rule would have resulted in a less productive allocation of resources. The Commission repealed the rule in 2017. Regulations may also be raising entry barriers and reducing competition in higher education, including the Gainful Employment Regulations and the Borrower Defense Rule. The Trump Administration’s Department of Education is currently reviewing these and other notable regulations.
Chapter 5 of this Report discusses how energy productivity has been enhanced by repealing or revising notable prior rules, including the Clean Power Plan, the Waters of the United States rule, the Waste Prevention Rule, the Stream Protection Rule, and the closure of an area on the coastal plain of the Arctic National Wildlife Refuge. The Safer Affordable Fuel Efficient (SAFE) Vehicles rule is also discussed below. Notable health insurance deregulations include setting the ACA’s individual mandate penalty to zero, giving small businesses more flexibility to join association health plans, and eliminating previous restrictions on the sales of short-term, limited-duration insurance (see the end of this chapter and chapter 4 of this Report). Regulations had also hindered productivity and competition in the financial and banking sector. Chapter 6 of this Report discusses the Trump Administration’s actions to reform implementation of the Dodd-Frank Act, nullify the Consumer Financial Protection Bureau’s Arbitration Rule, and revise the National Credit Union Administration’s Corporate Credit Union Rule.

The regulatory cost caps establish an incremental regulatory budget and create new incentives for Federal agencies. Rosen and Callanan (2014) provide a useful history and discussion of the idea of a regulatory budget. In 1980, the CEA described a regulatory budget “as a framework for looking at the total financial burden imposed by regulations, for setting some limits to this burden, and for making trade-offs within those limits” (CEA 1980, 125). Instead of establishing a budget limit on total regulatory costs—which, as the CEA mentioned, are hard to measure—Executive Order 13771 establishes a budget in terms of the incremental costs added or reduced by new actions; this Executive Order builds on earlier efforts to encourage retrospective regulatory review (see box 2-5).
Within each agency, the caps create internal incentives to prioritize costly regulations, to limit the compliance costs of new regulatory actions, and to remove outdated or inefficient existing actions. Breyer (1993, 11) argued that agencies often suffer from tunnel vision and pursue “a single goal too far, to the point where it brings about more harm than good.” The cost caps help expand an agency’s field of vision. An agency continues to pursue its agency-specific mission—for example, the EPA’s mission to protect human health and the environment—but under Executive Order 13771, the EPA now also has internal incentives to pay greater attention to regulatory costs. For example, Rosen (2016, 53) pointed out that given a regulatory budget, “an excessively costly regulation would come at an opportunity cost to the agency, because it would require the agency to forgo other regulatory initiatives.” For the same reason, the regulatory budget gives the agency incentives to consider deregulatory actions, including the removal of outdated or inefficient rules. Although an agency that suffers from tunnel vision might tend to look mainly for opportunities to expand its regulatory portfolio, the cost caps shift the agency’s focus to how it might alter its regulatory portfolio toward more cost-effective actions.

Box 2-5. Retrospective Regulatory Review

In addition to conducting reviews of new regulatory actions, the Executive Orders issued by Presidents Reagan, Clinton, and Obama instructed Federal agencies to conduct retrospective reviews of currently effective regulatory actions (respectively, Executive Orders 12291, 12866, and 13563). The GAO (2007, 2014) and Aldy (2014) discuss the history of these efforts in detail. In his 2012 State of the Union Address, President Obama highlighted the retrospective review of an EPA rule that, since the 1970s, had defined milk as an “oil” and forced some dairy farmers to spend $10,000 a year to prove that they could contain an oil spill. The elimination of this requirement was estimated to yield $146 million (in 2009 dollars) annually in regulatory cost savings. But it is perhaps more notable that the requirement was in place for over three decades.

A report for the Administrative Conference of the United States assessed the broader impact of President Obama’s emphasis on retrospective review (Aldy 2014). The study examined all major rules listed in the 2013 and 2014 OMB Reports to Congress. In 2013 and 2014, the ratio of deregulatory actions to new regulatory actions was 1 to 10, compared with the ratio of 4 to 1 achieved in 2018. (Including nonmajor deregulatory actions, the 2018 ratio was 12 to 1.) Retrospective reviews yielded cost savings from 2012 to 2016. However, as shown above in figures 2-3 and 2-4, the total regulatory costs of major rules grew especially rapidly in 2012 and more slowly in the years 2013–16; by comparison, total regulatory costs fell in 2017 and 2018. Raso (2017) concluded that retrospective reviews were a “credible but small component of the Obama administration’s rulemaking efforts.” DeMenno (2017, 8) studied public participation in agencies’ retrospective review processes initiated in 2011. She found 3,227 comments across the 10 agencies in her sample, a number she described as “significantly lower than agencies often receive for rulemakings.” The EPA received somewhat more than 800 comments and the Department of Education received 30 comments, compared with the 63,000 and 16,300 comments, respectively, that these agencies received about the Trump Administration’s deregulation initiative.

By creating an incremental regulatory budget, the cost caps serve a function similar to private businesses’ accounts and to the Federal government’s fiscal budget. Demski (2008) described the managerial uses of business accounting information as focusing on two questions: What might it cost? And did it cost too much? The private sector business manager uses the information in the accounts to judge how well the management of each company division, and how well each division’s strategy, have performed. In a similar way, the executive branch can use the information in the incremental regulatory budget to judge how well each agency has performed—that is, how well each agency uses regulatory actions to improve societal welfare. A key difference between a private business and a Federal agency is that regulatory actions impose unreimbursed costs on the private parties that must comply with the actions. Because regulatory costs are like a hidden tax, the incremental regulatory budget also plays a role similar to the Federal government’s fiscal budget. Without a regulatory budget, agencies might tend to treat private resources as a “free good” (Rosen 2016). Moreover, like the Federal budget, the regulatory budget strengthens political accountability and transparency (Rosen and Callanan 2014).

OIRA sets the regulatory cost caps that will be allowed for each agency. The cost caps may allow an increase or require a net reduction in regulatory costs. The cost caps impose a discipline on Federal agencies but allow for flexibility when agencies identify important new regulatory opportunities to better protect the public. OMB’s guidance also allows agencies to accumulate cost savings. Otherwise, agencies would have an incentive to enact new regulatory actions at the end of the year so as to use up any regulatory cost savings that exceeded that year’s cap.
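The cap-and-banking mechanics just described can be sketched in a few lines of code. The class below is a toy simplification; its accounting details (and the dollar figures in the usage example) are hypothetical and do not reflect OMB's actual scoring rules.

```python
# Toy sketch of an incremental regulatory budget: new regulatory costs
# must stay within an annual cap net of offsetting deregulatory savings,
# and savings beyond the cap may be "banked" for later years. The
# accounting here is a hypothetical simplification, not OMB's rules.
class RegulatoryBudget:
    def __init__(self, annual_cap):
        self.cap = annual_cap      # required net position ($ billions; 0 = full offset)
        self.net_cost = 0.0        # net present-value cost added this year
        self.banked = 0.0          # savings carried over from prior years

    def add_action(self, cost):
        """Record a regulatory action (cost > 0) or a deregulatory action (cost < 0)."""
        self.net_cost += cost

    def meets_cap(self):
        """True if net new costs, less banked savings, stay within the cap."""
        return self.net_cost - self.banked <= self.cap

    def close_year(self):
        """Bank any savings beyond the cap for use in later years."""
        surplus = self.cap - (self.net_cost - self.banked)
        self.banked = max(surplus, 0.0)
        self.net_cost = 0.0

budget = RegulatoryBudget(annual_cap=0.0)  # a FY2017-style zero cap
budget.add_action(+2.0)    # hypothetical new rule adding $2 billion in costs
budget.add_action(-10.1)   # hypothetical deregulatory actions saving $10.1 billion
print(budget.meets_cap())  # net savings keep the agency within its cap
```

Banking the surplus at year's end removes the incentive, noted above, to spend down excess savings on new rules before the year closes.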
The general public—including workers, consumers, owners of small businesses, and other interested parties—also contributes to the deregulatory reform process. The Administrative Procedure Act sets out the steps that Federal agencies must follow to take new regulatory and deregulatory actions (Garvey 2017). In the first step of the most common notice-and-comment process, the agency proposes a rule and invites public comment through a Notice of Proposed Rulemaking. Sometimes public comment is solicited even earlier, before a rule is proposed, through an Advance Notice of Proposed Rulemaking. These notices are published in the Federal Register. The public can also view and comment on proposed regulatory and deregulatory actions online via the website regulations.gov. The Trump Administration encourages public input on its deregulation initiatives. The Administration’s Executive Order 13777 requires Federal agencies to establish Regulatory Reform Task Forces, and many agencies’ task forces issue specific requests for public comment. For instance, in response to its request, the EPA received more than 460,000 public comments. After accounting for identical or nearly identical form letters, the EPA received 63,000 unique comments. The Department of Education received over 16,300 comments in response to its request. The workers, consumers, and business owners who participate in the regulated markets provided information from their own experiences about the likely effects of deregulation.

Several other countries have used regulatory caps similar to the Trump Administration’s approach to deregulation (Gayer, Litan, and Wallach 2017; Renda 2017). Some countries have placed caps on regulatory requirements or actions, while others have placed caps on regulatory costs. In 2001, the Canadian province of British Columbia required that for every new regulatory requirement, two existing regulatory requirements be eliminated.
After regulatory requirements had been reduced by 40 percent by 2004, the requirement was changed to a cap of no net increase in regulatory requirements. The provincial government reports that since 2001, these steps have reduced regulatory requirements by 49 percent (British Columbia 2017). In 2012, the Government of Canada (2015) required that for every new regulation (regulations are much less numerous than regulatory requirements), one regulation be eliminated. The Netherlands, Denmark, Norway, and the United Kingdom have adopted targets for net reductions in regulatory costs—that is, regulatory cost caps (Renda 2017). Although Executive Order 13771 requires U.S. Federal agencies to estimate reductions in opportunity costs, broadly defined, other countries focus on narrower measures, such as administrative burdens, compliance costs, or direct costs imposed on businesses (Renda 2017). Using narrower measures can have unintended consequences. For example, in the United Kingdom, requiring large retailers to charge for plastic bags was counted as a reduction in the net costs to businesses, even though this cost reduction was exactly offset by the increase in consumer costs (Morse 2016).

The Trump Administration’s deregulatory process, established by Executive Order 13771, is crafted to achieve significant and sustained progress toward reducing the regulatory burden on the U.S. economy. After reviewing the recent history in the United States and other countries, Gayer, Litan, and Wallach (2017) note the potential of the Administration’s deregulation efforts but caution that these efforts might not go far enough, or might go too far. The deregulatory actions in 2017 and 2018, and those planned for 2019, show that these efforts are overcoming the inertia built into the Federal rulemaking notice-and-comment process.
At the same time, the requirement that deregulatory actions be subject to the same rigorous cost-benefit analysis required of new regulatory actions helps ensure that deregulation will not go too far.

Why More Deregulation?

This section seeks to answer the question of why more deregulation is needed. First, it examines estimates of the aggregate cost of regulation. Second, it considers the need to level the playing field for deregulation.

Estimates of the Aggregate Cost of Regulation

Up to this point, we have focused on studies of the burden or costs of Federal regulatory actions. Of course, State and local regulatory actions also impose costs. State and local actions are too diverse to easily summarize, but examples help illustrate their range. Chapter 3 of this Report describes the extent of, and the variation across States in, occupational licensing. In the first half of 2018, just under one-quarter of all workers reported that they had an active professional certification or license, usually because it is required for employment. As another example, State laws regulating the beer industry are so inconsistent that industry leaders describe the domestic market as “like selling in fifty different countries almost” (Morrison 2013). State regulatory actions often prevent brewers from selling directly to customers. Although there is no conclusive evidence that these laws limit craft beer entrepreneurship, statistical associations show that there are more breweries in places that provide easier access to markets for small producers (Malone and Lusk 2016). Local regulatory actions add to the cumulative regulatory burden.
Last year, we discussed the impact of local land use regulations, including an estimate that with decreased zoning restrictions in three cities—New York, San Jose, and San Francisco—the growth rate of aggregate output between 1964 and 2009 could have been high enough to raise GDP in 2009 by 8.9 percent (CEA 2018). Turning to other local regulations, the U.S. Chamber of Commerce Foundation (2014b, 11) ranks 10 U.S. cities on their regulatory environment for small businesses. The study uses the World Bank’s Doing Business framework and compiles publicly available information from official U.S. sources (World Bank 2018). According to this measure, “Dallas and Saint Louis impose the lightest regulatory burdens on small businesses,” whereas New York, San Francisco, and Los Angeles impose heavy burdens. For example, in New York starting a business requires 7 procedures, dealing with construction permits requires 15 procedures, and registering property requires 7 procedures. In another study, the U.S. Chamber of Commerce Foundation (2014a) examined regulations for food trucks. Boston and San Francisco, for example, require 32 procedures to open a new food truck, compared with Denver’s 10 required procedures.

Some efforts have been made to estimate the total costs of regulatory actions in the United States. One approach is to build the total cost estimate from the bottom up, using regulatory action- and industry-specific estimates of regulatory costs. Taking this approach, the costs of Federal social regulation (i.e., actions designed to promote social purposes, including the protection of workers, public health, safety, and the environment) were estimated to be $198 billion in 1997 (in 1996 dollars) (OMB 1997). The 1997 estimate was built up from earlier studies, with OMB estimates of the costs of new regulatory actions from 1987 to 1996 then added.
OMB continued to use this approach through 2000, when it estimated that the total regulatory costs were in the range of $146 billion to $229 billion (in 1996 dollars). We updated the estimated total regulatory costs to 2018 by adding OMB’s estimates of the costs of new regulatory actions after 2000 to the 2000 estimate. This exercise yields a midrange estimate that the total regulatory costs in the U.S. in 2018 were $421 billion (all costs adjusted to 2017 dollars). Taking the same general approach but using additional sources, a study published by the Competitive Enterprise Institute estimated that the total costs of social regulation in the U.S. in 2018 were $1.2 trillion (Crews 2018). OMB and a report by the Congressional Research Service noted important limitations of bottom-up estimates of regulatory costs. First, estimated costs are available for only a small fraction of all regulatory actions. Second, there are difficult questions about the quality of the original underlying data and analyses (OMB 2002; Carey 2016). Moreover, at a conceptual level, the simple sum of action-specific costs does not necessarily provide an accurate measure of total regulatory costs. A major theme of this chapter is that the cumulative burden of multiple regulatory actions exceeds the simple sum of costs when each action is considered one by one. In light of these limitations, OMB (2002) deemphasized estimates of total costs, and subsequent OMB Reports no longer included them. Instead, the current practice is to focus on the last 10 years of major Federal regulatory actions (OMB 2017b).

Cross-country comparisons provide a different perspective on the extent of U.S. regulatory actions and on these actions’ potential to improve economic performance. Cross-country comparisons from a number of different studies suggest that in the recent past, the regulatory burden in the United States was lower than in many, but not all, other countries.
The cross-country rankings are not sufficiently current to reflect the Trump Administration’s deregulatory actions. In the most recent data, the United States was 8th out of the 190 rated economies in the Ease of Doing Business ranking, lagging behind New Zealand, Singapore, Denmark, Hong Kong, South Korea, Georgia, and Norway (World Bank 2018). The United States is 27th out of 35 countries in the product market regulation ranking by the Organization for Economic Cooperation and Development (OECD) (CEA 2018).5 Three of the four top-ranked countries in the OECD index have adopted regulatory caps—the Netherlands, ranked first; the United Kingdom, second; and Denmark, fourth. In last year’s Report, we estimated that if the United States adopted structural reforms and achieved the same level of product market regulation as the Netherlands, U.S. real GDP would be 2.2 percent higher over 10 years (CEA 2018). In the Economic Freedom of the World overall ranking, the United States is sixth, trailing Hong Kong, Singapore, New Zealand, Switzerland, and Ireland.

These cross-country comparisons also provide the basis for top-down estimates of total U.S. regulatory costs. The Congressional Research Service (Carey 2016) describes a prominent example of a top-down estimate from a report for the National Association of Manufacturers (Crain and Crain 2014). Crain and Crain use the World Economic Forum’s Executive Opinion Survey to develop a proxy measure of the amount of regulation in each of 34 OECD member countries from 2006 to 2013. (The proxy measure is not the same as the OECD product market regulation index or the other cross-country indices discussed above.) They estimate a regression model that expresses GDP per capita as a function of the regulation index and a set of control variables that capture other influences on GDP.
They find a statistically significant association between their index of low regulatory burden and GDP per capita. They also compared the U.S. score on the regulation index with the average score in the five benchmark countries with the lowest regulatory burdens. On the basis of this comparison, they estimate that if the burden of regulation in the United States were as low as in the benchmarks, U.S. GDP would be $1.4 trillion higher. This estimate forms one component of their estimate of the total regulatory costs in the United States (Crain and Crain 2014). The Congressional Research Service notes that there have been a number of criticisms of this top-down estimate of regulatory costs (Carey 2016). It would be useful for policymakers to know the impact of different broad regulatory programs on the value of goods and services that the U.S. economy can produce. Comparing the GDP per capita achieved by different countries that have taken different regulatory approaches mimics this thought experiment. In principle, the top-down approach should capture the cumulative burdens of regulatory actions.

5. As noted in chapter 8 of this Report, the OECD product market survey was limited to the State of New York, and therefore may not be representative of the rest of the country. The data show that the United States is suffering from relatively high regulatory protection of established firms, due to exemptions from antitrust laws for publicly controlled firms (OECD 2018). In addition, the OECD notes that U.S. product market regulation is more restrictive than that of other OECD economies due to the prevalence of State-level ownership of certain enterprises, particularly in the energy and transportation sectors.
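The mechanics of this top-down approach can be illustrated with a small regression sketch. The data below are synthetic, generated with a known coefficient; they are not the Crain and Crain data, and the recovered estimate carries no empirical content. The sketch only shows the form of the estimation.

```python
import numpy as np

# Illustrative sketch of the top-down approach: regress log GDP per
# capita on a regulation index plus controls. The data are synthetic,
# generated with a known coefficient, so the estimate only demonstrates
# the mechanics of the method, not any real-world magnitude.
rng = np.random.default_rng(0)
n = 200                                   # hypothetical country-year observations
reg_index = rng.uniform(0, 10, n)         # hypothetical regulation index
controls = rng.normal(size=(n, 2))        # stand-ins for, e.g., investment, education
noise = rng.normal(scale=0.05, size=n)
# Assumed data-generating process: each index point of regulatory
# burden lowers log GDP per capita by 0.03.
log_gdp = 10.0 - 0.03 * reg_index + controls @ np.array([0.2, 0.1]) + noise

X = np.column_stack([np.ones(n), reg_index, controls])
beta, *_ = np.linalg.lstsq(X, log_gdp, rcond=None)
print(round(beta[1], 3))   # estimated effect of the regulation index
```

The methodological debates noted by the Congressional Research Service concern exactly the ingredients this sketch takes for granted: whether the index validly measures regulatory burden, which controls belong in the model, and whether the association can be read causally.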
However, there are fundamental methodological challenges regarding how to measure regulatory burden across countries and whether valid causal inferences can be drawn from the estimated statistical associations. Further econometric specification issues include the selection of the dependent and independent variables and the correct functional form of the relationship between them. To sum up, total regulatory costs in the United States are difficult to estimate with precision. However, the cost estimates—which range from almost half a trillion to over a trillion dollars—are sufficiently large to justify treating deregulatory actions as a priority to help sustain U.S. economic growth. The cross-country comparisons of regulatory burdens also suggest that there is room to reduce the burden in the United States.

The Need to Level the Playing Field for Deregulation

If regulatory review worked perfectly, it might seem that deregulation would never be needed. Each deregulatory action is subjected to the same cost-benefit analysis required for new regulatory actions (OMB 2017a). Regulatory review thus requires that a deregulatory action’s benefits (the regulatory costs saved) must justify the action’s costs (the benefits forgone when the original regulatory action is modified or eliminated). The original regulatory review should have ensured that the benefits of the original regulatory action justified its costs. If the results of the original regulatory review were correct and unchanging, a deregulatory action should never be needed, and indeed should not pass regulatory review itself. However, until the use of regulatory cost caps, the regulatory process was likely to have been tilted toward the benefits of expanding the regulatory state.
Because regulatory actions address agencies’ core missions—such as protecting workers, public health, safety, and the environment—there is a natural tendency for the analyses to emphasize benefits over costs. In the past, some agencies’ regulatory analyses came across like advocacy documents written “to justify a predetermined decision, rather than to inform the decision” (Broughel 2015, 380; emphasis in the original). OMB’s OIRA regulatory review process provides a check on this tendency. In the extreme, the focus on agency-specific missions leads to tunnel vision, causing regulators to go too far in pursuing their agencies’ missions (Breyer 1993). The economic theory of regulation and public choice economics provide additional insights into the functioning of government bureaucracies. Regulatory actions can serve the interests of established firms in an industry—for example, by creating barriers that prevent the entry of new firms (Stigler 1971). Chapter 3 of this Report reviews evidence that State professional licensing requirements serve as barriers to entry rather than promoting the public interest. In addition to altruistic support for an agency’s mission, Niskanen (1971) argues that self-interested regulators pursue actions that expand the scope and size of their agency.

Several examples illustrate the possible tilt in agencies’ past analyses toward the benefits of regulatory actions over the costs.6 Dudley and Mannix (2018) criticize regulatory impact analyses (RIAs) of air-quality regulations. More generally, Dudley and Mannix (2018, 9) argue that agencies do not appear to search for benefits and costs objectively but instead focus on benefits and “quantify or list every conceivable good thing that they can attribute to a decision to issue new regulations.” Gayer and Viscusi (2016) provide a detailed discussion of the controversial question of whether Federal agencies should measure the benefits of climate change policies from a domestic or global perspective.
The “Circular A-4” guidance document (OMB 2003) instructs Federal agencies to focus on regulatory benefits and costs to citizens and residents of the United States. When a regulatory action has effects beyond the borders of the United States, agencies are told to report those effects separately (OMB 2003). However, previous analyses have compared the global benefits of major environmental regulatory actions with domestic compliance costs. For example, the EPA estimated that the proposed Clean Power Plan would yield global climate benefits in 2030 worth $30 billion (in 2011 dollars) (79 FR 67406). Gayer and Viscusi (2016) find that this estimate falls to $2.1 billion to $6.9 billion (in 2011 dollars) when only domestic climate benefits are counted. In contrast, Pizer and others (2014) argued that the global perspective is appropriate, given the distinctive nature of the climate change problem and the need for global solutions.

6. The tilt toward benefits does not hold across the board. For example, the Department of Homeland Security’s RIAs are often unable to quantify the benefits of safety rules that address high-consequence/low-probability events. However, the lack of quantified benefits does not necessarily avoid, and might even exacerbate, the tilt toward benefits. Under Executive Order 12866, when benefits and/or costs are unquantified, RIAs discuss whether the benefits of a regulatory action “justify” the costs. The subjective judgment about whether unquantified benefits justify the costs might allow more room for an intentional or unintentional tilt toward benefits.

Whether intentionally or not, other analyses have downplayed costs. For example, a regulatory analysis concluded that a 2016 rule that placed limits on consumers’ options to purchase short-term health insurance would have no effect on the majority of consumers who purchased such coverage, but it did not provide quantified evidence for this conclusion.
In 2018, an analysis of a deregulatory reform of the 2016 rule discussed the potential for regulatory cost savings and concluded that the deregulatory action was likely to be economically significant and have an annual impact of over $100 million. The Congressional Budget Office (CBO 2018) projected that the 2018 deregulatory reform will lead to 2 million additional enrollees in short-term insurance. The 2018 deregulatory action did more than just remove the 2016 rule’s restrictions. There is also uncertainty about the effects of the 2016 regulatory action and the 2018 deregulatory action. Despite these caveats, however, it is hard to reconcile the finding that the 2016 rule was not economically significant with the CBO’s projections and with further analysis, which estimated that the 2018 deregulation of the short-term health insurance market will provide cost savings worth $7.3 billion in 2021 (CEA 2019).

A body of research compares the results of agencies’ prospective regulatory analyses conducted before the rules were issued with the results of retrospective analyses conducted afterward (Harrington, Morgenstern, and Nelson 2000; OMB 2005; Morgenstern 2018). These comparisons of prospective and retrospective analyses have focused on the accuracy of the original estimates. However, the prospective/retrospective comparisons do not address the problem that important categories of costs were omitted entirely in the original analysis (Harrington, Morgenstern, and Nelson 2000). Moreover, the prospective/retrospective comparisons do not shed light on the magnitude of the omitted costs or how including them might have changed the results of the prospective analyses. Whether intentionally or not, omitting important categories of costs will result in systematic underestimation of costs. Regulatory analyses typically focus on compliance costs, which are the most obvious source of opportunity costs.
For example, Belfield, Bowden, and Rodriguez (2018) reviewed 28 RIAs of education regulatory actions from 2006 to 2015. They found that the education RIAs only calculated the paperwork costs of documenting compliance with regulatory actions—what Belfield, Bowden, and Rodriguez call the administrative compliance costs. Opportunity costs include, but are not limited to, administrative and other compliance costs. When a firm hires workers and purchases new capital equipment to comply with a regulatory action, for example, society gives up the value of the other goods and services that those workers and that capital could have produced. Aggregate paperwork costs of regulation are substantial; if the 9.8 billion hours devoted to regulatory paperwork in 2015 had instead been used by employees to create output equal to their average hourly earnings, the value would total $245.1 billion, an amount equal to 1.35 percent of that year’s GDP (CEA 2018). But other sources of opportunity costs can be more subtle and difficult to see (see box 2-6). For example, when the intended or unintended consequence of a regulatory action is to prevent a purchase, the action prevents a mutually beneficial exchange. The buyer’s potential gain is measured by the consumer’s surplus—the difference between the maximum the consumer is willing to pay and the amount actually paid. The seller’s potential gain is measured by the producer’s surplus—the difference between the minimum the producer is willing to accept and the amount actually received. The losses of consumer and producer surpluses are part, and potentially a large part, of a regulatory action’s opportunity costs. Federal agencies’ analyses do not always measure consumer and producer surpluses because doing so would require estimates of the elasticities of demand and supply.
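To make the surplus concepts concrete, consider a toy linear market in which a regulation caps the quantity traded. All parameters below are hypothetical, chosen for arithmetic clarity rather than drawn from any agency analysis.

```python
# Toy linear-market illustration of the surplus lost when a regulation
# blocks trades. Inverse demand is p = a - b*q and inverse supply is
# p = c + d*q; all parameter values are hypothetical.
def total_surplus(a, b, c, d, q):
    """Combined consumer + producer surplus when q units trade:
    the area between the demand and supply curves up to q."""
    return (a - c) * q - 0.5 * (b + d) * q * q

a, b, c, d = 10.0, 1.0, 2.0, 1.0       # hypothetical demand and supply curves
q_star = (a - c) / (b + d)             # unconstrained equilibrium: 4 units trade
cap = 2.0                              # a regulation limits trades to 2 units

lost = total_surplus(a, b, c, d, q_star) - total_surplus(a, b, c, d, cap)
print(q_star, lost)   # equilibrium quantity, and the forgone surplus
```

In this example the cap destroys a quarter of the market's total surplus, and none of that loss appears as a compliance cost; it is exactly the kind of unseen opportunity cost the text describes.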
OMB (2000, 13) argues that estimating consumer and producer surpluses “requires data that [are] usually not easily obtained and assumptions that are at best only educated guesses.” The difficulty of measuring opportunity costs has often been discussed in subsequent OMB Reports, although different Administrations have given it different emphases (Fraas and Morgenstern 2014).

Even without a preexisting tilt toward the benefits of the regulatory state, there are several other reasons deregulation will be needed and can lead to regulatory cost savings that more than offset the forgone benefits of the original regulatory action. First, in a dynamic economy, new products and technological developments will often require new approaches. For example, as the drone industry took off, the Federal Aviation Administration amended its rules to allow small, unmanned aircraft systems in airspace, and it changed the certification requirements for drones’ remote pilots (81 FR 42063). Small, unmanned aircraft do not raise the same safety concerns as manned aircraft. The flight training and other requirements for pilots of manned aircraft imposed high regulatory costs and created few benefits when applied to pilots of drones. The development of automated vehicles poses similar challenges for the Department of Transportation (DOT). As a first step, DOT now interprets the definitions of “driver” and “operator” as not referring exclusively to a human but also to an automated system. And DOT (2018) encourages the developers of automated driving systems to adopt voluntary technical standards as an effective nonregulatory approach. Second, new information can emerge that requires the reevaluation of regulatory actions. For example, after the Food and Drug Administration (FDA) issued a rule to implement the Food Labeling Act, companies and trade associations told the FDA about the difficulty of updating labels within the required time frame.
The industry’s concerns included the need for new software, the need to obtain additional nutritional information about its products, and the possible need to reformulate its products. In a deregulatory action, the FDA extended the compliance date by 1.5 years (83 FR 19619). The cost savings from the delay offset the benefits forgone because of the delay. The extension of the compliance dates does not prevent companies from revising their labels sooner, and data show that over 29,000 products have already adopted the new Nutrition Facts label (83 FR 19619). The extension reduces compliance costs while still promoting public health by helping consumers make better decisions about their food choices.

Box 2-6. Opportunity Costs, Ride Sharing, and What Is Not Seen

The opportunity cost of a regulatory action is the value of the activities forgone because of the action. In a classic essay, the 19th-century French economist Frédéric Bastiat argued that taking into account not only that which is seen but also that which is not seen is the difference between a “good economist” and a “bad economist.” His parable of the broken window is an example of opportunity costs. The “bad economist” concludes that the broken window is good for the economy; when the shopkeeper pays the glazier to repair the window, it encourages the glazier’s trade. But the “good economist” recognizes that which is not seen; because the window needs to be repaired, the shopkeeper loses the enjoyment from the forgone opportunities to make other purchases. Likewise, in addition to the more easily seen compliance costs, regulatory actions often involve substantial opportunity costs. Measuring the opportunity costs of regulatory actions can be difficult; they are not easily seen.

The development of ride sharing provides an example where the opportunity costs of regulating the taxi industry can be estimated. Most major U.S. cities restrict entry into the taxi industry. A typical regulatory approach is to require taxi medallions, which are transferable permits required to operate a taxi (Cetin and Deakin 2017). The restriction on entry drove up the price of taxi rides and created monopoly profits for the owners of medallions, which could be worth hundreds of thousands of dollars. Ride-sharing services such as Uber and Lyft provide a close substitute for the services provided by taxis. The competition from ride-sharing services eroded medallion holders’ market power and led to sharp decreases in medallion prices. Cohen and others (2016) analyze data on almost 50 million individual-level observations of users of the UberX service. The researchers exploit the richness of the data to estimate that in 2015, the UberX service generated about $2.9 billion in consumer surplus in the four cities studied. The gain in consumer surplus from UberX sheds light on the opportunity costs of the cities’ regulation of taxis. The “bad economist” might conclude that restricting the number of taxis was good for the economy because it increased taxi owners’ profits. But the “good economist” recognizes that which was not seen: the value consumers gain when ride-sharing services compete with taxis.

Reviews of deregulatory actions should attempt to account for as much of the opportunity costs as possible. Current guidance already stresses the importance of measuring opportunity costs (OMB 2003). Economic theory sometimes does not provide a simple formula to extrapolate the unseen opportunity costs from more easily observed regulatory compliance costs. Nevertheless, careful analysis and consideration of the likely consequences of regulatory actions will shed light on the opportunity cost savings that are possible from deregulatory actions. Public input into the deregulatory process is likely to be helpful in this exercise.
Another example of an agency using new information to reduce regulatory costs is the Federal Aviation Administration’s (FAA) revision of a rule to allow ground tests of helicopters to demonstrate compliance for night operations. The FAA’s airworthiness standards for helicopters require that each pilot compartment must be free of glare and reflection when operated at night. In the past, this aspect of airworthiness was evaluated by night flight tests, which cost, on average, about $37,000. The FAA determined that ground tests are equally effective in demonstrating compliance and cost only about $4,400 per test. The compliance cost savings for the entire industry were estimated to be about $277 million (in present value).

The Cumulative Economic Impact of Regulation

This section discusses the cumulative economic impact of regulation. First, we explore how the effects of regulation are transmitted through markets. Then we describe the cumulative regulatory burden—both its basic aspects and its costs along the supply chain.

The Effects of Regulation Are Transmitted through Markets

Even when the costs of regulations of public goods and regulations to enhance productivity might appear to fall primarily on a single industry, it is important to interpret productivity broadly for the economy as a whole, because the industry’s costs affect the movement of capital and labor between the regulated industry and the rest of the economy. Take the occupation of barbers. Their production technology—scissors, chair, sink, and a shop—has hardly changed in a century, even though their inflation-adjusted wages have grown by about a factor of five (Mulligan 2015b). The development of safety and disposable razors has helped consumers substitute toward the home production of shaves, but not haircuts.
Nevertheless, barbers’ wages grew even while their technology was static, mainly for two simple reasons: (1) productivity grew in farming, manufacturing, information, and many other industries; and (2) barbers have a choice of occupation and industry, which means that either the wages of barbering keep up with the rest of the economy or barbering disappears as an occupation. Barbering is not special; every occupation has its wage determined by productivity elsewhere in the economy. Wages in any occupation and industry are determined by the industry’s supply of and demand for labor, which in turn are determined by productivity elsewhere in the economy. For example, the labor supply of barbers reflects their productivity in their next best alternative occupation and industry. Given the intimate relation between regulation and productivity, regulation will therefore usually have a significant economic impact beyond the regulated industry.

A regulated industry with price-elastic customer demand yields the more obvious result that higher costs increase the prices charged to customers, reduce production, and reduce industry employment and revenue. But the results of regulatory actions are less obvious to the extent that industries like barbering face relatively price-inelastic customer demand. Consider a perfectly competitive market with constant unit costs. When a regulatory action drives up the unit cost of production, the change in industry revenues equals the change in consumer expenditures on the product. Given relatively inelastic demand, the increase in unit cost results in higher consumer expenditures and higher revenues. The higher revenues cover the costs of production and regulatory compliance. As Becker, Murphy, and Grossman (2006) pointed out, the paradoxical result is that more capital and labor are drawn into the industry, even though production and consumption are reduced.
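The Becker, Murphy, and Grossman result can be checked with a constant-elasticity demand curve q = A·p^(−ε): with ε < 1 and competitive pricing at unit cost, a regulatory cost increase raises industry revenue even as output falls. The parameter values below are hypothetical:

```python
# Sketch: a regulatory cost increase under inelastic demand. Demand is
# q = A * p**(-eps); with perfect competition and constant unit costs,
# price equals unit cost. Parameter values are hypothetical.

def industry(p, A=100.0, eps=0.5):
    """Return (quantity, revenue) at price p for demand elasticity eps."""
    q = A * p ** (-eps)
    return q, p * q

q0, rev0 = industry(p=1.0)   # before the regulation
q1, rev1 = industry(p=1.2)   # unit cost (and price) up 20 percent

# With eps < 1, output falls but revenue -- and hence the capital and
# labor the industry's receipts can pay for -- rises.
print(q1 < q0, rev1 > rev0)  # True True
```

Repeating the comparison with ε > 1 reverses the revenue result, which is why the direction of the effect hinges on the demand elasticity.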
In other words, the opportunity costs in the price-inelastic case accrue primarily outside the industry, because the rest of the economy must produce with less capital and labor.

Policymakers sometimes emphasize the potential impact of regulatory actions on jobs (i.e., the use of labor). Under Executive Order 12866, one of the criteria for when a cost-benefit analysis is required is whether the action is likely to have a material effect on jobs. Under Executive Order 13777, agencies’ Regulatory Reform Task Forces are required to prioritize repealing or modifying regulatory actions that eliminate jobs or inhibit job creation. However, for the reasons explained above, in some industries it can be misleading, in both magnitude and direction, to assess the benefits of a regulatory action solely on the basis of the jobs created or destroyed in the regulated industry (a strength of the current practice of regulatory cost-benefit analysis is that agencies do not assess benefits this way). Our framework also emphasizes that it is important to consider the effects—including job effects—outside the regulated industry.

Regulatory actions that affect the degree of competition in an industry are also cases in which productivity needs to be understood broadly. For example, a monopoly withholds production, and therefore its use of capital and labor, in order to extract higher prices from its customers. Its capital and labor are used elsewhere, where they are less productive, or are not used at all. Either way, the result of a monopoly is—all else being the same—less total output, and the result of competition is more total output.
We examine these effects in an economic framework similar to that of Goulder and Williams’s (2003) model of excise taxes, where the primary difference is that an excise tax delivers its revenue to the public treasury, whereas regulatory action may use up the revenue in less efficient production.7 The framework has a composite commodity reflecting the value produced by the economy’s many industries combined. The industries use capital and labor with efficiency that depends on regulation, either because regulation discourages certain types of production or, as with the monopoly example, because it distorts the interindustry composition of production. In the aggregate, therefore, both regulations of public goods and regulations to enhance productivity have many of the same consequences as aggregate “productivity shocks,” which have been studied extensively in economics (Gray 1987; Crafts 2006).

The Cumulative Burden I: Within Industry

According to a former OIRA Administrator, “Cumulative burdens may have been the most common complaint that I heard during my time in government. Why, people asked, are agencies unable to coordinate with one another, or to simplify their own overlapping requirements, or to work together with State and local government, so that we do not have to do the same thing two, five, or ten times?” (Sunstein 2014, 588). The NFIB’s surveys of the owners of small businesses confirm the OIRA Administrator’s experience. The NFIB conducted regulation-specific surveys in 2001, 2012, and 2017. When asked which best describes the source of their regulatory problems, the majority of small business owners consistently responded that it was the total volume of regulations coming from many government agencies, as opposed to a few specific regulations coming from one or two agencies (in this question, respondents were not asked to identify specific regulations).
In 2001, 50 percent of respondents identified the volume of regulations as the problem, compared with 47 percent of respondents identifying specific regulations. In 2012, the number of respondents citing the problem of the volume of regulations jumped to 62 percent. In 2017, this number dropped to 55 percent.

This subsection analyzes how businesses experience cumulative burdens, and it uses the economics of convex deadweight costs and supply chains to assess these burdens and show how they have sometimes been neglected in cost-benefit analyses. Figure 2-5 begins to illustrate cumulative burdens by focusing on a specific “regulated” industry that, like any other industry, has a downward-sloping factor demand curve that reflects the diminishing marginal value of what that industry produces. Factors refer to the labor, capital, and materials used in the production process. The industry factor supply curve represents the marginal opportunity costs: holding constant the total factors of production, when more factors are used to produce in the regulated industry, fewer are available to produce in the other industries.

Figure 2-5. Distorted Allocation of Resources Among Industries. (The figure plots marginal value product and marginal opportunity cost against factor usage in the regulated industry, showing the industry factor demand curve without regulation, the curve with one regulation, a second regulation, and areas labeled A, C1, C2, D1, D2, and E. Note: IFD = industry factor demand.)

7. It is also possible that part of the “revenue” associated with a regulation’s distortions goes to some of the market participants. A monopoly is an example where the industry output price is distorted and the revenue from that “tax” takes the form of a monopoly rent (Harberger 1954).
The value of production combined across all industries is maximized when the regulated industry is producing a quantity exactly at the intersection of the two curves shown in figure 2-5, where the marginal value product of the factors of production is equalized between industries. For the sake of illustration, we consider first one regulatory action, and then later add a second regulatory action that has the same-size impact on factor usage in the regulated industry. Each action reduces the degree of competition in the regulated industry, for example by adding legal or technological barriers to entering the industry. As noted above for the case of monopoly, a less competitive industry has less factor demand and therefore uses less of the factors of production.

The first regulatory action therefore reduces the value of the regulated industry’s output by the combination of areas A, C1, and C2. As a result of the reduction in the regulated industry’s output, factors of production shift to other industries. Areas C1 and C2 represent the resulting gain in output value in the other industries. The value of the output loss combined across all industries is the triangular area A shown in figure 2-5. Because it is assumed for the moment that an important effect of the regulation is reduced competition, as emphasized at the end of this chapter with the joint employer standard, the part of the output represented by combined areas E and D1 is retained by the industry’s producers as economic profit rather than competitive factor incomes (i.e., competitive payments to labor and capital). For other regulatory actions, such as the two health insurance regulatory actions examined at the end of this chapter, areas E and D1 are output losses rather than a transfer of income.8

In this chapter, we use the public finance concept of deadweight cost to describe the cumulative effects of regulation.
If a regulatory action depresses an industry’s resource usage by 1 percent, the lost transactions are likely those that were creating the least surplus, which is why these transactions disappear with just one regulatory action. But when the second regulatory action comes along, the transactions of least value are already gone, so the next 1 percent depression of the industry must eliminate relatively higher-value transactions than the first 1 percent did. This is shown in figure 2-5; even though the first and second actions have the same-size impact on factor usage in the regulated industry, the second action has a larger cost in terms of aggregate output. That is, combined areas D1 and D2, which show the incremental cost of the second regulatory action in terms of aggregate output, are greater than area A, which is the corresponding cost of the first regulatory action. The field of economics usually refers to such costs as “convex,” because doubling the regulatory action more than doubles the costs of regulation. The other side of the coin is that assessing the incremental costs of regulation requires an estimate of how much the industry has already been distorted.

In addition to showing how regulatory costs accumulate, figure 2-5 also shows why a regulatory action’s effect on industry employment is not entirely a cost. Note that the value of the regulated industry’s output is the area under the “without regulation” factor demand curve up to the equilibrium factor usage for the industry. The impact of a regulatory action on the value of the regulated industry’s output is therefore the impact on that area due to the change in the amount of factor usage. Areas C1 and C2 therefore capture the value created by labor and capital that switch to other industries, which admittedly is less than the combined values A, C1, and C2 that they would have created in the regulated industry.
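The convexity point can be sketched with the same Harberger-triangle algebra used above: with linear demand and supply, the deadweight loss of contracting an industry grows with the square of the cumulative contraction, so equal-sized regulations have rising incremental costs. The slope parameter below is hypothetical:

```python
# Sketch: convex deadweight costs. With linear demand and supply, the
# deadweight loss of contracting an industry by x units of factor usage
# is a triangle proportional to x**2, so a second regulation of the same
# size costs more than the first. The slope sum is hypothetical.

def deadweight(x, slope_sum=2.0):
    """Triangle between demand and supply: 0.5 * (b + d) * x**2."""
    return 0.5 * slope_sum * x ** 2

first = deadweight(10)                    # cost of the first action (area A)
second = deadweight(20) - deadweight(10)  # incremental cost (areas D1 + D2)
print(first, second)  # 100.0 300.0: the second action costs 3x the first
```

Note that doubling the contraction (from 10 to 20 units) quadruples the total cost (from 100 to 400), illustrating the text’s observation that doubling the regulatory action more than doubles its cost, and that the incremental cost of a new action depends on how distorted the industry already is.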
To the extent that the regulatory action causes capital and labor to cease employment entirely, we need to look at the aggregate factor markets, as we do in the next subsection.

The Cumulative Burden II: Costs along the Supply Chain

The interindustry cumulative cost shown in figure 2-5 is commonly considered in traditional cost-benefit analyses, but it is incomplete because the typical industry is surrounded by public policy distortions. The labor and capital used in the regulated industry, and elsewhere, are taxed. We show the accumulation of taxes and regulatory actions in figure 2-6, which shows the aggregate, economy-wide, long-run supply and demand curves for capital. For the same reason that the area under the “without regulation” industry demand curve in figure 2-5 is the value of industry output, the area under the corresponding demand curve in figure 2-6 is the value of long-run aggregate output. The aggregate capital demand curve is the sum of the capital demands of the regulated and other industries, and therefore the regulatory action shifts it down according to the regulated-industry shift shown in figure 2-5.9 Figure 2-6 shows how the industry-specific regulation has a second effect on output by reducing the aggregate amount of capital in the economy.

8. Even when the two areas reflect a lack of competition, they may ultimately prove to be output losses to the extent that market participants use their capital and labor in order to increase their share of the economic profits at the expense of others (Tullock 1967; Dougan and Snyder 1993). When the two areas reflect an output loss, it is possible that the industry factor demand curve is rotated counterclockwise (Mulligan and Tsui 2016), rather than shifted down as shown in figure 2-5, which corresponds to the case in which the final demand for the regulated industry’s output is locally price elastic.
The amount of output lost due to less capital is equal to combined areas B1, B2, and F. As discussed below, output is reduced still further, to the extent that the regulatory action shifts down the aggregate labor demand curve and thus reduces the aggregate amount of labor. Output is not necessarily the same as welfare, because increasing output with additional labor and capital comes at the cost of supplying the additional inputs—for example, the cost of delaying consumption in order to build up the capital stock. If the aggregate capital supply curve fully reflected the marginal cost of capital, then the only social loss to be found in figures 2-5 and 2-6 would be area A, representing the net loss of value created in the regulated and other industries. However, to the extent that the supply of capital is itself distorted—say, due to taxes on capital income—the marginal cost curve for capital is below the supply curve as drawn in figure 2-6, by a proportion equal to the tax rate for capital income.10 The overall cost of a regulatory action therefore includes not only area A in figure 2-5 but also the deadweight cost associated with the reduction in capital, shown as combined areas B1 and B2 in figure 2-6.

Figure 2-6. How Industry Regulation Affects the Aggregate Factor Market. (The figure plots the capital rental rate against the aggregate quantity of capital, showing the supply of capital, the marginal opportunity cost, and the aggregate capital demand curve without and with regulation, with areas labeled B1, B2, and F. Note: ACD = aggregate capital demand.)

As found by Goulder and Williams (2003), the deadweight cost and loss of output associated with reduced aggregate factor usage often significantly exceeds the loss of output that comes from distorting the composition of activity among industries.11 A regulatory cost imposed on a specific industry can add substantial excess burdens to the capital and labor markets. The case studies at the end of this chapter are such examples. The aggregate labor market has a diagram analogous to figure 2-6, with the gap between labor supply and marginal labor cost due to taxes on labor income and other distortions of the labor market. In a small, open economy where wages are primarily determined in international markets, the picture would be quite similar, including a horizontal supply curve and a horizontal opportunity cost curve. Otherwise, we would draw the labor supply curve sloping upward and would also shift it due to the income effects of the productivity change (Ballard and Fullerton 1992). Either way, the labor market has an additional factor cost, analogous to figure 2-6’s areas B1 and B2.

9. The quantitative relationship between combined areas A, D1, and E in figure 2-5 and combined areas B1 and B2 in figure 2-6 depends on what is also happening in the aggregate labor market, which for brevity is not shown in this chapter, and the degree to which economic profits are created or destroyed by the regulatory action. A regulation that affected only consumer goods industries and had no effect on economic profits might not shift the demand curve in figure 2-6.

10. If capital were subsidized, then the marginal cost curve would be above the supply curve. In macroeconomics, the opportunity cost of capital is often referred to as the “rate of time preference” or the “rate of impatience,” reflecting the fact that the opportunity cost of capital in the future is less consumption in the present (Romer 2011; Fisher 1930). If the regulation were increasing the value of output rather than decreasing it, then area A in figure 2-5 would be negative (an increase in productivity), so that the regulation increases the use of capital, and areas B1 and B2 in figure 2-6 would be an additional benefit.
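As a rough back-of-the-envelope, the factor-market excess burden (areas B1 and B2) can be approximated as the preexisting tax wedge times the regulation-induced fall in the factor's use. This is a rectangle approximation, a sketch rather than the chapter’s formal framework, and every number below is hypothetical:

```python
# Sketch: excess burden in the factor market. With a preexisting tax on
# capital income, the marginal cost of capital lies below the supply
# curve by the tax wedge, so a regulation-induced drop in the capital
# stock destroys roughly (tax wedge) x (lost capital) of value -- a
# rectangle approximation of areas B1 + B2. All numbers are hypothetical.

def factor_market_excess_burden(rental_rate, tax_rate, capital_drop):
    wedge = rental_rate * tax_rate  # gap between supply and marginal cost
    return wedge * capital_drop

loss = factor_market_excess_burden(rental_rate=0.10, tax_rate=0.25,
                                   capital_drop=1_000.0)
print(loss)
```

A related shortcut, described in footnote 13 below, is the CEA (2019) practice of rescaling the industry-specific deadweight loss by a factor of 1.5 to fold in these factor-market costs.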
Moreover, to the extent that labor and capital are complementary production factors and regulatory action reduces their aggregate employment, there are further reductions in the aggregate demand for capital and labor and therefore further reductions in aggregate output and aggregate surplus. Although not shown in figure 2-6, another possible effect of regulating an industry is to shift up the supply curves for capital and labor. For example, suppose the regulated industry has its capital taxed at lower rates than other industries. Then the cost-benefit analysis would commonly recognize that additional capital tax revenue is a benefit of a regulation that induces capital to move out of the industry. But we must also count the costs associated with the reduced aggregate supply of capital due to the fact that the regulation raises the average marginal tax rate on capital. Those costs include lower wages (resulting from less capital investment) and a loss of capital tax revenue that potentially offsets the revenue gain reflected in the usual analysis.12

The cumulative cost of regulation can nonetheless be estimated in practice, primarily with information from the regulated industry. Specifically, only information from the regulated industry is required to estimate lost factor incomes A, E, and D1, which are the result of the regulatory actions holding constant the aggregate amounts of labor and capital.

11. Goulder and Williams (2003) examined excise taxes rather than regulations, but the aggregate analysis is the same, as long as figure 2-5’s areas D1 and E are a transfer rather than an aggregate output loss.
Because areas B1 and B2 (and their analogues in the labor market) are the result of the lost factor incomes shown in figure 2-6, their magnitude can be included by rescaling the industry-specific effects according to the “marginal deadweight cost of government revenue,” as estimated in the field of public economics (Feldstein 1999; Saez, Slemrod, and Giertz 2012; Weber 2014).13

The additional factor costs of regulation have different implications for cost-benefit analyses, depending on whether a regulatory action is a regulation of public goods or a regulation to enhance productivity. The additional factor costs are associated only with industries that produce private goods using the factors of production and experience a net cost from the regulatory action. Take, for example, a regulation of a public good that improves environmental quality at the expense of reduced manufacturing output. Figure 2-5’s area A measures costs (associated with a reduced value of production in manufacturing and the other industries) but not necessarily net costs, because it does not include the environmental benefit. Area A generates additional factor costs, such as those shown by areas B1 and B2 in figure 2-6, because capital and labor are used in the production of private goods. There is no additional factor cost (or benefit) associated with the environmental benefit because that is a public good. In other words, recognizing the additional factor costs can change the sign of the net benefit of regulations of public goods because they are associated with the output losses but not the environmental benefits. Regulations to enhance productivity are different in this regard because their costs and benefits both accrue in industries that are producing private goods with the factors of production.
In this case, if figure 2-5’s areas A, D1, and E measure net costs, then areas B1 and B2 in figure 2-6 cannot change the sign of the net cost; they only increase its magnitude.14 To be more general, we note that to the extent a public good contributes to private production, some regulatory actions will be combinations of regulations of public goods and regulations to enhance productivity.

Our application of Goulder and Williams’s (2003) framework has a rather simple supply chain where final goods markets (“the industries”) draw directly from capital and labor markets, so that the cumulative cost of regulation is simply the combination of costs in final goods markets (represented in figure 2-5) and costs in factor markets (represented in figure 2-6). But in reality, multiple industries can be situated in a vertical supply chain, in which case there would be more than two sets of costs to consider. The cumulative costs can be especially large when individual industries in the chain pass on their costs more than one for one, a result known in the industrial organization field as “double marginalization” (Tirole 1988). The specification of the joint employer standard, discussed at the end of this chapter, is an example of how the Trump Administration’s deregulatory actions have improved efficiency along supply chains.

12. Another example is the proposal to shift health insurance from employers to the individual market, where taxation is greater. The shift has a benefit reflected in the additional tax revenue (Gruber 2011), but the shift also reduces the aggregate supply of labor because, holding tax policy constant, it raises the average marginal tax rate on work.

13. The CEA (2019) followed this practice in its analysis of health insurance deregulatory actions, taking the rescaling factor to be 1.5: for every $1 of deadweight loss in the health insurance industry, it added another 50 cents of factor market distortion costs.
Lessons Learned: Strengthening the Economic Analysis of Deregulation

This section considers lessons learned about strengthening the economic analysis of deregulation. First, it looks at how to diagnose market failure. Second, it describes the costs of regulatory actions that are correct on average. Third, it explores examples of the excess burdens of regulatory actions. Fourth, it looks at the burdens of nudge regulatory actions. Fifth, it describes how to expand the use of regulatory impact analysis.

Diagnosing Market Failure

Regulatory review should be careful not to overdiagnose market failure. The first step in a regulatory cost-benefit analysis is to identify the problem the action is intended to address: a market failure or other social purpose, such as promoting privacy and personal freedom (OMB 2003). In many circumstances, competitive markets tend to successfully guide the use of society’s resources to their highest value. In economic terminology, markets fail when resources are not achieving their most highly valued use. A typical regulatory impact analysis should compare the benefits of correcting a market failure with the opportunity costs of the regulatory action. For example, an environmental regulatory action might address the market failure created by the negative externalities when a manufacturing plant pollutes the air. Other market failures include a lack of market competition, inadequate consumer information, and asymmetric information between consumers and producers.

14. For the purposes of the analysis in this chapter, the CEA assumes that the various industries affected by regulation are equally substitutable or complementary with the supplies of capital and labor. This assumption could be relaxed by examining the more general framework of Goulder and Williams (2003).
Because the cumulative regulatory burden is large, the burden of proof when diagnosing market failures should be high. The possibility of a market failure does not by itself mean that a Federal regulatory action is appropriate. Regulatory actions are costly and, like markets, government bureaucracies are imperfect (Kahn 1979). Federal regulatory actions are more likely to be appropriate when they correct market failures that result in large misallocations of resources. OMB (2003) guidance for RIAs tells Federal agencies to focus on significant market failures and, when feasible, to describe the market failure quantitatively. The burden of proof should be high because a claim that there is a market failure must mean that something blocks mutually beneficial exchanges from taking place. In the example given above of a polluting plant, the potential exchanges are between the public, which values cleaner air, and the manufacturer, which could take costly steps to reduce air pollution (and the consumers of the product that is now more expensive). Minor instances in which markets do not work perfectly should not lead to the diagnosis of a significant market failure.

In situations where exchanges fail to take place, the Nobel laureate Ronald Coase (1960) identified the lack of clearly defined property rights and transaction costs as the root causes of market failure. All markets face transaction costs, so the question is not whether there is a market failure, but whether the transaction costs are a major barrier that prevents many beneficial exchanges (Zerbe and McCurdy 1999). In the polluting plant example, it is reasonable to expect that high transaction costs create a significant market failure. However, in other cases the potential market failure can be less clear. For example, indoor air pollution from secondhand cigarette smoke might seem to fit the definition of a market failure caused by an externality.
But because the ownership of the airspace within their properties was both established and relatively easy to police, many hotel chains and some restaurant chains enacted smoking bans long before State or local laws required them to (Institute of Medicine 2009). In spite of some transaction costs—enforcement of the bans within their airspace—these voluntary bans were market successes. Hotel and restaurant owners could increase their profits by guaranteeing more valuable, clean air unpolluted by cigar and cigarette smokers to their nonsmoking customers who were willing to pay for access to it. However, voluntary bans might not go far enough to meet all social goals. In cases like this, a careful empirical analysis is required to determine the quantitative significance of the market failures that may remain.15

15. As long as all parties (consumers, workers, and so on) can make voluntary transactions, it might be profit- and welfare-maximizing to allow smoking in certain establishments.

The Costs of Regulatory Actions That Are Correct on Average

In a market economy that is too complex for any regulator or scholar to fully understand, regulators are bound to make mistakes. Decades ago, Friedrich Hayek (1945, 524) insisted that centralized economic planning is impossible, even when regulators have access to much statistical information about the economy, because statistical information “by its nature cannot take direct account of these circumstances of time and place, and that the central planner will have to find some way or other in which the decisions depending on them can be left to the ‘man on the spot.’” At best, central planning is highly imperfect, and, as we illustrate with some important examples below, closely watched attempts to fine-tune industries with regulation have suffered costly failures.
Remarking on the deregulation of the airline industry, Kahn (1979, 1) observed that “the prime obstacle to efficiency has been regulation itself, and the most creative thing a regulator can do is remove his (and her) body from the market entryway.” One reason why Executive Order 13771 places great importance on receiving public input on proposed regulatory actions is that the households and businesses that will be burdened with the costs—Hayek’s “man on the spot”—are in a better position to identify them.

The convex deadweight cost approach also complements Hayek’s observation that planning is highly imperfect. Once we acknowledge that regulation involves errors of magnitude or even direction, the convexity of the costs means that optimal regulation is necessarily cautious: the benefit of pushing the market one unit in the direction of efficiency is less than the cost of (accidentally) pushing the market one unit in the direction of inefficiency. Regulation that is correct on average can nonetheless have a negative expected net benefit.16

Consider figure 2-5 again. The regulator identifies, say, an environmental benefit that justifies imposing a productivity cost equal to area A. This benefit would be obtained by contracting the industry by the increment shown in figure 2-5. If the regulator were perfect and the industry were contracted by that amount, the actual cost, A, would be equal to the environmental benefit. But if the regulator were imperfect—say, with a 50 percent chance of contracting the industry by twice as much and a 50 percent chance of not contracting it at all—the expected cost of the regulatory action would be (A + D1 + D2)/2, which is greater than A because of the convex deadweight costs discussed above. This example shows how regulation would have costs equal to benefits when the regulation is exact, but expected costs exceeding benefits when the regulatory action is correct only on average. When acknowledging that the effects of regulation are uncertain, it follows that the best estimate from a decision perspective is one that is pessimistic as to net benefits relative to the statistical expectation (Hansen and Sargent 2008).

16. For a more extensive analysis, see Mulligan (2015a). Milton Friedman (1953) makes a related argument for cautious monetary policy. The Friedman model has macroeconomic variance as the cost rather than deadweight costs, but delivers a similar conclusion—that even monetary policy that leans against the wind on average can nonetheless make the business cycle more volatile—because variance is also a convex function.

Examples of the Excess Burdens of Regulatory Actions

Regulatory reviews of deregulatory actions should routinely account for the excess burdens of regulation. Accounting for excess burdens is consistent with current guidance, but it appears to be uncommon. Current guidance for regulatory reviews stresses the need to look beyond the direct costs of a regulatory action and to examine “countervailing risks,” which are defined to include “an adverse economic . . . consequence that occurs due to a rule and is not already accounted for in the direct cost of the rule” (OMB 2003). The excess burdens of regulatory actions in other markets, such as the capital market shown in figure 2-6, fit the definition of an adverse economic consequence.

One lesson from research on taxation is that the excess burden depends on the existence and levels of preexisting distortions—taxes and subsidy programs—in the economy. A standard example from taxation is the excess burden of a new tax on a certain good (e.g., restaurant food) when there is a preexisting tax on a good that consumers see as a complement (e.g., gasoline used to drive to the restaurant). The new restaurant tax further reduces gasoline sales and magnifies the distortion created by the preexisting gasoline tax.
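The arithmetic behind the (A + D1 + D2)/2 example reflects Jensen's inequality: with convex deadweight costs, the expected cost of an error-prone regulator exceeds the cost of its intended action. The sketch below illustrates this with a quadratic cost function, an assumption chosen only for concreteness:

```python
# Illustrative sketch: with convex (here quadratic) deadweight costs,
# a regulator that is correct *on average* still has expected costs
# above the cost of the intended contraction (Jensen's inequality).
# The quadratic form is an assumption for illustration only.

def deadweight_cost(contraction):
    """Deadweight cost of contracting the industry by `contraction` units."""
    return contraction ** 2  # convex in the size of the contraction

intended = 1.0                       # contraction that equates cost and benefit
benefit = deadweight_cost(intended)  # regulator sets benefit = cost at that point

# Imperfect regulator: 50% chance of contracting twice as much,
# 50% chance of not contracting at all (correct on average).
expected_cost = 0.5 * deadweight_cost(2 * intended) + 0.5 * deadweight_cost(0.0)

print(benefit)        # cost if the regulation is exact: 1.0
print(expected_cost)  # expected cost when correct only on average: 2.0
```

Even though the average contraction equals the intended one, the expected cost is twice the benefit, so the expected net benefit is negative.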
The reduction in gasoline tax revenues measures the excess burden (Harberger 1964). The source of the excess burden is the misallocation of resources due to the preexisting gasoline tax; the new restaurant tax magnifies the resource misallocation in the market for gasoline. In the same way, a new regulatory action that increases costs in the restaurant business magnifies the preexisting resource misallocation in the market for gasoline and generates an excess burden that could be measured by the reduction in gasoline tax revenues.

To illustrate the potential magnitude of the regulatory excess burden due to a preexisting tax, suppose a hypothetical regulatory action increases the cost of producing a restaurant dish by $2. As a result of the price increase, suppose that the typical consumer reduces his or her purchases from 10 to 9 dishes a month. Because the restaurant dish and the gasoline are complements (due to the need to drive to the restaurant), further suppose that the restaurant regulatory action causes him or her to spend $10 less on gasoline per month. If the market for restaurant food is competitive with constant unit costs of production, the standard measure of the opportunity cost of the regulatory action is $19 per month: $18 in compliance costs ($2 for each of the 9 dishes still consumed) plus a consumer surplus loss of $1 a month. Assuming that taxes account for 30 percent of the price of gasoline (which is about true in Pennsylvania, where in 2018 the State gasoline tax of $0.587 a gallon was added to the Federal tax of $0.184 a gallon), the reduction in gasoline tax revenues from this consumer—which measures the regulatory excess burden—is $3 a month. In this example, the total cost of the restaurant regulatory action is correctly measured to be $22 a month.
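The dollar figures in this hypothetical can be reproduced step by step; the sketch below uses only the assumed values from the example:

```python
# Reproducing the hypothetical restaurant/gasoline example from the text.
# All dollar figures are the text's illustrative assumptions, not estimates.

cost_increase = 2.0          # regulatory cost per dish ($)
dishes_before, dishes_after = 10, 9
gas_spending_drop = 10.0     # monthly drop in gasoline spending ($)
gas_tax_share = 0.30         # taxes as a share of the gasoline price

compliance_cost = cost_increase * dishes_after              # $18
# Consumer surplus lost on the forgone dish: a triangle of height $2, base 1.
surplus_loss = 0.5 * cost_increase * (dishes_before - dishes_after)  # $1
direct_cost = compliance_cost + surplus_loss                # $19

# Excess burden: forgone revenue from the preexisting gasoline tax.
excess_burden = gas_tax_share * gas_spending_drop           # $3

total_cost = direct_cost + excess_burden                    # $22
print(total_cost)
print(excess_burden / total_cost)   # ~0.136, "almost 14 percent"
```

Omitting the excess burden term reproduces the understatement discussed next: $19 instead of $22.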
Failing to include the excess burden omits $3 in costs, or almost 14 percent of the total costs.17 The share of the total costs accounted for by the excess burden depends on the strength of the demand complementarity and the size of the preexisting tax (Goulder and Williams 2003). If the good with a preexisting tax is a substitute for the good produced by the regulated industry, the excess burden is negative—that is, the excess burden of the preexisting tax is reduced.

Moving from the hypothetical example to a real-world regulatory action, the 2010 Affordable Care Act required chain restaurants to post the calorie content of menu items. Major cost elements in the RIA of this requirement included collecting and managing records of nutritional analysis, revising or replacing menus, and training employees (79 FR 67406). The FDA estimated that the annualized compliance costs are $84.5 million (in 2011 dollars, at a 7 percent discount rate). Based on an analysis that the labels will shift consumers toward healthier foods and reduce obesity, the FDA estimated that the annualized benefits are $595.5 million (in 2011 dollars). A more complete analysis of the calorie-posting rule would not exactly parallel the hypothetical example. Unlike the hypothetical example, the calorie-posting rule mainly creates fixed costs of compliance. However, if the fixed costs restrict entry and competition, the rule would still reduce consumption of restaurant food and of the complementary good, gasoline. Although the RIA’s estimated compliance costs did not include an estimate of the excess burden imposed in the market for gasoline, in this case correcting the omission is unlikely to change the conclusion that the benefits of the regulatory action exceeded the costs. A more complete analysis could also consider other preexisting distortions that affect the chain restaurant industry, such as agricultural subsidies and the joint employer standard (discussed below).
The potential complications illustrate a common challenge in RIAs—the need to include the most important distortions without making the analysis overly long and complex.

A cost-benefit analysis should account for changes in tax revenues when they measure the excess burdens that regulatory actions impose in the presence of preexisting distortions (Harberger 1964). The standard economic analysis of a tax increase measures the tax revenues generated and the excess burden imposed on the economy, known as the deadweight cost of taxation (Auerbach and Hines 2002). In a cost-benefit analysis of a tax increase, the change in revenues from that tax is merely a transfer payment that leaves social benefits unchanged; the tax revenues represent a monetary payment from one group (the consumers who pay the tax) to another group. But the point of the example given above was to evaluate the hypothetical regulatory action that imposed new costs on the restaurant industry and also shifted consumer demand for gasoline when there already was a preexisting gasoline tax. Because of the preexisting tax, consumers have already given up the lower-value purchases of gasoline. Consumers’ marginal willingness to pay for gasoline exceeds—by the amount of the tax—the marginal opportunity costs of the factors of production used in the gasoline industry. The preexisting gasoline tax results in the misallocation of resources to the market for gasoline.

17. In practice, an RIA of the restaurant regulatory action might fail to account for the reduction from 10 to 9 dishes per month. The approximation that assumes no reduction would lead the RIA to overestimate the compliance costs at $20. The approximation in estimating compliance costs could offset part of the mistake of ignoring the $3 excess burden. In general, approximations and mistakes need not cancel each other out.
When the regulatory action increases the price of restaurant dishes and shifts the demand for the complementary good, gasoline, the resource misallocation due to the preexisting distortion is magnified. As a result, the regulatory action creates an excess burden, which is measured by the change in tax revenues. By the same reasoning, a cost-benefit analysis should account for changes in subsidy expenditures when they measure excess burdens created by regulatory actions. Again, the common case where subsidy expenditures are treated as transfer payments does not apply. For example, chapter 4 discusses the costs and benefits of setting the Affordable Care Act’s individual mandate penalty to zero. The CBO (2017) projected that setting the penalty to zero will reduce federal expenditures on ACA subsidies by $185 billion over 10 years. The ACA premium subsidy is properly treated as a transfer when the task is evaluating the effects of the subsidy. But the analytical task in chapter 4 is to evaluate removing the mandate penalty, not to evaluate changing the ACA subsidy rules. The reduction in subsidy expenditures measures the benefits of setting the penalty to zero. Parallel to the analysis of a preexisting tax, the preexisting ACA subsidy results in the misallocation of resources, and the mandate penalty magnifies the resource misallocation. A consumer who voluntarily gives up his or her subsidy when the mandate penalty is removed is not, by comparison with his or her situation with the penalty in place, harmed because the Treasury no longer provides a subsidy. Instead, the consumer has received a benefit by no longer being constrained by the tax penalty, and at the same time taxpayers benefit by no longer having to finance the ACA subsidy. 
As in the case for taxation, whether the regulatory excess burden is positive or negative depends upon whether the goods are substitutes or complements, as well as on whether the regulatory action decreases or increases subsidy expenditures.18

In practice, taking into account all the adverse economic consequences of a regulatory action might seem a daunting task. To estimate the costs of the Clean Air Act and the Clean Water Act, Hazilla and Kopp (1990) constructed an econometric general equilibrium model that included 36 producing sectors on the supply side and a complete model of consumer behavior on the demand side. If the general equilibrium approach is taken, it is important that the models include the preexisting taxes and subsidies that drive the excess burdens of regulation. Murray, Keeler, and Thurman (2005) evaluated a possible rule of thumb that, to capture excess burdens, the direct costs of environmental regulatory actions should be adjusted upward by 25 to 35 percent. Their analysis showed that the rule of thumb is not necessarily a good approximation and concluded that whenever possible, estimates of regulatory costs should be based on the specific nature of the regulatory actions and likely interactions between the tax and regulatory systems. In many circumstances, instead of a rule of thumb, an implementable formula provides a good approximation of the excess burden that a tax or regulatory action imposes in the labor market (Goulder and Williams 2003). The formula captures general equilibrium interactions that are often left out.

18. Self-paid treatment would also be provided in the absence of insurance enrollment and would, in the absence of behavioral considerations, be reflected in the height of the health insurance demand curve. The shapes of both the demand and supply curves would determine the discrepancy between surplus changes and Federal budget effects.
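To give a sense of what such a formula looks like, the sketch below uses a generic first-order tax-interaction adjustment, in which the direct regulatory cost is marked up by the preexisting labor tax wedge times a labor supply elasticity. The functional form and all parameter values are illustrative assumptions, not the exact Goulder and Williams (2003) formula:

```python
# Stylized tax-interaction sketch (illustrative only; not the exact
# Goulder-Williams formula). A regulatory cost that lowers the real wage
# interacts with a preexisting labor tax: each dollar of forgone labor
# income costs roughly t/(1-t) in lost tax revenue at the margin.

labor_tax_rate = 0.40          # assumed preexisting labor tax wedge
labor_supply_elasticity = 0.3  # assumed compensated labor supply elasticity
direct_cost = 100.0            # direct regulatory cost ($ millions, hypothetical)

# First-order markup on direct costs from the labor-market interaction.
markup = (labor_tax_rate / (1 - labor_tax_rate)) * labor_supply_elasticity
adjusted_cost = direct_cost * (1 + markup)

print(round(markup, 3))          # 0.2: a 20 percent upward adjustment
print(round(adjusted_cost, 2))   # 120.0
```

With these assumed parameters the adjustment lands near the 25 to 35 percent rule-of-thumb range discussed above, but, as the text notes, the right adjustment depends on the specific regulation and tax system.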
The use of this approximation—and, when needed, extending it to include other important sources of excess burdens—allows reviews of new regulatory and deregulatory actions to be based on more complete estimates of total regulatory costs.

The Burdens of Nudge Regulatory Actions

Regulatory reviews should take a cautious approach to so-called nudge regulatory actions. The relatively new field of behavioral welfare economics suggests that policy nudges can help people make better decisions (Chetty 2015). The typical definition of a policy nudge is that it changes behavior even though it is easy to avoid and has a low cost (Thaler and Sunstein 2008). For example, employers can nudge their workers to save more for retirement by making enrollment in a 401(k) retirement plan the default option (Madrian and Shea 2001). Because it was easy for the workers to opt out of the 401(k) plan, changing the default option fit the definition of a nudge. Advocates argue that nudges help consumers make choices—in this case, saving more for retirement—that are in the best interests of the consumers themselves.

However, behavioral welfare economics poses a number of challenges for regulatory reviews. Behavioral economics arguments might tend to exacerbate the tilt in the regulatory process toward the benefits of expanding the regulatory state. In addition, although some nudge regulatory actions may yield important benefits, they also may involve easy-to-overlook opportunity costs. The basic challenge is whether “individual failures” should be added to the standard list of market failures as potential justifications for new regulatory actions. The logic in favor of adding them is the argument that policy nudges help people avoid making predictable mistakes—decisions that the individuals themselves would agree are not in their own best interest.
The mistakes can be called “internalities”; individuals impose costs on themselves that they fail to consider when making decisions. The main guidance document for regulatory review, “Circular A-4” (OMB 2003), does not discuss individual failures or internalities. OMB’s (2003) guidance emphasizes that when possible, benefits should be estimated based on consumers’ revealed preferences. In contrast, behavioral welfare economics emphasizes that because consumers make systematic mistakes, their revealed preferences are not a reliable guide for estimating benefits. For example, if consumers mistakenly fail to take into account future savings from more energy-efficient products, their revealed preference for inefficient products should not be used to measure the benefits of regulatory actions to promote energy efficiency. OMB’s guidance and behavioral economics thus place different emphases on the role of revealed preferences in benefit estimation. However, OMB’s guidance does not explicitly exclude methods of behavioral economics; nor does it exclude the argument that individual failures might provide the rationale for new regulatory actions. Executive Order 13707—issued September 15, 2015—encourages Federal agencies to apply insights from behavioral economics and, following Britain’s example, a “nudge unit” (officially, the Social and Behavioral Sciences Team) was established to explore policy options.

Increasingly, in practice RIAs discuss individual failures as providing a rationale for regulatory action. In the past, Federal agencies have claimed that regulatory actions were needed because consumers and businesses failed to take into account the future savings from buying more energy- and fuel-efficient products (Gayer and Viscusi 2013). The arguments in the regulatory analyses echo long-standing claims about energy conservation policies (Allcott and Greenstone 2012).
Much of the evidence for the claims came from engineering estimates of energy conservation cost curves. The engineering studies often concluded that energy can be conserved at a negative net cost—that is, that investing in energy conservation more than pays for itself. The apparently unexploited gains from investing in conservation might be viewed as evidence that many consumers and businesses make mistakes about energy conservation. However, engineering estimates typically omit opportunity costs and may fail to properly account for physical costs and risks. The shortcomings of engineering studies make the estimates “difficult to take at face value” (Allcott and Greenstone 2012, 5).

The opportunity costs of investing in energy conservation can take many forms. Allcott and Taubinsky (2015) conducted two randomized experiments to estimate the effect of providing consumers with more information about the energy efficiency of lightbulbs. In both experiments, even after efforts to inform consumers and call attention to the energy savings, large shares of consumers continued to purchase incandescent lightbulbs rather than compact fluorescents. The experimental results suggest that a regulatory action that bans incandescent lightbulbs creates significant opportunity costs for those consumers who simply prefer the lighting provided by incandescents. In principle, the benefits (or costs) of a ban on incandescent lightbulbs could be estimated in two steps: First, complete an engineering estimate of the value of the energy savings; and second, adjust the engineering estimate downward to account for lost consumer surplus. An analogous approach has been used to estimate the value of reducing consumption of a good that harms health (Ashley, Nardinelli, and Lavaty 2015). The practical difficulty of implementing this approach has been called “a tall order” (Levy, Norton, and Smith 2018, 26).
In another important example of regulatory policy to conserve energy, the National Highway Traffic Safety Administration (NHTSA) and the EPA set Corporate Average Fuel Economy (CAFE) standards for passenger cars and light trucks. The rule, which was finalized in 2012, increased the stringency of the fuel economy standards, requiring manufacturers to achieve an estimated fleet-wide standard of 40.3 miles per gallon for the 2021 model year. The standard would have increased to 48.7 miles per gallon for the 2025 model year, if NHTSA had the statutory authority to set standards that far into the future in a single rulemaking. The 2012 NHTSA regulatory impact analysis concluded that the benefits of the standards substantially exceeded the regulatory costs. In the analysis, future fuel savings for consumers accounted for 77 percent of the estimated benefits (Gayer and Viscusi 2013). In fact, the analysis estimated that the fuel savings for consumers would exceed the additional costs they would incur in the form of higher-priced vehicles.

In contrast, holding everything else constant, the regulatory actions cannot make rational consumers better off and might make them worse off.19 Some rational consumers might make the same fuel economy choices that NHTSA’s analysis estimated were “right,” in which case the regulatory action would not change their behavior and thus would not create any benefits for them. Some rational consumers might instead decide that other car features are more desirable than future fuel economy, in which case the regulatory action makes them worse off. For example, under the standards, consumers might not be able to purchase cars they prefer with more powerful but less fuel-efficient engines. If the results of the 2012 analysis are accurate, one must believe that consumers who make such choices are not acting in their own self-interest.
The standards also created environmental benefits, which played a “largely incidental role” in the cost-benefit analysis (Gayer and Viscusi 2013, 19). If the analysis were corrected so that consumers behaved self-interestedly, the estimated costs of the standards would have been greater than the estimated benefits (Gayer and Viscusi 2013; Allcott and Knittel 2019).

19. The regulatory actions reduce choices, and in general more choices are better than fewer choices. More technically, the fuel economy regulatory actions impose additional constraints on the consumer’s optimization problem. The solution to a more constrained optimization problem cannot lead to an outcome that is preferred over the solution to a less constrained optimization problem. The regulatory actions might mean that everything else is not constant. For example, if there are economies of scale in producing more fuel-efficient cars, the CAFE regulatory actions could decrease the average cost. The cost reduction would benefit consumers who prefer more fuel efficiency. However, if there are also economies of scale in producing less fuel-efficient cars, there would be an offsetting cost increase for consumers who prefer other attributes, such as more powerful engines. Of course, all consumers can also be made better off by the reduction in externalities. The RIA measured those benefits separately. The question of consumer rationality is whether there are net private benefits for consumers from future fuel savings.
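The logic in footnote 19, that adding a constraint cannot raise a rational consumer's maximized payoff, can be shown with a toy choice set (the car options and utility values are hypothetical):

```python
# Toy illustration of footnote 19: restricting a rational consumer's
# choice set can never raise, and may lower, the maximized payoff.
# The options and utility values are hypothetical.

utilities = {
    "powerful_engine": 10.0,   # this consumer's favorite: power over economy
    "fuel_efficient": 8.0,
    "basic": 5.0,
}

unconstrained_best = max(utilities.values())

# A standard that removes the less fuel-efficient option from the market.
constrained = {k: v for k, v in utilities.items() if k != "powerful_engine"}
constrained_best = max(constrained.values())

print(unconstrained_best)  # 10.0
print(constrained_best)    # 8.0: the constrained optimum is weakly lower
```

A consumer who would have chosen the fuel-efficient car anyway is unaffected; the one shown here is strictly worse off, exactly the two cases distinguished in the text.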
Recently, a 2018 NHTSA and EPA preliminary regulatory impact analysis of the proposed Safer Affordable Fuel Efficient (SAFE) Vehicles Rule concluded that a deregulatory action—in the form of retaining the 2020 standards through model year 2026—would reduce regulatory costs by between $335 billion (in 2016 dollars; 3 percent discount rate) and $502 billion (in 2016 dollars; 7 percent discount rate) over the lifetime of the vehicles (NHTSA and EPA 2018). The regulatory analysis is complex and runs over 1,600 pages. It considers eight regulatory alternatives and multiple conceptual and empirical modeling issues. Our discussion focuses on its treatment of the question of whether consumers undervalue fuel economy when making car purchases. New empirical evidence suggests that buyers undervalue fuel economy only slightly, if at all (Busse, Knittel, and Zettelmeyer 2013; Allcott and Wozny 2014; Sallee, West, and Fan 2016). The studies analyze data on the sales of different models of cars to identify the impact of higher fuel economy on the selling price. In addition, the studies use rich data to control for the influence of other attributes—for example, more engine power—that also influence the selling price. Holding these other factors constant, the studies find that consumers are willing to pay higher prices for more efficient cars that reduce their future fuel costs. The studies compare the estimated willingness to pay for higher fuel economy with estimates of the expected fuel savings. The estimated fuel savings depend not only on the car’s fuel economy but also on future gasoline prices and the extent to which future savings are discounted. Depending on different assumptions about future fuel prices and discount rates, the studies estimate that when purchasing cars, consumers incorporate from 55 percent to over 100 percent of future fuel costs. 
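The comparison these studies make, willingness to pay versus discounted future fuel savings, amounts to a present-value calculation. The sketch below is illustrative; the mileage, price, discount rate, and willingness-to-pay figures are hypothetical, not taken from the cited studies:

```python
# Sketch of the valuation test in the cited studies: compare willingness
# to pay for better fuel economy against the discounted stream of fuel
# savings. All parameter values are hypothetical, not from the studies.

def pv_fuel_savings(mpg_low, mpg_high, miles_per_year, gas_price, rate, years):
    """Present value of fuel savings from driving the higher-mpg car."""
    annual_saving = (miles_per_year / mpg_low - miles_per_year / mpg_high) * gas_price
    return sum(annual_saving / (1 + rate) ** t for t in range(1, years + 1))

savings = pv_fuel_savings(mpg_low=25, mpg_high=30, miles_per_year=12000,
                          gas_price=3.00, rate=0.05, years=10)

willingness_to_pay = 1500.0  # hypothetical estimate from sales-price data

# The studies' "valuation share": WTP as a fraction of discounted savings.
valuation_share = willingness_to_pay / savings
print(round(savings, 2))          # 1853.22
print(round(valuation_share, 2))  # 0.81
```

A share near 1 means consumers fully incorporate future fuel costs; the studies' estimates of 55 percent to over 100 percent correspond to different assumptions about future fuel prices and the discount rate.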
Although the precise degree of undervaluation (if any) is difficult to know, the empirical evidence is inconsistent with the 2012 cost-benefit analysis’s implication that most consumers mistakenly ignore fuel economy.

When a regulatory analysis argues from behavioral economics that a regulatory action corrects individual failures, the RIA should apply the same evidence standards used when evaluating standard market failures. As mentioned above, OMB’s (2003) guidance tells Federal agencies to determine that the market failure is significant, and that they should describe the failure both qualitatively and, when feasible, quantitatively. The discussion in the 2018 preliminary regulatory impact analysis of whether consumers undervalue fuel economy is a good example of an evidence-based and quantified description; the analysis suggests that the individual failure of undervaluation is probably not significant. In other cases, behavioral economics research on individual failures might fail to meet the standard of providing strong evidence for quantification. To a large extent, empirical evidence on individual failures comes from experiments in economic laboratories. Although carefully designed and controlled experiments provide tight tests of specific behavioral hypotheses, it is problematic to extrapolate experimental results to predict how people make real-world decisions in markets.

Even with empirical support that a nudge is needed, measuring the costs of a regulatory nudge is difficult. This difficulty arises in part from the issue of how to precisely define what constitutes a nudge. The criteria that a nudge is easy to avoid and has a low cost are not precisely quantified (Thaler and Sunstein 2009). Some policies that correct supposed consumer mistakes are not nudges.
For example, fuel economy standards are not a nudge; the standards are not easily avoided and impose opportunity costs because they limit the availability of cars with desirable features. In contrast, the Motor Vehicle Fuel Economy Label rule is a nudge designed to correct the same consumer mistakes. If this nudge worked, fuel economy standards would be unnecessary (Gayer and Viscusi 2013). Glaeser (2006) points out that other common nudge policies essentially create a psychic tax—even though the nudges do not require explicit payments, consumers bear a real cost. Cost-benefit analyses should account for the fact that stigmatizing behavior imposes real costs, regardless of whether the behavior is in the consumers’ own best interest. More research is needed to develop empirical estimates of the costs of stigmatization and the willingness to pay to avoid it. Promising approaches include revealed and stated preference methods that have been developed to estimate the willingness to pay for other commodities that are not directly traded in markets (OMB 2003).

Expanding Use of Regulatory Impact Analysis

Another priority to strengthen the regulatory review process is to expand the number of complete and quantified regulatory cost-benefit analyses. Because the time, personnel, and resources available for regulatory reviews are limited, Federal agencies are only required to conduct cost-benefit analyses of significant regulatory actions. As a result, from 2000 through 2018, about 70,000 final rules were published in the Federal Register, and fewer than 6,000 of these rules were deemed significant under Executive Order 12866. Because the unreviewed rules were anticipated to not have economic effects greater than $100 million annually or other significant adverse effects, in principle they might account for a small share of total regulatory costs. However, given the volume of unreviewed rules, the uncounted regulatory costs might add up to a significant share.
OMB should continue to carefully review agencies’ analyses of whether the regulatory action is significant in the first place. For a large fraction of significant rules discussed in OMB’s Reports to Congress, the agencies were not able to completely quantify the benefits and/or costs. Furchtgott-Roth (2018) examines a number of important Federal labor market regulations, including the joint employer standard case study at the end of this chapter, that were not evaluated with cost-benefit analyses when they were issued.20 Unlike Executive Order 12291, issued in 1981, which explicitly required an analysis of whether the potential benefits exceeded the potential costs, the current regulatory review Executive Order 12866, issued in 1993, requires only that the potential benefits “justify” the potential costs. Although at other points this Executive Order still refers to maximizing net benefits, the wording might leave the door partly open for an unquantified cost-benefit analysis. In many cases, regulatory analyses have been incomplete (Hahn and Tetlock 2008). Studies of the U.S. regulatory review process have found that over the past 30 years, in only about one-third to one-half of the cases was the regulatory analysis able to conclude that the benefits exceeded the costs (Hahn and Dudley 2007). In most of these cases, the original analysis was simply unable to quantify the benefits and/or the costs. After reviewing OMB’s Reports on the Benefits and Costs of Federal Regulations across different administrations, Fraas and Morgenstern (2014) concluded that the Obama Administration placed more emphasis on difficult-to-measure benefits such as the value of dignity and equity. Sunstein (2018) argues that as a general principle, regulatory cost-benefit analyses should try to measure the willingness to pay to honor moral commitments.
Even when it is difficult to place a dollar value on a regulatory action’s benefits, quantifying its costs makes the tradeoffs involved more transparent.

Improving cost-benefit analyses of a set of regulatory actions known as “budgetary transfer rules” is another priority. Budgetary transfer rules involve changes in receipts or outlays, such as Medicare funding. An important principle of cost-benefit analysis is that lump-sum transfers that do not change economic behavior but simply transfer income from group A to group B do not yield net benefits or net costs. The benefits for group B are exactly offset by the costs imposed on group A. However, budgetary transfer rules are not lump-sum transfers and thus cause people to change their behavior. For example, a regulatory action that changes Medicare payments is not simply a transfer from taxpayers to healthcare providers. Taxpayers and healthcare providers will respond to the changed incentives created by the regulatory action. The transfer rule has a budgetary impact and also has effects on private sector behavior. As discussed above, a cost-benefit analysis should measure all the changes in consumer and producer surplus that result when regulatory actions change private sector behavior. In the past, most agencies typically reported only the estimated budgetary effects of the transfer rules and sometimes the direct compliance costs. Recognizing that “transfer rules may create social benefits or costs,” OMB encourages agencies to report them “and will consider incorporating any such estimates into future Reports” (OMB 2017b, 22). The framework we develop above provides guidance for more complete cost-benefit analyses of transfer rules. A complete cost-benefit analysis of transfer rules also requires consideration of preexisting distortions—namely, subsidies and taxes.

20. Some were issued by independent agencies, or were issued as informal guidance, or were considered economically insignificant.
By the nature of transfer rules, these actions often change behavior that is already affected by government subsidies. For example, a Medicare transfer rule might increase or decrease coverage for healthcare services. A transfer rule might also increase or decrease total Federal expenditures that need to be financed through taxes. In many cases, one component of the costs of a transfer rule will be the rule’s budgetary impact, rescaled by an estimate of the marginal deadweight cost of government revenue. Until 2018, the OIRA review process generally excluded two important sets of regulatory and deregulatory actions: tax regulatory actions taken by the Department of the Treasury, and regulatory actions taken by independent agencies. Just like the regulatory actions that are currently subject to cost-benefit analysis, these regulatory actions promoted important goals, but at an opportunity cost. A regulatory cost-benefit analysis is thus still needed to help strike the right balance. On April 11, 2018, the Department of the Treasury and OMB signed a memorandum of agreement that outlines a new process for OMB to review tax regulatory actions under Executive Order 12866 (White House 2018). This agreement reflected Treasury’s and OMB’s shared commitment to “reducing regulatory burdens and providing timely guidance to taxpayers,” particularly the guidance necessary to unleash the full benefits of the Tax Cuts and Jobs Act. Under the agreement, a tax regulatory action will be subject to OIRA review if it has an annual nonrevenue effect on the economy of $100 million or more. Many tax regulatory actions are focused on improving the collection of tax revenues, and there is a long-standing process for reviewing the revenue effects of the Department of the Treasury’s regulatory actions. However, like other agencies’ regulatory actions, some tax regulatory actions are designed to change incentives so as to promote social goals.
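The rescaling just described can be sketched in a few lines. This is an illustrative sketch, not a calculation from the Report: the $2 billion budgetary impact and the 25-cent marginal excess burden are purely hypothetical numbers chosen for illustration.

```python
# Illustrative sketch (not from the Report): one cost component of a budgetary
# transfer rule is its budgetary impact rescaled by the marginal deadweight
# cost of government revenue. Both inputs below are hypothetical.

def transfer_rule_deadweight_cost(budget_impact: float, marginal_excess_burden: float) -> float:
    """Deadweight-cost component: budgetary impact times the marginal
    deadweight cost per dollar of government revenue."""
    return budget_impact * marginal_excess_burden

# A rule that raises Federal outlays by $2 billion, with an assumed
# 25-cent deadweight cost per dollar of revenue:
print(transfer_rule_deadweight_cost(2.0e9, 0.25))  # 500000000.0
```

A fuller analysis would, as the text notes, also count the behavioral responses of the parties to the transfer, not just this fiscal component.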
For example, the Department of the Treasury issued tax regulatory actions that clarify which transactions would qualify for beneficial tax treatment for investments in Opportunity Zones, such as equity investments made in Qualified Opportunity Funds that invest in the Opportunity Zones. The proposed rule is expected to qualify as a deregulatory action because it will reduce taxpayers’ planning costs. By reducing taxpayers’ uncertainty, the rule should promote the goal of encouraging investments to flow into Qualified Opportunity Funds. (The Opportunity Zone initiative is discussed in more detail in chapter 3.) Regulatory and deregulatory actions issued by independent agencies continue not to be subject to the OMB regulatory review process. The economic framework we develop above is broad enough to encompass independent agencies’ actions. The principles of regulatory cost-benefit analysis apply equally well to these actions, although of course they will need to be applied to the specific contexts of the independent agencies. Several independent agencies have created groups to conduct economic analyses internally. In 2009, the Securities and Exchange Commission created the Division of Economic and Risk Analysis. More recently, the Consumer Financial Protection Bureau has established its own Office of Cost-Benefit Analysis, and the Federal Communications Commission is in the process of establishing an Office of Economics and Analytics. There remains an unmet need for cost-benefit analyses of the regulatory actions taken by the independent agencies. Coglianese (2018) discusses three proposed policy options for improving independent agencies’ regulatory analyses: through the courts, through the OMB process, or through a required analysis undertaken outside OMB.
Case Studies of Deregulatory Actions and Their Benefits and Costs

This section presents three case studies of deregulatory actions and their benefits and costs. The first case study describes association health plans. The second examines short-term, limited-duration insurance plans. And the third discusses the specification of the joint employer standard.

Case Study 1: Association Health Plans

A major theme of this chapter is that the burdens of regulatory actions accumulate, which means that the cumulative costs of a set of actions will be larger than the sum of the costs of each regulatory action analyzed one by one. Case studies 1 and 2 illustrate the process in reverse: the cost savings from deregulatory actions also accumulate. The CEA’s (2019) analysis used CBO projections and other evidence to conduct prospective cost-benefit analyses of two deregulatory actions taken in 2018 that expanded consumer health coverage options: the association health plan (AHP) rule; and the short-term, limited-duration insurance (STLDI) rule. These deregulatory reforms restore and expand options in health insurance markets within the existing statutory frameworks, including the Affordable Care Act (ACA). We discuss the benefits and costs of each action separately, but the analysis accounts for the cumulative nature of the deregulatory actions. Specifically, the CBO (2018) projected the combined impact of the AHP and STLDI rules. The CBO’s projections also incorporated the fact that the Tax Cuts and Jobs Act of 2017 had already set to zero the individual mandate penalty owed by consumers who do not have Federally approved coverage or an exemption. (Chapter 4 provides a more detailed analysis of the individual mandate penalty.) Taking into account the zeroed-out mandate penalty, the CBO (2018) projected that by 2023, the AHP and STLDI rules will lead to 4 million more AHP enrollees and 2 million more STLDI enrollees.
Before 2018, under Title I of the Employee Retirement Income Security Act (known as ERISA), the Department of Labor had adopted criteria in subregulatory guidance that restricted the establishment and maintenance of AHPs. On June 21, 2018, the Department of Labor issued the AHP deregulatory action, which established an alternative pathway to form AHPs that modified some of those criteria. The AHP rule is an example of how deregulation does not always involve the elimination of an existing rule; it can instead involve revising subregulatory guidance through notice-and-comment rulemaking. The AHP rule’s removal of regulatory burden expands the ability of small businesses and working owners without other employees to join AHPs. AHPs allow small businesses and certain working owners to group together to self-insure or to purchase large group insurance, and thus to offer their workers more affordable and potentially more attractive health coverage. Summing up over the groups of consumers whose health coverage options are expanded by the AHP rule, the CEA (2019) estimated that in 2021, after consumers and markets have had time to adjust, removing the regulatory burden will yield net social benefits worth $7.4 billion. In addition, these savings are estimated to reduce regulatory excess burdens by $3.7 billion. Many uninsured Americans today work for small businesses. The ACA subjected health insurance coverage for small businesses to mandated coverage of essential health benefits and to price controls (in the form of restrictions on how premiums are set) that are not required for large businesses. Under the ACA, AHP coverage provided to employees through an association of small businesses and certain working owners is regulated the same way as coverage sold to larger businesses. Interpreting ERISA, the AHP rule provides a new pathway to form AHPs that modifies the earlier subregulatory restrictions.
New AHPs will be able to form by industry or geographic area (e.g., for metropolitan areas and States).21 Fully insured AHPs could be established beginning on September 1, 2018, while self-funded AHPs needed to wait until early 2019. Two studies provide estimates of the effects of the AHP rule on insurance coverage and ACA premiums. The CBO (2018) projects that after the rule is fully phased in, it will expand AHP enrollment by about 4 million people. The CBO also projects that consumers who switch to AHP coverage will be healthier than average enrollees in small group or individual plans. Based on the CBO’s projections, the CEA (2019) estimated that the AHP rule will cause gross (of subsidy) premiums in the nongroup market to increase by slightly more than 1 percent. Another study estimated that the proposed rule on AHPs will cause 3.2 million enrollees to leave the individual and small group markets and enter AHPs by 2022 (Avalere 2018). The AHP rule will allow small businesses to offer their workers more affordable health coverage by reducing the administrative cost of coverage through greater economies of scale.

21 The AHP rule expands organizations’ ability to offer AHPs on the basis of common geography or industry. For example, existing organizations such as local chambers of commerce could offer potentially large AHPs. According to the Association of Chamber of Commerce Executives, local chambers of commerce range in size from a few dozen firms to more than 20,000 firms. Depending on the number of workers per chamber member, the potential group size of chamber of commerce-based AHPs ranges from the hundreds to the tens of thousands.
The share of the premium accounted for by administrative costs falls with insurance group size: the share is 42 percent for firms with 50 or fewer employees, compared with 17 percent for firms with 101 to 500 employees and 4 percent for firms with more than 10,000 employees (Karaca-Mandic, Abraham, and Phelps 2011). The AHP rule allows the average group size to expand, which reduces the average cost of AHP coverage—a significant advantage for many small and medium-sized businesses. The AHP rule also gives small businesses more flexibility to offer their workers health coverage that is more tailored to their needs. At this point, it is speculative whether AHPs will provide relatively comprehensive coverage or more tailored coverage. Providing more choices over tailored coverage options could have substantial value for consumers: an analysis of choices made in the employment-related group market found that offering more preferred plan choices was as valuable to the median consumer as a 13 percent premium reduction (Dafny, Ho, and Varela 2013). The CEA’s (2019) analysis did not include a separate estimate of the value of more tailored plan options. In some circumstances, there may be a trade-off between AHP group size and the extent of tailoring, because a more tailored plan might not be attractive to all potential AHP members. In this context, the estimate of the benefits of reduced administrative costs provides a lower bound for benefits: consumers who do not take advantage of the lower administrative costs of larger AHPs forgo them because they value tailored coverage more highly than the cost savings. The AHP rule affects four groups of people: consumers who move out of ACA-compliant individual coverage in the nongroup market into ACA-compliant group coverage through an AHP; consumers who move out of small-group coverage; consumers who would have AHP coverage with or without the rule; and consumers who would have been uninsured without the rule.
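To make the scale economics concrete, a small sketch follows. The administrative shares by firm size are those cited from Karaca-Mandic, Abraham, and Phelps (2011); the $6,000 annual premium is a hypothetical assumption, not a figure from the Report.

```python
# Illustrative sketch of the economies-of-scale mechanism: administrative
# shares of the premium by firm size (Karaca-Mandic, Abraham, and Phelps 2011)
# applied to a hypothetical $6,000 annual premium per enrollee.

ADMIN_SHARE = {
    "50 or fewer employees": 0.42,
    "101 to 500 employees":  0.17,
    "over 10,000 employees": 0.04,
}
premium = 6_000  # hypothetical annual premium per enrollee, in dollars

admin_cost = {size: share * premium for size, share in ADMIN_SHARE.items()}
saving = admin_cost["50 or fewer employees"] - admin_cost["101 to 500 employees"]
for size, cost in admin_cost.items():
    print(f"{size}: ${cost:,.0f} of the premium is administrative cost")
print(f"Joining a 101-to-500-life group saves ${saving:,.0f} per enrollee")
```

Under these assumptions, pooling a 50-life group into a mid-sized group cuts administrative cost per enrollee by roughly a quarter of the premium, which is the mechanism behind the per-enrollee savings estimated below.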
To estimate the effects of the AHP rule, the CEA (2019) used data from the CBO’s (2018) projections and estimates of administrative costs. The AHP rule’s addition of a new pathway to form AHPs, which modified the criteria for their creation, decreased costs and thus increased consumer surplus for AHP enrollees. The CEA’s (2019) estimates include the changes in consumer surplus and the reductions in the excess burden of regulatory costs. As discussed above, consumer surplus and excess regulatory burden are often omitted. The CEA’s (2019) analysis of the AHP rule provides a useful case study and guide for estimating these important aspects of regulatory costs. The first step is to estimate the benefits that flow from consumers moving out of ACA-compliant individual coverage in the nongroup market. Based on the differences in administrative costs between ACA-compliant coverage in the individual market and AHPs’ ACA-compliant coverage in the group market, the CEA estimated that each enrollee who shifts from ACA-compliant individual coverage to ACA-compliant group AHP coverage saves $619 in administrative costs and enjoys $309 in net surplus from the cost reduction. In addition, the CEA estimated that after accounting for the loss of cross-subsidies and their effects on ACA-compliant premiums and subsidies in the nongroup market, each enrollee who shifts from ACA-compliant individual coverage into ACA-compliant AHP group coverage reduces third-party expenditures by $1,933. Aggregated over the 1.1 million enrollees who shift, in 2021 these effects of the AHP regulatory reform yield benefits worth $2.5 billion. The second step is to estimate the benefits that flow from the roughly 2.5 million consumers who respond to the rule by moving out of small-group coverage into AHP coverage.
Because the AHP rule allows enrollees to switch to AHPs that are larger than their existing small-group plans, the CEA estimated that the rule will reduce insurance administrative costs by an average of $1,924 per enrollee, so each enrollee enjoys $962 of surplus from this cost reduction. The CEA assumed that the reduction in administrative costs also reduces Federal tax expenditures on health insurance by an average of $349 per enrollee. Aggregated over the 2.5 million enrollees who make this shift, these effects of the AHP rule yield benefits worth $3.3 billion. The third step is to estimate the benefits that the AHP rule generates for the consumers who would have AHP coverage with or without the rule. Because of the increase in average AHP group size, the CEA estimated that the rule reduces administrative costs by $335 per enrollee. The CEA assumed that the reduction in administrative costs also reduces Federal tax expenditures on health insurance by an average of $61 per enrollee. The aggregate benefits from this effect of the AHP rule are worth $1.7 billion. The fourth step is to estimate the benefits the AHP rule generates for consumers who would have been uninsured without the rule. The CBO (2018) projected that the AHP regulatory reform will reduce the number of uninsured consumers by 400,000. Because they are responding to a reduction in administrative costs that averages $619 per enrollee (as above), each newly insured AHP enrollee enjoys a consumer surplus of $309 from the purchase. The CEA (2019) also estimated that third-party costs of uncompensated care fall by $989 for each newly insured AHP enrollee. Offsetting these benefits, Federal tax expenditures on health insurance increase by an estimated $1,519 per newly insured AHP enrollee. The aggregated net costs of these effects of the AHP rule are $0.1 billion.
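As a check on the four steps above, the per-enrollee figures quoted in the text can be re-aggregated. This is a minimal sketch, not the CEA’s model: group sizes and dollar amounts are those reported in the text, and rounding makes the sum approximate.

```python
# A minimal check (not the CEA's model): re-aggregating the per-enrollee AHP
# figures quoted in the text into the reported 2021 total. Group sizes and
# dollar amounts come from the text; rounding makes the sum approximate.

groups = {
    # group: (enrollees, per-enrollee benefit in dollars)
    "leave ACA individual coverage": (1.1e6, 309 + 1_933),        # surplus + third-party savings
    "leave small-group coverage":    (2.5e6, 962 + 349),          # surplus + lower tax expenditures
    "newly insured":                 (0.4e6, 309 + 989 - 1_519),  # net of higher tax expenditures
}
total = sum(n * benefit for n, benefit in groups.values())
total += 1.7e9  # enrollees with AHP coverage either way ($335 + $61 each, reported in aggregate)
print(f"Estimated net social benefits: ${total / 1e9:.1f} billion")
```

The sum lands at roughly $7.4 billion, matching the total the CEA reports for 2021.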
Summing up over the four groups of consumers whose insurance options are expanded by the AHP rule, the CEA (2019) estimated that in 2021, the rule yields social benefits worth $7.4 billion. The estimate of social benefits takes into account both the benefits and the costs, including the possibility that the AHP rule imposes new costs on a subset of enrollees in the nongroup market who pay higher insurance premiums.

Case Study 2: Short-Term, Limited-Duration Insurance Plans

The second case study considers an August 2018 deregulatory action that expanded short-term, limited-duration insurance (STLDI) plans. The 2018 STLDI rule revised a rule issued by the previous Administration in 2016. From the enactment of the ACA until the 2016 rule, STLDI plans could have longer durations than the 2016 rule allowed. The 2016 rule expressed a concern that consumers were purchasing STLDI plans as their primary form of coverage in order to avoid ACA requirements; it therefore shortened the maximum total duration of STLDI plans from less than 12 months to less than 3 months (81 FR 75316). The 2018 STLDI rule removed the restrictions created by the 2016 rule, which allows consumers more flexibility to purchase short-term insurance. On August 3, 2018, the Department of the Treasury, the Department of Labor, and the Department of Health and Human Services published a final rule that extended the length of the initial STLDI contract term to less than 12 months and allowed the initial insurance contract to be renewed for up to 36 months, the same as the maximum coverage term required under COBRA continuation coverage (U.S. Congress 1985). Because the administrative costs and hassles of purchasing health insurance can now be spread over a longer period of coverage, the STLDI rule also has the effect of lowering the average costs consumers pay for insurance.
The CEA (2019) estimated that in 2021, the STLDI rule will yield benefits worth $7.3 billion. In addition, the cost savings were estimated to reduce excess burdens by $3.7 billion. Because STLDI plans are not considered individual health insurance coverage under the Health Insurance Portability and Accountability Act and the Public Health Service Act, STLDI coverage continues to be exempt from all ACA restrictions on insurance plan design and pricing. This allows STLDI plans to offer a form of alternative coverage to those who do not seek permanent individual health insurance coverage. The STLDI rule requires that STLDI policies provide a notice to consumers that these plans may differ from ACA-compliant plans and, among other differences, may have limits on preexisting conditions and on health benefits, and may have annual or lifetime limits.22 Insurers were allowed to begin issuing STLDI plans on October 2, 2018—60 days after publication of the final rule. Four studies provide estimates of the effects of the STLDI rule on insurance coverage and ACA premiums. The CBO projects that the STLDI regulatory reform will result in an additional 2 million consumers in STLDI plans by 2023 (CBO 2018). Based on the CBO’s projections, the CEA (2019) estimated that the STLDI rule will increase gross premiums by slightly more than 1 percent in the same time frame. The Centers for Medicare & Medicaid Services (CMS) project that by 2022, 1.9 million consumers will have STLDI policies and that, as a result, gross premiums for ACA coverage could increase by up to 6 percent (CMS 2018). A study published by the Urban Institute in 2018 predicts that the rule could increase STLDI enrollment by 4.2 million, but it does not provide an estimate of the impact on gross ACA premiums (Blumberg, Buettgens, and Wang 2018).

22 ACA-compliant coverage, including coverage offered on the exchange, continues to have no limits on preexisting health conditions.
A 2018 study published by the Commonwealth Fund estimates that the rule could increase STLDI enrollment by 5.2 million and could increase gross ACA premiums by 2.7 percent (Rao, Nowak, and Eibner 2018). Under both the 2016 and 2018 rules, STLDI plans are exempt from ACA requirements, including the mandated coverage of the 10 essential health benefits (CCIIO 2011). The 2016 STLDI rule limited the duration of an STLDI contract to less than 3 months. The 2016 rule’s restrictions on the duration of an STLDI contract exposed potential STLDI enrollees to the risk of losing their STLDI coverage at the end of three months or, if they could obtain a new STLDI policy, having their deductibles reset, among other things. The CEA (2019) therefore modeled both the renewability restriction and the limited terms as an addition to the administrative costs and hassle of STLDI plans associated with applying for coverage every 3 months rather than every 36 months, costs hereafter referred to as “loads.”23 Assuming no tax penalty on the uninsured, the CEA compared high-loaded STLDI plans (the 2016 rule) with low-loaded STLDI plans (the new rule), and took the difference to be the impact of the new rule. Allowing for STLDI plans under the 2016 rule makes the CEA’s analysis different from some others (e.g., Blumberg, Buettgens, and Wang 2018) that assume that no STLDI plan is available under the 2016 rule, and it fundamentally changes some of the results. According to the CEA’s approach, even under the 2016 rule, there would be little reason for consumers paying premiums far in excess of their expected claims to continue with ACA-compliant individual coverage, because at least they have the expensive but not impossible option of reapplying for STLDI coverage every three months.
The marginal STLDI enrollees must instead be those who receive either an exchange subsidy or a cross-subsidy from other members of the ACA-compliant individual market risk pool.24 The CEA’s approach also does not permit adding an additional benefit to STLDI enrollees from relief from the essential health benefits mandate, because they already had that relief under the previous rule, albeit with higher loads. Lower premiums result from smaller loads because premiums finance both claims and loads. But with the exemption from ACA regulations, STLDI plans also have more freedom to control moral hazard and to dispense entirely with loads associated with unwanted services by excluding those services from the plan. These are some of the reasons why premiums for STLDI coverage are often lower than premiums for ACA-compliant individual market insurance plans (CMS 2018; Pollitz et al. 2018). Many health insurance simulation models treat consumer choice as a negative- or zero-sum game: a person who reduces his or her net premium spending by $1,000 by forgoing unneeded coverage merely increases by $1,000 the premiums that must be collected from those who retain that coverage.

23 The CEA notes that, under the 2016 rule, a consumer having difficulty continuing STLDI coverage could turn to ACA-compliant plans, which in a sense is a choice with extra loading to the extent that the applicable regulations deviate from the consumer’s preferences.

24 It is possible that the 2017 ACA-compliant risk pool included a number of consumers with a low ratio of expected claims to net premiums, but this Report is looking at plan years 2019 through 2028, when the individual mandate penalty is zero and market participants have had time to adjust to the reality of high premiums for ACA-compliant plans.
This assumption is unrealistic because of moral hazard, administrative costs, and the fact that the exchanges cap and means-test premiums. For example, this person’s gross premium for the forgone coverage may have been $1,500 (he receives premium subsidies on the exchange), of which $300 goes to administrative costs and the remaining $1,200 goes to the person’s own claims, which were of little value but are incurred as long as he or she is required to hold the coverage. This person’s enhanced choice saves taxpayers $500 and imposes no cost on the risk pool. As the CEA’s 2019 report demonstrates, incorporating a broader and more realistic range of insurance market frictions, and thereby reaching more reliable conclusions, is possible without unduly complicating the analysis. The STLDI rule affects three groups of consumers: consumers who move out of ACA-compliant individual coverage and into STLDI coverage; consumers who would have chosen STLDI coverage with or without the rule; and consumers who would have been uninsured without the rule. To estimate the effects of the STLDI rule, the CEA (2019) used data from the CBO’s (2018) projections, estimates of the elasticity of demand for health insurance, and estimates of the administrative and time costs of STLDI coverage. Before the 2018 rule, the 2016 rule’s restrictions on STLDI coverage increased costs and thus reduced consumer surplus for STLDI enrollees. The CEA’s (2019) estimates include the changes in consumer surplus and the reductions in the excess burden of regulatory costs. As discussed above, consumer surplus and excess regulatory burden are often omitted. The CEA’s (2019) analysis of the STLDI deregulatory action provides a useful case study and guide for estimating these important aspects of regulatory costs. The first step is to quantify the benefits that the STLDI rule generates for consumers who move out of ACA-compliant coverage into STLDI coverage. The CBO projects that the rule will result in 2 million new enrollees in STLDI plans.
The CEA (2019) estimated that over 1 million of these are consumers who shift from ACA-compliant individual coverage to STLDI coverage. The CMS projects that the average STLDI premium in 2021 will be $4,200. Assuming that the elasticity of demand for STLDI coverage is –2.9, the CEA estimated that by removing the combined effects of the limits on renewability, the limited term, and the administrative costs and hassles, the STLDI rule reduces the load by $1,218. On average, each enrollee who switches from ACA-compliant individual coverage to STLDI coverage thus enjoys a consumer surplus of $609. (The average net surplus equals one-half of the total cost saving of $1,218.) After accounting for the loss of cross-subsidies, we estimate that each enrollee who shifts from ACA-compliant individual coverage to STLDI coverage reduces third-party expenditures by $3,459. Aggregated over the 1.3 million enrollees who shift, in 2021 these benefits of the STLDI rule are worth $5.3 billion. The effects of the STLDI rule depend on how many consumers shift from ACA-compliant individual coverage to STLDI coverage, and, of those, how many received ACA premium subsidies. The CEA (2019) used the CBO’s (2018) projection that over 1 million consumers will switch from ACA-compliant individual coverage. The economic analysis in the STLDI rule assumes that in 2021, 600,000 enrollees will switch from ACA exchange plans to STLDI coverage, and another 800,000 will switch from off-exchange plans. In terms of how many switchers received ACA premium subsidies, we assume that the STLDI switchers will on average be similar to the enrollees projected to respond when the tax penalty is set to zero (CBO 2018). This assumption is uncertain. The CMS (2018) projects that mostly unsubsidized enrollees will switch to STLDI coverage.
Similarly, the economic analysis in the STLDI Final Rule anticipates that most consumers who switch to STLDI coverage will have incomes that make them ineligible for ACA premium subsidies. The CEA (2019) conducted a sensitivity analysis that estimated the benefits of the STLDI rule under different assumptions about the number of consumers who switch from ACA-compliant individual coverage and the number of unsubsidized switchers. The second step is to quantify the benefits that the STLDI rule generates for consumers who would have chosen STLDI coverage with or without the new rule. The CEA (2019) assumed that 750,000 consumers would have chosen STLDI coverage with or without the new rule. Each of these consumers gains the $1,218 in reduced load costs (as noted above). Aggregating over the 750,000 consumers, the STLDI rule yields an additional $0.9 billion in benefits. The third step is to quantify the benefits that the STLDI rule generates for the consumers who would have been uninsured without the rule. The CBO (2018) projects that the STLDI deregulatory reform will reduce the number of uninsured consumers by 0.7 million people, each of whom also enjoys a consumer surplus of $609 from the purchase (as noted above). The CEA (2019) also estimated that third-party costs of uncompensated care will fall by $989 for each newly insured STLDI enrollee. Aggregated over the 0.7 million newly insured, the benefits for previously uninsured consumers who move into STLDI plans add another $1.1 billion. Summing up over the three groups of consumers whose insurance options are expanded by the STLDI rule, the CEA (2019) estimated that in 2021, the rule yields benefits worth $7.3 billion. The estimate of social benefits takes into account both the benefits and the costs, including the possibility that the STLDI rule imposes new costs on a subset of enrollees in the nongroup market who pay higher insurance premiums.
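The three-group STLDI aggregation above can be reproduced with a short calculation. This is a minimal sketch, not the CEA’s model: group sizes and per-enrollee dollar amounts are those quoted in the text, and rounding makes the sum approximate.

```python
# A minimal check (not the CEA's model): re-aggregating the STLDI figures
# quoted in the text into the reported 2021 total. Group sizes and dollar
# amounts come from the text; rounding makes the sum approximate.

switchers     = 1.3e6 * (609 + 3_459)  # left ACA-compliant individual coverage
always_stldi  = 0.75e6 * 1_218         # would hold STLDI either way: full load saving
newly_insured = 0.7e6 * (609 + 989)    # surplus + lower uncompensated-care costs
total = switchers + always_stldi + newly_insured
print(f"Estimated benefits: ${total / 1e9:.1f} billion")
```

The sum lands at roughly $7.3 billion, matching the reported total; note that switchers count half the load saving as surplus, while inframarginal STLDI holders keep the full saving.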
Case Study 3: Specifying the Joint Employer Standard

During the Obama Administration, a new, expansive standard for determining joint employers dramatically changed the landscape of labor regulation for the employers of millions of American workers. This new standard especially burdened franchising, which is a large and rapidly growing part of retail, technology, and other sectors. This subsection explains why returning to the previous, narrower standard, as the Trump Administration is doing, enhances productivity, competition, and employment in labor markets, with a net annual benefit likely exceeding $5 billion. These results occur in large part because the expansive standard increased entry barriers into local labor markets and discouraged specialization along the supply chain. The working conditions of many of the Nation’s employees are “affected by two separate companies engaged in a business relationship.”25 A joint employer standard specifies when two or more companies are simultaneously the employer for legal purposes, and therefore jointly and severally liable “for unfair labor practices committed by the other.” The definition of a joint employer is pertinent to legal liability in Fair Labor Standards Act litigation, enforced by the Department of Labor, and to collective bargaining rules overseen by the National Labor Relations Board (NLRB). The NLRB established a common law standard by deciding various cases over the years, although that standard was volatile between 2014 and 2018.
In 2018, the NLRB, now including a board member appointed by President Trump, issued a Notice of Proposed Rulemaking proposing to return to the standard in place before 2015, under which, “to be deemed a joint employer under the proposed regulation, an employer must possess and actually exercise substantial direct and immediate control over the essential terms and conditions of employment of another employer’s employees in a manner that is not limited and routine.”26 In August 2015, a decision by the NLRB had established a more expansive standard that did not require the control to be direct or to be actually exercised. The NLRB’s shift to a more expansive standard became apparent to the business community no later than July 2014, when the NLRB Office of the General Counsel asserted that McDonald’s was a joint employer (NLRB 2014; Greenhouse 2014). The Department of Labor had also, in 2016, provided informal guidance specifying a more expansive standard, and then, during the Trump Administration, withdrew that guidance. Consider a few examples. Company ABC retains a temporary agency, TMP, for some clerical staffing that needs to be performed at ABC’s location. If TMP has no supervisor at ABC’s location and ABC is selecting and supervising the temporary employees, then under both standards ABC is a joint employer of those employees. Under the narrower standard, the determination would be the reverse if TMP were doing the supervision without detailed supervisory instructions from ABC.

25 The quotations in this section are from 83 FR 46681–82.

26 In addition to issuing the Notice of Proposed Rulemaking in 2018, the NLRB issued a December 2017 decision returning to the earlier (narrower) standard, although that decision was vacated in 2018 “for reasons unrelated to the substance of the joint-employer issue” (83 FR 46685).
Company FRA is a franchisee of company XYZ, which specifies the daily hours that FRA stores are open for business but does not involve itself with individual scheduling assignments. XYZ would not be a joint employer under the narrower standard but probably would be under the expansive standard.27 Under the expansive standard, the NLRB charged McDonald’s, the vast majority of whose restaurants are owned and operated by independent franchisees, as a joint employer for its franchisees’ actions (Elejalde-Ruiz 2016). The McDonald’s case was settled in 2018, with McDonald’s no longer designated as a joint employer (Luna 2018), although an administrative judge rejected the settlement, and the case may be headed back to the NLRB for approval. From an economic perspective, a joint employer determination prohibits the division of management responsibility that normally coincides with the assignment of management tasks along a supply chain. By restricting the allocation of responsibility along the supply chain, the determination makes the chain less productive and reduces the division of tasks (Becker and Murphy 1992). As an important example, franchisers may need to abandon their franchise status and either abandon the company assets or deploy them in a less productive corporate (nonfranchise) structure, in which all the workers in the chain are employed by the franchiser. Franchising, which is “a method of distributing products or services, in which a franchiser lends its trademark or trade name and a business system to a franchisee, which pays a royalty and often an initial fee for the right to conduct business under the franchiser’s name and system,” is by itself a ubiquitous business practice: about half of retail sales in the United States involve franchised operations (83 FR 46694; Norton 2004). About 9 million workers are employed by franchises (Elejalde-Ruiz 2016; Gitis 2017).
Temporary help services is another important business model affected by the joint employer standard, with 266,006 firms in 2012 obtaining about 2.5 million employees from 13,202 supplier firms.28 An expansive joint employer standard affects the competitiveness of the markets for labor as well as the productivity of the affected industries. The NLRB’s Office of the General Counsel, advocating the expansive standard, stated that such a standard is needed to give workers additional market power (Phillips 2014). To the extent that the expansive standard causes franchises to be absorbed by the franchiser, the monopsony market power of employers would be increased.29 Either of these adverse competition effects of the expansive standard is represented by areas D1 and E in figure 2-5 above, with the “regulated industry” understood to be franchised supply chains or the temporary help industry. To quantify the annual amounts contained in these areas, we first estimate that employment in these businesses is about 8 percent of total employment. Because their average pay is lower, we estimate that total wages for 2017 are about $386 billion for franchisees and temporary help services.30 The monopsony power created by the expansive standard reduces the wages paid by these businesses.
27. The NLRB’s expansive standard is more speculative to apply and derives from a single NLRB decision, Browning-Ferris; two dissenting NLRB members found it to be “impermissibly vague.” Franchisers can be joint employers under the narrower standard if, for example, they specify that franchisees provide specific fringe benefits to franchisee employees.
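As a rough cross-check, the $386 billion wage base can be approximated by combining the BLS mean wages the chapter cites with the approximate employment counts reported in this section. This is a sketch: the employment weights are assumptions, since the CEA's exact weighting is not stated.

```python
# Rough cross-check of the ~$386 billion 2017 wage base for franchise and
# temporary help workers. Mean wages are the BLS figures the chapter cites;
# the employment weights (~9 million franchise workers, ~2.5 million
# temporary help employees) are the approximate counts reported in the text
# and are an assumption here -- the CEA's exact weighting is not stated.

FRANCHISE_WORKERS = 9.0e6   # ~9 million (Elejalde-Ruiz 2016; Gitis 2017)
TEMP_HELP_WORKERS = 2.5e6   # ~2.5 million supplied employees (2012 figure)

RETAIL_MEAN_WAGE = 32_930   # BLS annual mean wage, retail trade proxy, May 2017
TEMP_MEAN_WAGE = 37_090     # BLS annual mean wage, temporary help, May 2017

def total_wage_base():
    """Aggregate annual wages for both groups, in dollars."""
    return (FRANCHISE_WORKERS * RETAIL_MEAN_WAGE
            + TEMP_HELP_WORKERS * TEMP_MEAN_WAGE)

print(round(total_wage_base() / 1e9))  # ~389, close to the ~$386 billion cited
```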
For each 1 percent that wages are reduced, the aggregate wedge between wages paid and marginal productivity is about $7.7 billion in 2017 and $11 billion a year on average for the years 2020–29.31 Krueger and Ashenfelter (2018) estimate that increasing the market power of employers by fully eliminating competition for labor among those franchisees associated with the same franchiser would create a labor wedge equal to about one-sixth of the inverse of the wage elasticity of industry labor supply. At an industry supply elasticity of 1 (or 4), this means that workers’ wages are depressed by 8 (or 2) percent, respectively. Assuming that some franchising would have continued even with the expansive standard, these are upper bounds, but they suffice to show that the 1 percent effect hypothesized above is plausible. The $11 billion per year is a transfer, but it also represents a reduction in the aggregate demand for labor. Given that labor is already significantly taxed, there is a social cost of reduced labor demand for the reasons discussed in connection with figure 2-6. Using the same 50 percent deadweight cost factor used for the health insurance case studies, this implies an annual net social loss of about $5.5 billion from the anticompetitive aspects of the expansive standard. A similar cost calculation would result if it were assumed instead that the expanded standard creates a similar-sized wedge in the labor market through additional unionization.
Any productivity effects need to be added to the anticompetitive aspects. The CEA has not yet been able to quantify these effects, aside from noting above the number of workers employed by franchisees and in the temporary help industry. Nevertheless, the productivity effects may be important because franchisers view the franchise system as essential for being innovative and adaptive to changing market conditions (Hendrikse and Jiang 2007). McDonald’s is a major franchiser, and its annual reports show how it has increased the number of franchisee stores while decreasing the number of company stores. Its goal is to have 95 percent of its stores be franchise stores.
Related subregulatory guidance issued in 2015 by the Department of Labor proposed a revised test for independent contractor status. Independent contractors account for about 7 percent of U.S. employment (BLS 2018a; Furchtgott-Roth 2018) and are important in the relatively new sharing economy. The 2015 test shares many of the same economic issues with the expansive joint employer standard: specialization, competition, innovation, and so on.
28. The number of firms is from 83 FR 46694; the number of employees is from FRED (2018). Temporary help employment has now exceeded 3 million.
29. Consolidation on the worker side of the market (that is, unionization) may offset the wage effect of consolidation on the employer side. However, the two are reinforcing in terms of the amount of labor, and therefore inefficiency, because each side with market power tends to reduce quantities (demanded or supplied, as applicable) in order to squeeze the other side of the market (Williamson 1968; Farrell and Shapiro 1990; Whinston 2006).
30. The Bureau of Labor Statistics finds that the annual mean wage of temporary help service workers was $37,090 in May 2017. We proxy franchisee workers’ wages using the annual mean wage of workers in retail trade, i.e., $32,930. We then create an annual mean wage of both groups by taking a weighted average of the two groups’ wages.
31. This assumes wage elasticities of labor supply and demand that are approximately equal in magnitude, so that a 1 percent movement down the labor supply curve is associated with about the same movement up the labor demand curve. To convert a 2017 amount to an average for 2020 through 2029, we assume 5 percent annual growth.
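The chain of calculations above, from the $7.7 billion per-percent wedge through the $11 billion 2020–29 average to the roughly $5.5 billion net social loss, can be reproduced in a few lines. This is a sketch that takes the stated assumptions (the doubled wedge from equal supply and demand elasticities, 5 percent annual growth, and the 50 percent deadweight factor) at face value.

```python
# Reproduces the wedge arithmetic described above. Per the stated assumption,
# a 1 percent wage reduction moves the market about 1 percent down the labor
# supply curve and 1 percent up the labor demand curve, so the wedge between
# wages paid and marginal productivity is about 2 percent of the wage base.

WAGE_BASE_2017 = 386e9  # franchise + temporary help wages, 2017
GROWTH = 1.05           # 5 percent annual growth, as stated in the text

def wedge_2017(pct_wage_cut=0.01):
    """Aggregate wedge in 2017 dollars (~$7.7 billion at a 1 percent cut)."""
    return 2 * pct_wage_cut * WAGE_BASE_2017

def avg_wedge_2020_29():
    """Average annual wedge for 2020-29 (~$11 billion a year)."""
    yearly = [wedge_2017() * GROWTH ** (year - 2017) for year in range(2020, 2030)]
    return sum(yearly) / len(yearly)

def net_social_loss():
    """Applies the 50 percent deadweight cost factor (~$5.5 billion a year)."""
    return 0.5 * avg_wedge_2020_29()

# Krueger and Ashenfelter (2018) bound: the wedge is about one-sixth of the
# inverse of the industry labor supply elasticity; with equal supply and
# demand elasticities, wages fall by roughly half the wedge.
def wage_depression(supply_elasticity):
    return (1 / 6) / supply_elasticity / 2  # elasticity 1 -> ~8%; 4 -> ~2%
```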
But potentially unique to independent contractors is the direction of the “tax gap”: the expansive independent contractor standard may increase revenues from employee and business taxation, whereas the expansive joint-employer standard may reduce them.32 The additional labor or capital tax revenue resulting from the regulation reflects a benefit, although there is also a cost in the other direction due to a reduction in the aggregate supplies of labor and capital (recall figure 2-6). In summary, the recent decisions by the NLRB and the Department of Labor to return to the narrow joint employer standard will create an annual net benefit of billions of dollars in the forms of added competition and productivity in low-skill labor markets. A rough estimate suggests that the net annual benefit will probably exceed $5 billion.
32. One issue, not yet resolved by the CEA, is whether the business income of franchisers is taxed at a lower marginal rate than the business income of franchisees, especially now that the statutory corporate income tax rate has been cut. Regarding the employee / independent contractor tax gap, for varying conclusions see Bauer (2015) and Eisenbach (2010).

Conclusion

Regulation involves trade-offs. Many regulatory actions have helped protect workers, public health, safety, and the environment. However, ever-growing cumulative regulatory costs have burdened the U.S. economy. In 2017 and 2018, the Trump Administration’s regulatory cost caps turned around the growth in regulatory costs. Small business owners, consumers, and workers gain when less regulation means lower business costs, lower consumer prices, more consumer choice, and higher worker productivity and wages that exceed any reduction in the regulations’ benefits.
Guided by cost-benefit analyses, Federal agencies are eliminating and revising regulatory actions when the benefits do not justify the costs, and are improving the cost-effectiveness of regulatory actions in accomplishing their important goals. This chapter has used an economic framework to analyze the need for, and potential of, the Trump Administration’s deregulatory agenda. The framework emphasizes what small-business owners have long known: that regulatory costs accumulate and multiply. When an industry is regulated, the effects are felt across the U.S. economy. In 2017 and 2018, the economy started to grow stronger as the cost savings from deregulatory actions began to accumulate. Deregulation is improving the country’s fundamental productivity and incentives to enable sustained economic growth.

Chapter 3

Expanding Labor Force Opportunities for Every American

Consistent with the robust pace of economic growth in the United States, the labor market is the strongest it has been in decades, with an unemployment rate that remained under 4 percent for much of 2018. Although this low unemployment rate is a sign of a strong job market, there is a question whether the rapid pace of hiring can continue and whether there are enough remaining potential workers to support continued economic growth. This pessimistic view of the economy’s potential overlooks the extent to which the share of prime-age adults who are in the labor market remains below its historical norm. It also fails to capture the extent to which these potential workers could be drawn back into the labor market by increasing worker productivity and wages, as well as by correcting labor market distortions from past tax and regulatory policies. This chapter explores trends in employment and wages, as well as the positive effects of the Trump Administration’s policies on increasing the returns to work and encouraging additional adults to engage in the labor market.
Fundamentally, when people opt to neither work nor look for work, it is an indication that the after-tax income they expect to receive in the workforce is below their “reservation wage”—that is, the minimum value they give to time spent on activities outside the formal labor market. For some, this reflects the low wages that they expect to earn through formal work—either because they lack the education and skills desired by employers or because firms lack the physical capital necessary to enhance their productivity. Reskilling programs can prepare these individuals for higher-wage jobs. Similarly, the reduction in corporate income tax rates, the expensing of businesses’ investment in equipment, and the creation of the Opportunity Zones provided by the Tax Cuts and Jobs Act each makes it less costly for firms to invest in the necessary physical capital to increase worker productivity, which results in higher wages. Consistent with strong economic growth, wages continued to increase through 2018, and this wage growth has been particularly strong among the lowest-earning workers. This wage growth has the potential to give more people incentives to begin looking for work. For others who remain outside the formal labor market, this decision reflects the tax and regulatory distortions that limit the after-tax return that they would receive from formal work. Some regulations, such as occupational licensing, directly raise the costs of entering the labor market and therefore reduce the number of people seeking work. The high cost of child care, in part driven by regulatory and other requirements, provides a disincentive to work in the formal labor market and an incentive to take care of one’s own children rather than hire others to do so. This Administration’s deregulatory policies have reduced these labor market distortions, thereby drawing some prime-age workers back into the labor market.
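The reservation-wage decision described above reduces to a simple comparison, sketched here with hypothetical numbers: a person enters the labor force only if the after-tax wage exceeds the value they place on time outside it.

```python
# Minimal sketch of the participation decision described in the text. All
# wage, tax, and reservation-wage numbers are hypothetical illustrations.

def participates(gross_wage, marginal_tax_rate, reservation_wage):
    """True if after-tax pay beats the value of time outside the labor market."""
    return gross_wage * (1 - marginal_tax_rate) >= reservation_wage

# A wage increase (higher productivity) or a tax cut can each flip the decision:
print(participates(15.00, 0.30, 12.00))  # False: 10.50 < 12.00
print(participates(18.00, 0.30, 12.00))  # True:  12.60 >= 12.00
print(participates(15.00, 0.15, 12.00))  # True:  12.75 >= 12.00
```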
This chapter outlines recent labor market trends, both for the adult population as a whole and for key population subgroups. It also considers policies that could further remove government distortions and increase the after-tax return to formal work, thereby increasing work incentives for potential entrants into the labor market. These policies will encourage additional workforce growth and further expansion of the U.S. economy.

For adults (age 16 years and over) who are working or looking for work, the current labor market is among the strongest in recent decades. The economy is in the midst of its longest consecutive streak of monthly job creation in at least 80 years. The national unemployment rate sat at 3.9 percent in December 2018, after reaching a near 50-year low in November. Since 1970, the national unemployment rate has been below 4 percent during only 13 months, with 8 of these months occurring in 2018. Additionally, for the first time since the Bureau of Labor Statistics (BLS) began tracking job openings in 2001, there are more job openings than unemployed workers, suggesting that firms still seek to hire more people. Despite the strong job market and the surplus of open positions, millions of adults are neither employed nor seeking work. Because these individuals are not actively looking for work, they are considered to be out of the labor market and are not counted as unemployed, even though many of them are in their prime working years and could be working. In fact, the share of prime-age (25–54 years) adults who are working remains below the share seen at the peaks of the previous two economic expansions. The availability of these prime-age adults who currently remain outside the labor force creates the potential for continued increases in employment, despite the historically low unemployment rates.
Doing so, however, necessitates a better understanding of the reasons these adults are currently not working and the development of economic policies and workforce training opportunities to draw them into the labor market. A key component of efforts to draw additional workers into the labor market is increasing the potential wages that they could receive. This is because, when jobs are available and plentiful, the decision to remain out of the labor force signals a belief among those individuals that the wage they could receive is below the value they place on their time outside the labor market. Among those who are employed, there is evidence that wages are rising for the typical worker. Real hourly earnings based on the Personal Consumption Expenditures (PCE) Price Index, a measure of inflation, rose in 2018 by 1.5 percent for all workers and by 1.7 percent for nonsupervisory workers. This is the sixth consecutive year of positive real hourly earnings growth for nonsupervisory workers and the longest streak since the eight years of consecutive earnings growth from 1995 through 2002. Encouragingly, wage growth is accelerating, as real hourly earnings increases for both all workers and nonsupervisory workers in 2018 exceeded those in either 2016 or 2017. Wage gains in 2018 have also been particularly strong among the lowest-earning workers. These wage gains are an indication that policies designed to increase the productivity of workers, such as the corporate tax rate reductions in the Tax Cuts and Jobs Act, are translating into higher paychecks. But despite these recent improvements, there is still room for further wage growth, both from new policies designed to enhance productivity and from the effects of recent policies reaching additional workers.
With the dual goals of further growing the workforce and increasing the wages of those who are working, in this chapter we consider labor market trends in recent decades, both for the population as a whole and for key demographic subsets. While recognizing that most adults engage in productive nonwork activities, we also explain how potential distortions caused by taxes and regulations can lead some adults whose most productive use of time is in the formal labor market to instead engage in other activities. Furthermore, we consider the reasons that the potential wage rates for some workers on the sidelines are below their reservation wages, and what could be done to enhance their productivity and increase their potential after-tax earnings if they entered the labor market. Finally, we discuss this Administration’s policies to increase economic opportunities for a diverse range of adults to enable them to engage more fully in the growing economy.1

Long-Run Trends in Adult Employment, Labor Force Participation, and Wage Earnings

What do long-run trends in the labor market tell us about the economy? This section considers these trends, focusing on adult employment, labor force participation, and wage earnings.

Employment and Labor Force Participation

From 1960 through 2001, there was a marked increase in adult (age 16 and older) labor force participation (i.e., the share of all adults who are working or unemployed and looking for work) in the United States (see figure 3-1). Largely driven by more adult women entering the workforce, the participation rate rose by over 8 percentage points, from 58.5 percent in March 1960 to 67.1 percent in February 2001.2 Since 2001, however, the trend in participation rates has reversed, and they have been in decline. By the end of 2015, the 62.7 percent participation rate was more than 4 percentage points below its 2001 peak. Earlier in 2015, participation rates had reached their lowest level since 1977.
This decline can only be partially attributed to the Great Recession, as participation rates also fell before the recession, from 2001 through 2007, and after the recession, from 2009 through 2015. From 2015 through 2018, the participation rate stabilized, and as of December 2018, it remained at 63.1 percent. The share of all adults who are working—the employment-to-population ratio—shows a similar long-run pattern, but has additional business cycle volatility because unemployment rises during recessions and falls during expansions. In recent years, as the labor force participation rate has stabilized and the unemployment rate has fallen, the share of all adults who are working has risen. In December 2018, 60.6 percent of adults were working, which is more than 2 percentage points above where it stood seven years earlier, in 2011. But it still remains considerably below where it stood at the turn of the 21st century.
1. The CEA previously released research on the topics covered in this chapter. The text that follows builds on these research papers produced by the CEA: “Returns on Investments in Recidivism-Reducing Programs” (CEA 2018e); “Addressing America’s Reskilling Challenge” (CEA 2018a); “Military Spouses in the Labor Market” (CEA 2018d); and “How Much Are Workers Getting Paid? A Primer on Wage Measurement” (CEA 2018c).
2. These dates reflect the final month before the official start of each recession, according to the National Bureau of Economic Research (NBER 2010).
[Figure 3-1. Labor Force Participation Rate and Employment-to-Population Ratio, 1950–2018. Source: Bureau of Labor Statistics. Note: Shading denotes a recession.]
In part, the decline in both the labor force participation rate and the employment-to-population ratio since 2001 was to be expected, due to the aging of the population.
For over 30 years, from the late 1960s through the late 1990s, the share of the population over age 55 was nearly unchanged, making up between 26 and 28 percent of all adults. But since then, as members of the Baby Boom generation have aged, this stability has dissipated, with those over age 55 growing as a share of all adults. In 2018, more than 35 percent of the adult population was over age 55—with 19 percent over age 65 (see figure 3-2). As those who are of traditional retirement age account for a larger share of the total adult population, participation rates will decline if rates for workers at any given age remain unchanged. The effects of the aging population on overall employment and participation rates have been partially offset by rising participation of those at or near traditional retirement age (see figure 3-3). As discussed in chapter 3 of the 2018 Economic Report of the President, this increase in participation rates among older adults is partially attributable to improved health statuses relative to earlier cohorts (CEA 2018b). The increase is also consistent with policy changes that reduce the incentives to retire early, including the delayed full retirement age for Social Security and the encouragement of defined contribution retirement plans, which do not have built-in incentives for early retirement.
[Figure 3-2. Adult Population by Age (Years), 1950–2018. Sources: Bureau of Labor Statistics; CEA calculations.]
[Figure 3-3. Labor Force Participation Rate by Age, 1950–2018. Source: Bureau of Labor Statistics.]
But even as the most recent cohorts of adults reaching older ages are working more than those of similar ages did in the past, those over age 65 still work at substantially lower rates than younger age groups. Hence, the higher-than-traditional participation rates of these older adults are not sufficient to fully offset the loss in participation associated with the aging population. To separate out the effects of aging, economists often focus on prime-age adults, who are age 25–54 years. This group is of particular importance because its members generally are neither in school nor retired. Thus, they represent those adults who are most expected to be working. Among prime-age adults, labor force participation rates fell from a high of 84.1 percent in 1999 to a 30-year low of 80.9 percent in 2015. This decline in prime-age participation accounted for between 35 and 40 percent of the overall decline in participation over this period—indicating that the falling overall participation rates among the adult population over the past 20 years cannot be attributed to aging alone. However, the last three years have been more positive with regard to prime-age participation. In both 2016 and 2017, the labor force participation rate of prime-age adults rose by 0.4 percentage point, offsetting some of the declines over the previous 15 years. In 2018, the growth in participation of prime-age adults continued, suggesting that recent progrowth policies are encouraging more businesses to hire and more people to enter the labor force. Although it is common to consider prime-age employment as a whole, embedded within both the long-term decline in employment rates and the improvements of the past few years are diverse population groups with occasionally differing trends. These trends have been markedly different for males and females, for married and single individuals, and for those who do and do not have children.
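The claim that aging alone cannot explain the participation decline can be illustrated with a shift-share decomposition, which splits a change in the aggregate participation rate into a population-share (aging) component and a within-group rates component. The group shares and rates below are hypothetical, chosen only to show the mechanics; they are not the CEA's data.

```python
# Illustrative shift-share decomposition of a change in the aggregate labor
# force participation rate. All shares and rates are hypothetical.

groups = ["16-24", "25-54", "55+"]
share_t0 = [0.15, 0.55, 0.30]; rate_t0 = [0.60, 0.84, 0.35]
share_t1 = [0.13, 0.50, 0.37]; rate_t1 = [0.55, 0.81, 0.40]

def aggregate(shares, rates):
    """Aggregate participation rate: population-share-weighted group rates."""
    return sum(s * r for s, r in zip(shares, rates))

total_change = aggregate(share_t1, rate_t1) - aggregate(share_t0, rate_t0)

# Decompose using midpoint weights so the two pieces sum exactly to the total.
aging = sum((s1 - s0) * (r0 + r1) / 2
            for s0, s1, r0, r1 in zip(share_t0, share_t1, rate_t0, rate_t1))
within = sum((r1 - r0) * (s0 + s1) / 2
             for s0, s1, r0, r1 in zip(share_t0, share_t1, rate_t0, rate_t1))

assert abs(total_change - (aging + within)) < 1e-9  # exact identity
```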
Although all races and ethnicities have seen increases in their employment rates in recent years, the relative increases among African Americans and Hispanics have been particularly strong, and the employment rate among working-age adults with a high school degree or less grew faster in 2018 than the rate for those with more education. Although employment in rural areas still lags far behind that in urban areas, recent employment growth in manufacturing and other sectors that are disproportionately located in rural communities offers hope for an employment recovery in those communities. To the extent that employment trends differ across population groups or geographies, there is an opportunity both to explore the sources of these divergences and to consider targeted policies that address specific challenges and further increase labor market participation, as is done later in this chapter.

Wages and Labor Earnings

The wages that workers earn in the labor market are of similar importance to employment trends for determining the financial well-being of American households. Economists have long understood that increases in wages, as well as increases in the demand for labor, are driven by rising worker productivity (Hellerstein, Neumark, and Troske 1999). In a competitive labor market, firms pay workers a wage that is equal to the value of their marginal product. As worker productivity increases, the value of each hour of labor to firms will rise.3 Consequently, wages will rise as well, because firms that do not increase their pay will see their workers go to other, higher-paying firms. Hence, policies that increase workers’ skills, such as additional education or training, increase the amount firms are willing to pay for their now-more-productive labor.
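The competitive-wage logic above can be sketched with a standard Cobb-Douglas production function (parameter values are illustrative, not from the Report): the wage equals the marginal product of labor, which rises when workers have more capital to work with.

```python
# Sketch of the mechanism described in the text: under a Cobb-Douglas
# production function Y = A * K**(1 - alpha) * L**alpha, the competitive wage
# equals the marginal product of labor, which increases with the capital
# stock. The parameter values below are illustrative assumptions.

def marginal_product_of_labor(K, L, A=1.0, alpha=0.7):
    """dY/dL for Y = A * K**(1 - alpha) * L**alpha."""
    return alpha * A * K ** (1 - alpha) * L ** (alpha - 1)

w_low_capital = marginal_product_of_labor(K=100, L=100)
w_high_capital = marginal_product_of_labor(K=150, L=100)
assert w_high_capital > w_low_capital  # more capital per worker -> higher wage
```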
Policies that increase the amount of capital that workers have at their disposal to produce goods and services will also increase their productivity and, subsequently, increase their wages. For example, because high corporate taxes act as a disincentive for firms to invest in the capital that would make workers more productive, numerous researchers have found that workers bear much of the burden of corporate taxes. Consequently, reductions in the corporate income tax rate lead to increases in the wages paid to workers (Hassett and Mathur 2006; Desai, Foley, and Hines 2007; Felix 2007, 2009). (For a more detailed discussion of this relationship with respect to the Tax Cuts and Jobs Act, see chapter 1.) Although the BLS and other Federal agencies report several different wage measures, each of which has its own advantages, we focus here on wage trends from the Census Bureau’s Current Population Survey (CPS). (For a comparison of the 12 surveys and programs administered by the BLS, with information on pay and benefits, see BLS 2018.) The statistics reported by the BLS using CPS data focus on full-time workers and do not capture the value of fringe benefits and bonuses—whose growth has contributed to total compensation growth among workers in recent decades.4 They also do not directly capture the changing composition of the workforce, including the education and skill levels of those who are working.5 In addition, these statistics focus on wages for all working adults (age 16 and older), and not just those of prime working age. However, these data are particularly useful for understanding the full distribution of wage trends, given that researchers can use them to consider wages at different points in the distribution. Figure 3-4 shows the trend in nominal wage growth among all adult, full-time workers in the CPS data. In the fourth quarter of 2018, median nominal weekly wages grew by 5.0 percent over the previous year.
[Figure 3-4. Nominal Weekly Wage Growth Among All Adult Full-Time Wage and Salary Workers, 2010–18. Sources: Bureau of Labor Statistics; Current Population Survey; CEA calculations. Note: Data are non–seasonally adjusted.]
Under any measure of inflation, this suggests that real wages are growing. Based on the Consumer Price Index for All Urban Consumers (CPI-U), which the BLS traditionally uses to track inflation, real median weekly wages of full-time workers grew by 2.7 percent from the fourth quarter of 2017 through the fourth quarter of 2018. On the basis of the Chained Consumer Price Index (Chained CPI), which academics consider a more accurate reflection of changes in the cost of living than the CPI-U, and which is now used for indexing tax brackets, real median weekly wages of full-time workers grew by 3.0 percent during this time. Moreover, based on the PCE Price Index, which is the inflation measure preferred by the Congressional Budget Office (2018) and the Federal Reserve Board of Governors (2000), real median weekly wages of full-time workers grew by 3.1 percent over this time. In addition, recent wage growth has been the fastest for those at the bottom of the wage distribution.
3. Labor productivity affects wages regardless of whether the labor market is competitive, “monopsonistic,” or “monopolistic.”
4. From 1982 through 2018, total compensation in the BLS Employment Cost Index grew about 0.3 percent per year faster than wages alone from the same survey.
5. For a broader discussion of these composition effects, see CEA (2018b). Some researchers, including Daly and Hobijn (2017) and the Federal Reserve Bank of Atlanta (2018), attempt to correct for these composition effects by controlling for worker characteristics or following the same workers over time.
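The conversion from the 5.0 percent nominal figure to these real growth rates is a standard deflation. In the sketch below, the implied Q4-over-Q4 inflation rates are backed out from the real growth figures the chapter reports, so they are approximations rather than published index values.

```python
# Deflating the 5.0 percent nominal growth in median weekly wages by
# different price indexes. The nominal figure is from the text; the implied
# inflation rates are assumptions backed out from the reported real growth.

def real_growth(nominal, inflation):
    """Exact (not additive) conversion: (1 + g_nominal) / (1 + pi) - 1."""
    return (1 + nominal) / (1 + inflation) - 1

NOMINAL = 0.050
implied_inflation = {"CPI-U": 0.0224, "Chained CPI": 0.0194, "PCE": 0.0184}

for index_name, pi in implied_inflation.items():
    print(index_name, round(real_growth(NOMINAL, pi), 3))
```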
Over the past two years (from the fourth quarter of 2016 through the fourth quarter of 2018), nominal wages at the 10th percentile of the full-time wage distribution have increased by an annual average of 4.8 percent. Looking just at the past year (from the fourth quarter of 2017 through the fourth quarter of 2018), wage growth at the 10th percentile was an even stronger 6.5 percent. This wage growth at the 10th percentile over the past two years outpaces the 3.0 percent annual growth in median nominal wages among full-time workers and the 3.5 percent annual growth at the 90th percentile. The trend under this Administration for wage gains at the full-time wage distribution’s 10th percentile to exceed the growth rate for the distribution’s middle and top stands in sharp contrast to that seen in the 2001–7 business cycle. During that period, wage growth for the 10th percentile was frequently the slowest of these three measures. Year-over-year wage growth for the 10th percentile outpaced wage growth for the 90th percentile in only six quarters over the six-year period, and even then did so by more than 0.4 percentage point only once. The bottom of the distribution did experience rapid wage growth in 2013, but that growth followed 2012, when there were nearly no wage increases at the bottom of the distribution, and it was not sustained in subsequent years. If the trend under this Administration continues, with the most rapid earnings gains occurring among those lower in the wage distribution, it would be consistent with the late 1990s, when unemployment was similarly low and the bottom of the distribution also experienced several years of robust wage growth (Ilg and Haugen 2000).
Although the most recent quarter saw the nominal weekly wages at the 10th percentile and at the median for full-time workers grow at their fastest year-over-year pace since at least 2001, some economists question why wage growth has not been faster in recent decades. In addition, given the current strong labor market and low unemployment rate, one could have expected even larger wage gains in recent years. The primary factor in understanding wage growth is the productivity growth of workers, given the close link between productivity and wages. During the period from the official end of the Great Recession in June 2009 through the end of 2016, productivity growth averaged just 0.9 percent a year.6 This is dramatically slower than the productivity growth during the previous two expansions (figure 3-5). During both the 1991–2001 and 2001–7 business cycle expansions, as dated by the National Bureau of Economic Research (NBER), productivity growth exceeded 2 percent a year, on average.
6. The BLS uses the Implicit Price Deflator when tracking productivity growth. This deflator shows slower growth than the CPI-U, Chained CPI, and PCE inflation indexes. Consequently, research comparing productivity growth with wage growth must take care to use the same inflation measures; failing to do so results in an artificial gap between compensation growth and productivity growth (Brill et al. 2017).
[Figure 3-5. Nonfarm Business Sector Real Output per Hour, 1980–2018. Source: Bureau of Labor Statistics. Note: Shading denotes a recession.]
Economists disagree on the long-term potential for high productivity growth. Fernald (2015) notes that a productivity slowdown predated the Great Recession and believes that the period of strong productivity gains in the mid-1990s and early 2000s was the exception. Others, including Yellen (2016), are more optimistic about potential productivity improvements. This view is supported by Borio and others (2015), who found that the credit boom and subsequent financial crisis misallocated labor to sectors with low productivity growth, suggesting that the recent period of low productivity growth is not reflective of the economy’s future potential. Between 2017 and 2018, productivity growth ticked up, averaging 1.3 percent a year through the third quarter of 2018. Although this remains below the productivity gains during previous expansions, these improvements are consistent with the optimistic perspective that there is still potential for faster productivity growth. Moreover, the changes in the Tax Cuts and Jobs Act intended to boost productivity by encouraging capital investment were only recently enacted, so their full effect on productivity has likely not yet been realized. These changes include the reduction in corporate tax rates discussed in chapter 1 and the creation of Opportunity Zones discussed later in this chapter. Although productivity growth is the most important driver of wage gains, some economists have also debated nonproductivity factors that affect wage growth. Much of this discussion relates to the bargaining power of workers. One reason for this lack of bargaining power is the possibility that the economy remains in a relatively elastic range of the labor supply curve, meaning that there are still more potential workers who would be willing to work without a substantial increase in the wage rate. Historically, the current low unemployment rate would suggest that firms desiring to hire additional workers would need to increase wages (Leduc and Wilson 2017).
Share of Adults Starting Work Who Were Not in the Labor Force Rather Than Unemployed, 1990–2018 Percent 75 2018:Q4 70 65 60 55 50 1990 1995 2000 2005 2010 2015 Sources: Bureau of Labor Statistics; Current Population Survey; CEA calculations. Note: Shading denotes a recession. focusing exclusively on the unemployment rate ignores the lower prime-age employment-to-population ratio than in earlier decades as well as the growing share of older workers in good health who could be drawn into the labor market. These potential workers who are not currently in the labor market contribute to the elasticity of the labor supply. For this reason, Ozimek (2017) recently suggested that the ratio of employment to population is more relevant than the unemployment rate for understanding wage growth trends. In support of this theory, researchers can use CPS data (which track individuals over several months) to observe the prior-month labor force status of those who find employment in any given month. These data include both adults who are starting work for the first time and those who are starting a job after a period of not working. In the fourth quarter of 2018, 73.1 percent of all adults who started working had been out of the labor force in the previous month—compared with just 26.9 percent who had been unemployed (figure 3-6). This is the largest share coming from out of the labor force since tracking of labor flows began in 1990. It suggests that firms are finding workers who are not currently in the labor force and that these adults who are currently out of the labor force remain relevant for understanding both wage growth and the potential for further increases in employment. An additional hypothesis that some have recently considered for the slower-than-expected wage growth in recent decades is that firms are exercising 150 | Chapter 3 monopsony power in the labor market. 
Under this hypothesis, if the number of firms competing for workers decreases, the remaining firms gain market power and can depress wages (Webber 2015; Muehlemann, Ryan, and Wolter 2013; Ashenfelter, Farber, and Ransom 2010; Twomey and Monks 2011). Although it does appear that higher industry concentration can result in lower wages, recent research suggests that concentration has not meaningfully reduced wage growth during this period, because increases in concentration have not been large enough to play a meaningful role. Bivens, Mishel, and Schmitt (2018) find that increased concentration may have reduced wage growth by just 0.03 percent a year between 1979 and 2014. In addition, Rinz (2018) observes that when industry concentration is measured at the local level where firms actually compete for workers, rather than at the national level, concentration has declined over the past four decades, which runs counter to claims that rising concentration is slowing wage growth.

Prime-Age Employment by Gender

Shifting from labor market trends for the entire adult population to those for specific demographic groups, figure 3-7 shows the labor force participation rate and the ratio of employment to population among prime-age men and women since 1950. Individuals who are neither working nor looking for work are out of the labor force. The gap between the participation rate and the employment-to-population ratio reflects unemployed workers, so as unemployment falls, the gap between these two series declines. From 1950 until the early 1970s, over 95 percent of males age 25 to 54 were in the labor force every month. In the late 1960s, the combination of a strong labor market and a lack of young males looking for work in the civilian labor market due to the Vietnam War led to historically low unemployment rates.
Consequently, in 1968 not only were just over 95 percent of prime-age males in the labor force, but nearly 95 percent were working. Fifty years later, in December 2018, the employment rate for these prime-age males was nearly 10 percentage points lower, as just over 86 percent of these males were employed. This reflects a long-term secular decline in prime-age male employment. Although employment rates rise during economic expansions and fall during recessions, when looking at the peak employment rate across business cycles (based on NBER definitions), the peak employment rate of prime-age males in each business cycle since 1968 has failed to reach the peak achieved in the previous business cycle.7 During the current expansion, only in 2017 did the employment rate for prime-age males reach the trough of the previous business cycle, from 2003.

Figure 3-7. Labor Force Participation Rate and Employment-to-Population Ratio for Prime-Age Adults by Gender, 1950–2018 (percent, through December 2018; series shown: male and female participation rates and employment-to-population ratios). Source: Bureau of Labor Statistics. Notes: Prime-age adults are those age 25–54 years. Shading denotes a recession.

Although the prime-age male labor force participation rate has also fallen over this period, from nearly 95 percent in 1968 to 88.2 percent in December 2015, it appears to have leveled off in the past three years and stood at 89.0 percent in December 2018. As a result of the rising employment rate and flat participation rate for prime-age males since 2015, the gap between these two series (which reflects the share of prime-age males who are unemployed) has declined.

7. Employment in 1989 recovered to above the 1981 peak between the double-dip recessions. However, it did not reach the 1979 peak before this pair of recessions.
If the male participation rate had not stabilized after 2015, the continued growth in male employment that occurred would not have been possible without reaching an even lower rate of unemployment. Nonetheless, this long-term decline in the employment and labor force participation rates of prime-age males represents a substantial decline in the size of America's workforce. The gap of 1.1 percentage points between the current prime-age male employment-to-population ratio and that from November 2007, at the peak of the previous business cycle, reflects about 700,000 prime-age men who are not working. And the gap of 2.7 percentage points between the current employment-to-population ratio and that from February 2001, at the peak of the business cycle before that, reflects 1.7 million prime-age males who are not working. Some of these nonworkers are unemployed, while others remain out of the labor force. Because the number of prime-age males who are out of the labor force exceeds that seen in earlier business cycles, this represents an opportunity to further increase employment even while the unemployment rate remains near historical lows.

Despite the well-established decline in the prime-age male labor force participation rate and employment-to-population ratio over the past 50 years, the precise reasons for the decline remain unclear. One explanation for the recent weakness of male participation rates is the rise in opioid-related disorders (see box 3-1). The longer-term decline is also consistent with the pattern that employment growth over the past 40 years has been weakest in male-dominated industries, including manufacturing, which was in decline until recent years. Since the late 1960s, over two-thirds of all manufacturing workers have been male, including 72 percent of manufacturing workers in 2018.
However, over the 50 years from 1966 to 2016, the number of manufacturing jobs declined by over 5.5 million, even as total employment increased by 80 million jobs. Consequently, the share of males working in manufacturing fell from 30 to 12 percent. Similarly, mining and logging as well as construction, whose workforces are each nearly 90 percent male, both saw slower employment growth than the workforce as a whole. Since the fourth quarter of 2016, however, employment has increased by 3.6 percent in manufacturing, 8.5 percent in construction, and 16.1 percent in mining and logging. Each of these exceeds the 3.3 percent growth in total employment over this period.

In considering whether the long-term decline in male employment and labor force participation rates can be reversed, it is useful to look back to the 1960s, the last time the unemployment rate was below 4 percent for a longer consecutive stretch of months than in 2018. The strength of the labor market during the 1960s business cycle resulted in the prime-age male employment rate increasing from peak to peak; no business cycle since then has accomplished this feat. There is some early, limited evidence that the current strong labor market may at least be limiting the continued decline of participation among prime-age males, and perhaps increasing it slightly. In 2018, the average monthly participation rate of prime-age males was up 0.4 percentage point relative to 2017 and up 0.5 percentage point relative to 2016. The year 2018 was also the fourth consecutive year in which the average monthly prime-age male participation rate increased, the first time this has happened for four consecutive years since at least the 1950s. This indicates that more prime-age males are entering or staying in the labor force.
Standing in sharp contrast to the employment patterns of prime-age males over this period, the labor force participation and employment rates of prime-age females, as shown in figure 3-7, rose nearly continuously for 40 years from the late 1950s through the late 1990s. In the early 2000s business cycle, however, the consistent increases abated. The period from 2001 through 2007 saw the first peak-to-peak decline in either female employment or female participation rates in 50 years. Hence, the continued decline in prime-age male employment rates, along with the plateau of prime-age female employment rates, resulted in the overall decline in the share of prime-age adults who were working in the early 2000s. More recently, the employment rate of prime-age women, which was 73.4 percent in December 2018, finally surpassed the previous business cycle peak of 72.7 percent.

Box 3-1. The Opioid Epidemic and Its Labor Market Effects

The opioid epidemic that is affecting communities throughout the United States has resulted in a decline in the health of Americans and the health of the economy. Over the past decade, the number of opioid-related deaths in the United States per year has more than doubled, from 19,000 in 2007 to 49,000 in 2017 (NIH 2018). Life expectancy has fallen for the third year in a row, in part due to more frequent opioid and drug overdoses.

This opioid crisis has important economic repercussions. Ghertner and Groves (2018) find a correlation between substance use measures and economic measures, including unemployment rates and poverty. Although the Federal Reserve Board of Governors (2018a) does not find a similar correlation with objective economic outcomes, it does observe a correlation between opioid exposure and subjective perceptions of the local economy. The CEA (2017) found that the total cost of the opioid crisis was $504 billion in 2015; and several researchers, including Krueger (2017), have suggested that opioid usage has exacerbated the decline in labor force participation among prime-age males. Krueger (2017) notes that 47 percent of prime-age males who are out of the labor force report using pain medication, with almost two-thirds of them using prescription pain medication on a given day. He finds a strong association between county-level opioid prescriptions in 2015 and declines in labor force participation between 2000 and 2015, with opioid prescriptions potentially accounting for a decline of 0.6 percentage point in prime-age male participation during this period. Other recent research has also documented a strong link between opioid prescriptions and lower participation, using more detailed data on prescribing practices or including additional areas or years of data (Aliprantis and Schweitzer 2018; Harris et al. 2018). Although its applicability to the U.S. context is uncertain, Laird and Nielsen (2016) find evidence that such a link may be causal, at least in Denmark. They observe that when people who move their place of residence wind up with a doctor who tends to prescribe more opioids, they are more likely to drop out of the labor force. However, Currie, Jin, and Schnell (2018) do not find evidence that higher rates of prescription opioids reduce participation in the United States. Ultimately, more research is needed to determine what impact illicit opioid use may have on labor market activity. Nonetheless, the strong link suggests that the fatal costs of the opioid epidemic may not capture its full cost to society.

In response, President Trump has mobilized the Administration to confront this crisis. In October 2017, the President declared a national public health emergency, which directed all executive branch agencies to employ every appropriate resource to combat the opioid epidemic (White House 2018b). By enlisting the aid of the executive agencies, the President has expanded access to services while also seeking to limit the availability of prescription and illicit opioids. In March 2018, President Trump launched the Initiative to Stop Opioid Abuse and Reduce Drug Supply and Demand, which seeks to counter the epidemic through primary prevention, evidence-based treatment, and recovery support services. This includes implementing the Safer Prescribing Plan, which supports State prescription drug monitoring programs and calls for all federally employed healthcare providers and nearly all federally reimbursed opioid prescriptions to follow best practices within five years. It also targets overprescription and illicit drug supplies by enlisting the Department of Justice to crack down on illegal supply chains in U.S. communities.

With help from Congress, the President has signed into law the SUPPORT for Patients and Communities Act, a step forward in fighting the opioid epidemic. This legislation improves access to treatment and recovery services, improves the inspection capabilities of mail-handling facilities to detect controlled substances entering the United States, and authorizes grants to States for their work monitoring substance use. The President and Congress allocated $6 billion in new funding in the Budget Resolution for 2018 and 2019 to further the fight against the opioid epidemic. With more resources, executive branch agencies are now able to scale up their efforts to contain the effects of opioid misuse while providing more resources to Americans seeking help and treatment. By counteracting the damaging health effects of the opioid crisis, the labor market will continue to improve, as more Americans leave the sidelines and enter the workforce. The Administration's commitment to American workers goes beyond a prosperous and booming economy; it also includes encouraging healthy and productive lives.
In addition, the labor force participation rate of prime-age women increased from 73.9 percent in December 2015 to 75.9 percent in December 2018. As a result of the rapid growth in female employment and the slower growth in female participation, the gap between these two series (which reflects the share of prime-age females who are unemployed) has declined. As was the case for males, if the female participation rate had not increased since 2015, the continued growth in female employment that occurred would not have been possible without reaching an even lower rate of unemployment. Nevertheless, female prime-age employment remains more than 10 percentage points below that of prime-age males. This suggests that policies that remove remaining barriers to work have the potential to further increase employment rates among prime-age females. These include the paid family leave policies discussed in chapter 3 of the 2018 Economic Report of the President (CEA 2018b), the transfer program policies discussed in chapter 9 of this Report to encourage self-sufficiency among low-income females, and the policies we discuss below that target middle-income females who bear the primary responsibility for child care.

Barriers to Work from Child Care Expenses

One potential reason for lower labor force participation rates among prime-age females than prime-age males is the division of child care and home production responsibilities across genders, especially among families with young children.8 The relationship between child care responsibilities and employment patterns among females can be seen in figure 3-8, which shows participation rates among prime-age females by marital status and by the presence and age of children in the household. As discussed above, during the 1980s the employment and labor force participation rates of females rose sharply.
Figure 3-8. Labor Force Participation Rates Among Prime-Age Females by Marital and Parental Status, 1982–2018 (series shown: single and married females with no children, with children under age 6, and with children age 6 or older). Sources: Current Population Survey; CEA calculations. Notes: Data represent an annual average across all months. Data for 2018 are the average through July. Prime-age females are those age 25–54 years.

From figure 3-8, we see that the increase in participation in the 1980s came primarily among married females. The participation rates of married mothers of young children under age 6, married mothers of older children, and married females without children at home all increased by at least 7 percentage points over the decade from 1982 to 1992. No similar increases were seen during the 1980s among single females with or without children. Consequently, by the early 1990s, the participation rates for mothers of young children were similar across marital statuses, as were the rates for mothers of older children. However, participation rates still differed based on the age of the child, because the rates for both married and single females with young children were substantially below those for females with either no children or older children.

The similarity in the labor force participation rates of married and single mothers of young children in the early 1990s proved temporary, however. Starting in the mid-1990s, there was a dramatic increase in the participation rate of single mothers with young children, which rose by over 15 percentage points between 1994 and 2001, while their unemployment rate also declined. As discussed in greater detail in chapter 9, this increase has largely been attributed to the success of social assistance programs and welfare reforms that brought these single mothers with children into the labor market (Juhn and Potter 2006; Meyer 2002). Similar increases did not occur among married mothers of young children, whose participation rate reached a plateau in the early 1990s. In contrast to the pattern for single mothers, the 63.1 percent participation rate among married mothers with children under age 6 in 2018 is slightly below where it was in 1994. The participation rate among married mothers of young children is also well below that for all other prime-age females. Prime-age married females with young children are less likely to be working or looking for paid work than are other married females, and they make up a disproportionately large share of all prime-age married females who are out of the labor force. Although married mothers with young children represent 27 percent (10.1 million / 36.9 million) of all married prime-age females, they make up 37 percent (3.7 million / 10.2 million) of all married prime-age females who are out of the labor force (table 3-1).

8. An increasing share of fathers also indicate that they are not working because they are taking care of their family. In the March 2018 CPS, 1.6 percent of prime-age fathers of young children (under age 6) said they were not working because they were taking care of their family, up from 0.2 percent in 1989. This trend could be an exacerbating factor in the broader decline in prime-age male employment over this period. Nevertheless, the fewer than 2 percent of prime-age fathers of young children who are not working to take care of their family is far below the 28 percent of prime-age mothers of young children who were not working to take care of their family.
These married mothers of young children who are out of the labor force are distributed across the entire educational spectrum, although on average they have somewhat less education than married mothers of young children as a whole. About 35 percent of married mothers of young children who are out of the labor force have a high school degree or less, whereas 39 percent have at least a bachelor's degree. This compares with 24 percent of all prime-age married mothers of young children who have a high school degree or less.

Table 3-1. Number of Prime-Age Females by Marital and Parental Status, 2018 (thousands)

Category                            Employed   Unemployed   Not in labor force    Total
Single, no children                   13,249          524                3,610   17,382
Single, children under age 6           2,435          218                  901    3,554
Single, children age 6 or older        4,855          258                1,107    6,220
Married, no children                   9,695          258                3,068   13,021
Married, children under age 6          6,238          170                3,744   10,152
Married, children age 6 or older      10,110          259                3,379   13,748

Sources: Current Population Survey (CPS); CEA calculations.
Note: Average across monthly CPS data through July 2018. Prime-age females are those age 25–54 years.

Supporting the conclusion that child care responsibilities are important for the labor force participation decisions of parents with young children, the Federal Reserve's "Survey of Household Economics and Decisionmaking" finds that among nondisabled prime-age females (age 25–54) who are not working or are working part time and who have a child under age 6 at home, over 60 percent say that child care plays a role in this decision (Federal Reserve Board of Governors 2018b). Among nondisabled prime-age females whose youngest child is between the ages of 6 and 12, half of those who are not working and one-third of those working part time say that child care contributed to their decision.
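The 27 and 37 percent shares cited in the discussion of Table 3-1 can be reproduced directly from the table's counts. A minimal sketch in Python (counts in thousands, copied from the table; the dictionary keys are illustrative names, not from the source):

```python
# Counts (thousands) from Table 3-1 for married prime-age females.
# Each row: (employed, unemployed, not in labor force).
table_3_1 = {
    "married_no_children":    (9_695, 258, 3_068),
    "married_children_lt6":   (6_238, 170, 3_744),
    "married_children_6plus": (10_110, 259, 3_379),
}

def total(row):
    """Total population for a row: employed + unemployed + not in labor force."""
    return sum(row)

all_married = sum(total(r) for r in table_3_1.values())
married_lt6 = total(table_3_1["married_children_lt6"])

# Share of all married prime-age females who are mothers of young children.
share_of_married = married_lt6 / all_married

# Share of married prime-age females out of the labor force who are
# mothers of young children (third element of each row).
nilf_married = sum(r[2] for r in table_3_1.values())
share_of_nilf = table_3_1["married_children_lt6"][2] / nilf_married

print(f"{share_of_married:.0%}, {share_of_nilf:.0%}")  # 27%, 37%
```

The disproportion is visible in one line: mothers of young children are about 27 percent of the married prime-age female population but about 37 percent of those out of the labor force.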
This suggests that there remain females who are taking care of children rather than engaging in the formal labor market because of child care responsibilities. However, the survey does not differentiate between child care costs and other reasons that parents of young children may be less likely to pursue formal employment, such as a preference for working at home and investing directly in their children's well-being. In some instances, parents' opting to engage in child care activities may reflect an efficient allocation of resources, if this is a more efficient use of these parents' time than working in the formal labor market. However, several distortions of the child care market caused by tax and regulatory policies could prevent this from being the case.

One such distortion occurs because labor market activities are taxed, whereas time spent on home production activities is not. Consequently, some females may decide that their after-tax wage is too low to justify formal work, even though they would have chosen formal work had it not been for the taxes. Some programs exist to help offset this distortion. For example, the child and dependent care tax credit provides a tax credit of 20 to 35 percent of the first $3,000 of child care expenses for one child (and the first $6,000 for two or more children). However, this tax credit will not fully offset the distortion for those whose required expenses for basic child care exceed these amounts.

A second distortion of the child care market occurs because regulation can raise the costs of child care.9 Although some regulation is necessary for the safety and well-being of children, other regulations and requirements can raise the costs of this care, thereby reducing access to child care and discouraging parents from engaging in formal labor market activities.
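The mechanics of the credit described above can be sketched in a few lines. This is an illustrative simplification, not tax guidance: the `rate` argument is a parameter between 0.20 and 0.35 (under the actual credit the applicable rate depends on income, which this sketch leaves to the caller), and the function name is hypothetical.

```python
def dependent_care_credit(expenses, num_children, rate):
    """Illustrative sketch of the child and dependent care tax credit:
    a rate of 20 to 35 percent (depending on income) applied to the first
    $3,000 of expenses for one child, or the first $6,000 for two or more.
    """
    if not 0.20 <= rate <= 0.35:
        raise ValueError("credit rate is between 20 and 35 percent")
    cap = 3_000 if num_children == 1 else 6_000
    return rate * min(expenses, cap)

# A family with one child, $9,000 of annual child care expenses (about the
# national average cost for a four-year-old), and a 20 percent rate
# receives only a $600 credit -- far less than the cost of care, which is
# why the credit only partly offsets the tax distortion.
print(dependent_care_credit(9_000, 1, 0.20))  # 600.0
```

The `min(expenses, cap)` term is the key point: once expenses exceed the $3,000/$6,000 caps, additional spending on child care earns no additional credit.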
According to data from ChildCare Aware of America (2018), the average annual cost of child care nationwide for a four-year-old is about $9,000, whereas the average annual cost for an infant is about $11,500.10 The cost for toddlers typically falls between the higher infant cost and the lower four-year-old cost. For comparison with earnings from employment, these costs can be converted to hourly terms by dividing the cost of full-time care in each State by 2,000 hours (50 weeks multiplied by 40 hours per week). Based on these data, the hourly child care costs for a four-year-old and an infant are about $4.50 and $5.75, respectively. At the State level, these hourly costs range from $2.34 for a four-year-old and $2.65 for an infant in Mississippi to $9.33 for a four-year-old and $11.83 for an infant in the District of Columbia.

When considering the net returns to employment and whether to enter the labor market after having a child, these costs can offset any wages earned through employment. Figure 3-9 therefore shows the average hourly cost of center-based child care for four-year-old children and infants in each State as a share of the median before-tax hourly wage in the State. In every State, the hourly cost of care for one young child represents at least 15 percent of the median hourly wage. Parents with two or more young children in child care will bear a larger financial burden. Similarly, lower-income adults may pay a larger share of their wages on child care if they are unable to find lower-cost providers (see chapter 9 for a discussion of work and child care decisions for lower-income adults who are potentially eligible for welfare programs). Across all States, the hourly cost of child care for a single four-year-old is on average 24 percent of the State median wage, while the cost for an infant is 30 percent of the State median wage.

9. Although this section focuses on the high cost of child care and labor force participation rates, it is also the case that these costs can act as a disincentive to have children. Milligan (2005) and LaLumia, Sallee, and Turner (2015) find small increases in fertility in response to increased child tax benefits, although Crump, Goda, and Mumford (2011) suggest that this response is small and occurs only in the short term, and Baughman and Dickert-Conlin (2009) find no relationship between child-based tax credits and fertility rates among the targeted population. To the extent that fertility decreases as the cost of raising a child increases, high child care costs may also lead some people to forgo having children or to have fewer children. Though increased fertility rates are beneficial for long-run economic growth, exploring the relationship between child care costs and fertility rates is outside the scope of this chapter.

10. ChildCare Aware of America (2018) offers three separate methodologies for calculating the national average, producing a range of average costs for infants between $11,314 and $11,959. These estimates are broadly in line with estimates from other sources. In 2015, the National Survey of Early Care and Education found that the price of center-based child care for an infant was $4.40 an hour at the median and $7.80 an hour on average (HHS 2015). For 40 hours per week of care year-round, this reflects $9,152 (median) to $16,224 (mean) of expenses for the year. A separate survey of parents by Care (2018) found that the average cost of infant care paid by parents was $211 per week, or $10,972 for a year. Knop and Mohanty (2018) found in the 2014 Survey of Income and Program Participation that working mothers who paid for child care and whose youngest child is under age 6 paid an average of $240 per week for child care for all their children, although their analysis of the 2014 CPS found lower estimates.
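The conversion used above, from the annual cost of full-time care to an hourly equivalent, simply divides by 2,000 hours (50 weeks of 40 hours). A minimal sketch, using the national average costs from the text; the $23 median wage in the last example is a hypothetical figure for illustration, not a value from the report:

```python
FULL_TIME_HOURS = 50 * 40  # 50 weeks x 40 hours = 2,000 hours per year

def hourly_cost(annual_cost):
    """Hourly equivalent of an annual full-time child care cost."""
    return annual_cost / FULL_TIME_HOURS

def share_of_wage(annual_cost, median_wage):
    """Hourly child care cost as a share of an hourly wage."""
    return hourly_cost(annual_cost) / median_wage

# National averages from ChildCare Aware of America (2018):
print(hourly_cost(9_000))    # 4.5  (four-year-old)
print(hourly_cost(11_500))   # 5.75 (infant)

# Illustrative: at a hypothetical $23/hour median wage, infant care
# absorbs a quarter of the wage.
print(share_of_wage(11_500, 23.0))  # 0.25
```

This is the same arithmetic behind figure 3-9's State-by-State shares: divide the State's annual full-time cost by 2,000 hours, then by the State's median hourly wage.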
Including commuting times, when a child is with a paid caretaker but his or her parent is not paid for working, the financial burden is even greater.11 Given that a decrease in the cost of child care essentially means an increase in the effective wage rate for those who use child care to go to work, one way to determine the potential labor supply response to a reduction in the cost of child care is by considering estimates of the response of labor supply decisions to wages. If work increases when wages go up, then work should also increase when child care costs go down. Based on their extensive literature review, McClelland and Mok (2012) conclude that for every 1 percent increase in the wage rate, there is a 0.1 percent increase in the number of people who work, and a 0.1 percent increase in hours worked among those who were already working.12 For workers (or potential workers) earning $20 an hour facing child care costs of $5 an hour (about the national average cost for one child in center-based care), a 20 percent decrease in child care costs (from $5 to $4) would increase the effective wage by 7 percent. For workers (or potential workers) earning $20 an hour with two children in child care, the effective wage would rise by 20 percent (from $10 an hour, after $10 an hour of child care costs, to $12 an hour after including the now-reduced child care costs). Applying the labor supply elasticities from McClelland and Mok (2012), a 7 percent increase in the effective wage would increase the number of workers and hours worked among current workers by 0.7 percent each. A 20 percent increase in the effective wage would lead to an increase of 2 percent each. However, such calculations are only illustrative, because they do not account for the actual hours of child care purchased relative to hours worked, the number of children each family has in child care, or the wage distribution of people who might use child care. 
11 Based on data from the American Community Survey, workers who commute 5 days a week spend an average of about 4 hours a week commuting to and from work. 12 McClelland and Mok (2012) report a range of 0 to 0.2 for both the elasticity with regard to the decision to work and the elasticity with regard to how many hours to work. Here we use the middle value of 0.1.

[Figure 3-9. Child Care Costs as a Percentage of States' Median Hourly Wage, 2017. Bar chart showing, for each State and the District of Columbia, the hourly cost of center-based care for a 4-year-old, plus the additional cost for an infant, as a percentage of the State's median hourly wage. Sources: Child Care Aware of America (2018); Bureau of Labor Statistics; CEA calculations. Notes: Child care costs per hour are obtained by dividing the cost of full-time, center-based child care for 4-year-olds by 2,000 hours. Montana's child care costs are for 2016. Infant care costs are not available for South Dakota, so these costs are computed as the toddler care costs scaled up by the national average percentage difference between costs for infant and toddler care.]

McClelland and Mok (2012) also conclude from their
literature review that the labor supply elasticity of married females (about 0.2) is larger than that for unmarried females (about 0.05), suggesting that married females are more likely to respond to reductions in the cost of child care.13 An alternative way to assess the potential responsiveness of work to reductions in child care costs is to consider studies that explicitly test how previous reductions in child care costs affected work. Fortunately, a number of studies have explored this question. Baker, Gruber, and Milligan (2008) study a policy in Quebec that gradually provided new child care subsidies requiring parents to pay at most $5 a day for each child age four and under, regardless of family income. They find that child care subsidies increased the use of care by almost 15 percentage points, and increased labor force participation among mothers by close to 8 percentage points. Lefebvre and Merrigan (2008) find similar effects of Quebec's child care subsidies; and Lefebvre, Merrigan, and Verstraete (2009) find that these effects persist in the medium and long terms. Herbst (2017) uses historical U.S. data to estimate the impact of the Lanham Act of 1940, which provided child care funding to U.S. communities in response to the deployment of many males to World War II. He finds substantial effects on the labor supply of women in the 1950 and 1960 Census years. Outside North America, studies looking at Spain and Norway find mixed effects of child care subsidies on maternal employment (e.g., Havnes and Mogstad 2011; Nollenberger and Rodríguez-Planas 2015). In a review of the literature on the effects of child care costs on maternal labor supply, Morrissey (2017) concludes that a 10 percent decrease in costs increases employment among mothers by about 0.5 to 2.5 percent.
Altogether, the empirical evidence on the responsiveness of labor supply decisions to wages in general, and to child care costs more specifically, suggests that a reduction in the cost of care could increase both the number of people who participate in the workforce and the number of hours worked among current workers. This is consistent with responses from survey data showing that child care costs are an important barrier to work or to additional work.14

13 McClelland and Mok (2012) report a labor supply elasticity range of 0 to 0.3 for married women and 0 to 0.1 for men and single women. 14 Although we focus in this chapter on the labor supply effects of child care, the effects on child outcomes are also an important consideration. Here the evidence on child care subsidies is mixed, with Herbst and Tekin (2016) finding that children receiving subsidized early child care score lower on cognitive ability tests in kindergarten, and Havnes and Mogstad (2011) finding that subsidized care had strong positive effects on children's educational attainment and labor market outcomes.

Policies to Reduce Barriers to Work Resulting from Child Care Expenses

In considering potential policies to reduce the barriers to work from child care expenses among married mothers, it is useful to first consider how existing policies may result in the divergent employment outcomes for single and married mothers of young children discussed above. Some programs focus on reducing employment disincentives among both single and married parents. These include the child and dependent care tax credit, which provides a tax credit for a portion of child care costs when working; the Child Care and Development Fund, which provides assistance through block grants for people attending job training or educational programs; and dependent care flexible spending accounts, which allow parents to pay for child care expenses using pretax dollars.
Nevertheless, as discussed in greater detail in chapter 9, many of the public policies targeted at increasing employment in the 1990s were focused on families living in poverty, often without a worker in the family. These included the Earned Income Tax Credit (EITC), which provides substantial incentives to enter the labor market for single parents and parents without a working spouse. Hence, the EITC effectively increases the average hourly compensation of low- and middle-income parents who are entering the workforce, as long as there is no other working parent in the family, and this additional compensation can help offset child care expenses. However, among married women with a working spouse, the EITC typically has either no effect or a negative effect on after-tax hourly wages. This is because policymakers structured the EITC to incentivize work among low-income individuals who would represent the first worker in a family, rather than to encourage both parents to work. For example, consider the EITC that a married couple with two children and at least one full-time worker receives (table 3-2). The maximum EITC benefit of $5,716 is reached with just one full-time, year-round worker in the family making the federal minimum wage ($7.25 per hour); and if this worker makes $15 per hour, the couple will be in the phase-out region of EITC benefits without any earnings from the second parent. This means that once one family member is working full time, adding a second worker to the family cannot increase EITC benefits and will frequently reduce them. Without offsetting EITC benefits, the combination of child care expenses and tax liabilities can offset nearly all the financial benefits from work, even for relatively well-paid workers. Consider a married mother with one child whose spouse earns $20 per hour and who is considering starting to work full time herself at an hourly wage of $20.
If her child requires care that costs $5 per hour, these expenses would offset 25 percent of her pretax hourly wage. Based on the Urban Institute's (2012) "Net Income Change Calculator," which incorporates Federal, State, local, and payroll tax liabilities, the combined expenses from child care and taxes could constitute half of her pretax wages. If she had two children requiring care, the combination of additional taxes and child care expenses could represent about three-fourths of her pretax wages.

Table 3-2. EITC Benefits for a Married Couple with Two Children, Based on the Additional Earnings from a Second Full-Time Worker, 2018

First worker's hourly    Second full-time worker's hourly wage (dollars)
wage (dollars)             0.00    7.25   10.00   15.00   20.00
  7.25                    5,716   4,737   3,578   1,472       –
 10.00                    5,716   3,578   2,420     314       –
 15.00                    4,526   1,472     314       –       –
 20.00                    2,420       –       –       –       –
 25.00                      314       –       –       –       –

Sources: Internal Revenue Code; Internal Revenue Service (2018); CEA calculations.
Note: EITC = Earned Income Tax Credit. Assumes both workers are working full time, year round (40 hours per week and 50 weeks per year). The maximum possible EITC benefit that a parent of two children can receive is $5,716.

These substantial child care expenses act as a similar burden for a single female considering work, but the additional EITC benefits for which she qualifies through work will partially offset these expenses associated with entering the labor market. One way to reduce the financial burdens of child care for both single and married females considering working is to reduce the direct costs of care. Given the high costs of care relative to wages, it is important to consider how government policies may drive up these costs. Regulations that impose minimum standards on providers can decrease the availability and increase the cost of obtaining care, thus serving as a disincentive to work.
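The schedule in table 3-2 can be reproduced with a short calculation. The sketch below is illustrative only: the phase-in rate (40 percent), maximum credit ($5,716), phase-out threshold ($24,350), and phase-out rate (21.06 percent) are the 2018 values for a married couple with two children implied by the table, and statutory details such as the investment income limit are omitted:

```python
# 2018 EITC parameters for a married couple filing jointly with two
# children, as implied by table 3-2 (statutory details omitted).
PHASE_IN_RATE = 0.40
MAX_CREDIT = 5716
PHASEOUT_START = 24350
PHASEOUT_RATE = 0.2106
FULL_TIME_HOURS = 40 * 50  # full-time, year-round, as in the table

def eitc(earnings):
    """EITC for combined family earnings under the table's assumptions."""
    credit = min(PHASE_IN_RATE * earnings, MAX_CREDIT)
    credit -= PHASEOUT_RATE * max(0, earnings - PHASEOUT_START)
    return max(0, round(credit))

def household_eitc(first_wage, second_wage=0.0):
    """Credit when each earner works full time at the given hourly wage."""
    return eitc((first_wage + second_wage) * FULL_TIME_HOURS)

print(household_eitc(7.25))        # one minimum-wage worker -> 5716
print(household_eitc(7.25, 7.25))  # adding a second worker -> 4737
print(household_eitc(7.25, 20.0))  # credit fully phased out -> 0
```

As the calculation shows, adding a second full-time worker never raises the credit and usually reduces it, which is the second-earner disincentive discussed above.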
Because staff costs constitute the majority of child care costs, regulations that constrain the number, characteristics, and required activities of staff members can greatly affect costs (National Center on Early Childhood Quality Assurance 2015). Although States differ on which facilities are exempt from licensing requirements, all States license child care facilities and require a minimum ratio of staff members to children (for details on licensing regulations and exemptions from these requirements, see HHS 2014). For 11-month-old children, minimum staff-to-child ratios in 2014 ranged from 1:3 in Kansas to 1:6 in Arkansas, Georgia, Louisiana, Nevada, and New Mexico. For 35-month-old children, they ranged from 1:4 in the District of Columbia to 1:12 in Louisiana. For 59-month-old children, they ranged from 1:7 in New York and North Dakota to 1:15 in Florida, Georgia, North Carolina, and Texas. Assuming an average hourly wage of $15 for staff members (inclusive of benefits and payroll taxes paid by the employer), the minimum cost of staff per child per hour would range from $2.50 in the most lenient State to $5.00 in the most stringent State for 11-month-old children, from $1.25 to $3.75 for 35-month-old children, and from $1.00 to $2.14 for 59-month-old children. Figure 3-10 shows the distribution of States over minimum staff-to-child ratios, as well as the average hourly cost of center-based care in each State.

[Figure 3-10. Number of States and Average Center-Based Child Care Cost by Minimum Staff-to-Child Ratio and Age Group. Bar chart of the number of States at each minimum staff-to-child ratio (from 1:3 to 1:15) for infants and for 4-year-olds, with the average hourly cost of care for each group on a secondary axis. Sources: Child Care Aware of America; Early Childhood Training and Technical Assistance System. Notes: Infants are up to 11 months for maximum staff ratios and 12 months for average annual cost. Maximum ratio data are for 2014. Annual cost data are for 2017.]

For both infants and 4-year-old children, costs tend to fall as fewer staff members are required. Of course, minimum ratios are likely correlated with other State-level factors that determine costs, including demand from residents for different quality levels of child care. Still, these ratios may be binding constraints for many families, especially for low- to moderate-income families in States with high minimum ratios. In addition to the number of staff members required, the wages they are paid add to the overall cost of child care. Wages are based on the local labor market demand for the employees' skills and qualifications, as well as the availability of workers in the field. Regulations that require higher-level degrees or other qualifications drive up the wages needed to hire and retain staff, increasing the cost of child care. Although some facilities are exempt from these requirements, all States set minimum age and qualification requirements for staff, and some require a bachelor's degree for lead child care teachers. Other staff-related regulations that can drive up costs include required background checks and training requirements. In addition to standards regarding staff, many States set minimum requirements for buildings and facilities, including regulating the types and frequency of environmental inspections and the availability of indoor and outdoor space. Also, most States set a maximum number of children who can be included in a given care group, which can require additional building space. These regulations are often beneficial for the health and safety of the children (Hotz and Xiao 2011).
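The per-child staffing cost bounds discussed above follow mechanically from the minimum ratios. A minimal sketch (the $15 hourly staff cost is the assumption used in the text):

```python
STAFF_HOURLY_COST = 15.0  # assumed wage inclusive of benefits and payroll taxes

def min_staff_cost_per_child(max_children_per_staff, wage=STAFF_HOURLY_COST):
    """Lower bound on hourly staffing cost per child implied by a
    minimum staff-to-child ratio of 1:max_children_per_staff."""
    return wage / max_children_per_staff

print(min_staff_cost_per_child(3))            # infants, 1:3 (most stringent) -> 5.0
print(min_staff_cost_per_child(6))            # infants, 1:6 (most lenient) -> 2.5
print(round(min_staff_cost_per_child(7), 2))  # 59-month-olds, 1:7 -> 2.14
print(min_staff_cost_per_child(15))           # 59-month-olds, 1:15 -> 1.0
```

These are lower bounds: they cover only the mandated staff time and exclude facilities, materials, administration, and any staffing above the minimum.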
To the extent that these regulations increase safety and reduce injuries in child care settings, they have measurable societal benefits. Nevertheless, some regulations likely have little effect on children's well-being or the quality of care being provided while acting as a barrier to entry that can limit competition and increase prices (Gorry and Thomas 2017). As discussed later in this chapter, this concern exists for a range of licensed occupations in addition to child care workers. Consistent with this concern, research generally finds that child care regulations increase the cost and reduce the supply of care options. Hotz and Xiao (2011) study how changes in regulations over time affect the number of center-based care establishments. They estimate that decreasing the maximum number of infants per staff member by one (thereby increasing the minimum staff-to-child ratio) decreases the number of center-based care establishments by about 10 percent. Also, each additional year of education required of center directors decreases the supply of care centers by about 3.5 percent. Similarly, Currie and Hotz (2004) find that when States adopt more stringent education requirements for child care center directors, increase minimum staff-to-child ratios, and require more frequent inspections, the number of children enrolled in center-based care falls. Other studies focus on variation in regulations at a point in time within States across age groups—for example, determining whether States with relatively more stringent regulations for four-year-old children than infants have relatively higher costs of care for four-year-old children compared with infant care. With this approach, Blau (2007) finds that tighter regulations do not necessarily increase costs, while Gorry and Thomas (2017) find some evidence that they do increase costs. Ultimately, the regulation of child care is designed to increase the quality of care provided to children.
These quality improvements may benefit children who remain in care, but they may also increase the costs paid by parents beyond their willingness or ability to pay. Evidence for this can be seen in the shift away from center-based care and toward family care providers (also known as home-based care) after regulations on care centers are increased (Hotz and Xiao 2011). These family care providers, where children are cared for in the provider's home rather than at a center, are typically subject to less regulation and offer care at a lower cost than a center. The National Survey of Early Care and Education found that the median cost of home-based infant care was 28 percent below that for center-based care, and 19 percent lower for a four-year-old (HHS 2015). For some parents, family care providers reflect a more cost-effective way to obtain care and may offer a preferred environment for the children or greater convenience for the parent. However, to the extent that regulations are shifting more parents from center-based to home-based care, this may indicate that regulations are distorting the market and that parents are not willing or able to pay for the resulting higher costs. Furthermore, regulations designed to increase the average quality of care may not do so if parents forgo the more tightly regulated market as a result. In addition, regulations designed to increase the quality of child care centers may not actually do so if centers respond by reducing other inputs, such as teacher training, that also affect quality (Blau 2007). Thus, by loosening regulations that do not substantially affect the safety or quality of care, States may be able to reduce the cost of formal child care and increase parental work effort.

Prime-Age Employment by Race, Ethnicity, and Education

A key concern in evaluating the economy and the labor market is the extent to which certain demographic groups are consistently left behind.
In particular, black and Hispanic employment rates have consistently fallen short of those of whites. This is apparent in figure 3-11, which shows the prime-age ratio of employment to population by race. To reduce the noise in the series, we use quarterly average employment rates. Since this series began in 1994, the white prime-age employment rate has consistently been at least 4 percentage points above that for blacks, and at least 3 percentage points above that for Hispanics. Although there is further progress to be made in closing the racial and ethnic employment gaps, it is apparent that the economic growth during the current business cycle is having the largest positive effect on the employment of blacks and Hispanics. The average differences in 2018 of 4.6 percentage points between the prime-age employment rates of blacks and whites and of 3.5 percentage points between Hispanics and whites are each the smallest annual gaps recorded since the BLS began publishing prime-age employment-to-population ratios by race in 1994. Although prime-age employment-to-population ratios are not available by race before 1994, prime-age labor force participation rates are available for earlier years (see figure 3-12). The gaps in participation rates across races and ethnicities have not closed as rapidly as have the gaps in employment-to-population ratios shown in figure 3-11. This is because unemployment rates for prime-age blacks and Hispanics have both declined more rapidly than the unemployment rate for whites. Nevertheless, the average gap of 2.6 percentage points in prime-age participation rates between whites and blacks in both 2017 and 2018 was the smallest since 1983. As discussed in box 3-2, the current disparity in employment and participation rates between white and black prime-age adults is almost completely attributable to a racial employment gap among males rather than females.
Similar results are apparent when considering the recent trends in prime-age employment-to-population ratios by education level, because those with

[Figure 3-11. Employment-to-Population Ratio for Prime-Age Adults by Race, 1994–2018. Line chart (percent, non–seasonally adjusted, through 2018:Q4) for whites, Hispanics, and blacks. Source: Bureau of Labor Statistics. Note: Prime-age adults are those age 25–54 years. The series for Hispanics starts in 1994:Q4. The BLS does not publish prime-age employment-to-population ratios for any race before 1994. Shading denotes a recession.]

[Figure 3-12. Labor Force Participation Rate for Prime-Age Adults by Race, 1972–2018. Line chart (percent, non–seasonally adjusted, through 2018:Q4) for whites, blacks, and Hispanics. Source: Bureau of Labor Statistics. Note: Prime-age adults are those age 25–54 years. The series for Hispanics starts in 1994:Q4. Shading denotes a recession.]

Box 3-2. Employment Rates among Black Men

The ratio of employment to population is an important measure of the share of the civilian noninstitutional population who are employed, and it combines information from both the labor force participation and unemployment rates. As figures 3-i and 3-ii show, there has historically been a wide gap in employment rates between black and white prime-age adults. However, a notable aspect of this disparity is that it currently appears to be driven primarily by the employment disparity between males of the two races rather than between females. For instance, while the employment-to-population ratio for prime-age white males in the fourth quarter of 2018 was 87.7 percent, for prime-age black males it was 9.4 percentage points lower, at 78.4 percent.
For females, conversely, the prime-age employment-to-population ratio had been higher for black females than for white females from September 2016 until the fourth quarter of 2018. In the fourth quarter of 2018, the prime-age employment-to-population ratio for black females was within 0.3 percentage point of that for white females. Numerous researchers have explored the black/white employment disparity, trying to better understand the factors driving this gap. This research suggests that the employment gap results from multiple sources (Bound and Freeman 1992), with common explanations including differences in education or skills (Wilson 2015; Moss and Tilly 1996; Neal and Johnson 1996), labor market discrimination (Bertrand and Mullainathan 2004; Darity and Mason 1998; Shulman 1987), and the "first fired, last hired" phenomenon, which asserts that black workers are hit much harder by recessions and take longer to recover from economic downturns, as noted by Couch and Fairlie (2010) and Weller (2011). (Couch and Fairlie observe, however, that the decline in unemployment late in a business cycle comes more from a reduction in the rate of job losses than from black workers actually being the last hired.)

[Figure 3-i. Employment-to-Population Ratio for Prime-Age Males by Race, 1994–2018. Line chart (percent, non–seasonally adjusted, through 2018:Q4) for white and black males. Source: Bureau of Labor Statistics. Note: Prime-age males are those age 25–54 years. Shading denotes a recession.]

[Figure 3-ii. Employment-to-Population Ratio for Prime-Age Females by Race, 1994–2018. Line chart (percent, non–seasonally adjusted, through 2018:Q4) for white and black females. Source: Bureau of Labor Statistics. Note: Prime-age females are those age 25–54 years. Data are non–seasonally adjusted. Shading denotes a recession.]
Especially because the racial disparity is driven by male, rather than female, employment, an additional explanation is the lasting effects of higher incarceration rates among black males (Western and Pettit 2000, 2005; Holzer, Offner, and Sorensen 2005; Pager, Western, and Sugie 2009; Neal and Rick 2014). In 2016, close to 70 percent of people incarcerated were racial or ethnic minorities, with over one-third being black (Carson 2018). Black males are six times more likely to be incarcerated than white males (figure 3-iii). According to the Sentencing Project (2018), about 1 in 12 black males in their 30s is in prison or jail on any given day. Those who are incarcerated are not included in employment statistics; but if those with a criminal record are less likely to find employment after their release, these high incarceration rates could exacerbate the lower employment rates among black males. Previous research has found that this is the case. Bhuller and others (2016) find that spending time in prison has a negative effect on employment outcomes after release. They assert that incarceration may result in depreciated human capital and limit employment opportunities due to societal stigma. Western and Pettit (2000) observe that individuals with criminal records have significantly fewer employment opportunities and lower earnings. They go on to say that it is impossible to truly understand patterns of employment without also considering incarceration rates. Criminal justice reform has been a leading priority of the Trump Administration. In March 2018, the President issued an Executive Order (White House 2018a) that would bring numerous Federal agencies together, helping to identify ways to improve the reentry of formerly incarcerated individuals into the labor force, in addition to reducing recidivism and improving public safety overall.
The Administration has also worked with Congress to pass the FIRST STEP Act, which the President signed into law on December 21, 2018. This legislation will help strengthen reentry programs for Federal prison inmates while reducing recidivism. For further discussion of recidivism-reducing programs in the United States, see the 2018 report released by the CEA (2018d). The Trump Administration has also emphasized policies that extend the period of economic growth, recognizing that the "first fired, last hired" phenomenon suggests that the black workers who are less likely to find work early in economic expansions disproportionately benefit from extended periods of hiring. Consistent with this philosophy, the disparity in the black and white employment-to-population ratios has been steadily declining. In 2018, the average black/white employment gap among prime-age adults reached 4.6 percentage points, the lowest since the BLS began publishing prime-age employment-to-population ratios by race in 1994. Among individuals of all ages, the average gap in 2018 was even lower, at 2.4 percentage points, also representing a historical low.

[Figure 3-iii. Rate of Imprisonment, by Gender and Race, 2016 (per 100,000): black males, 2,415; white males, 400; black females, 96; white females, 49. Source: Bureau of Justice Statistics.]

less education who traditionally were the least likely to be working have made the greatest gains in employment over the past two years. In the fourth quarter of 2016, 86.2 percent of prime-age adults with at least a bachelor's degree were employed, relative to 69.5 percent of those with a high school degree or less (figure 3-13). But since the end of 2016, gains in prime-age employment have been most prevalent among those with less education.
As of the third quarter of 2018, the employment rate of those with a bachelor's degree is essentially unchanged, falling by 0.1 percentage point, while the employment rate for those with a high school degree or less has risen by 2.3 percentage points and the employment rate for those with some college, but no bachelor's degree, has risen by 0.4 percentage point. The relative rise in employment rates for those with less education, and the rise among prime-age black adults, mirror the gains of the latter years of the late 1990s business cycle, when there were notable increases in employment rates among these groups. This is consistent with research showing that lower-skilled and marginalized workers are often hit hardest during economic downturns (Kaye 2010; Elsby, Hobijn, and Sahin 2010) and that unemployment gaps between black and white workers narrow late in expansions, near business cycle peaks (Couch and Fairlie 2010). This historical pattern illustrates the importance of continued pro-growth policies that increase the productivity of workers and encourage further hiring of these workers. Despite these recent improvements, the substantial gap in employment rates between those with a bachelor's degree and those with a high school degree or less highlights the need to rethink and improve our approaches to training workers so that more adults can gain the skills desired by employers in the current economy. This includes both improving the alignment of workers' skills with those sought by employers and evaluating regulations that mandate additional training for workers that employers may not otherwise require.

Increasing Workers' Skills and Closing Skill Mismatches

Even during periods with a strong labor market, some low-skill workers will be unable to find work if they lack the skills currently required by hiring firms. Other workers may simply opt against even looking for work if they perceive a mismatch between their current skills and those employers seek.
[Figure 3-13. Employment-to-Population Ratio for Prime-Age Adults by Education Level, 1992–2018. Line chart (percent, through 2018:Q3) for those with a high school degree or less, some college or an associate degree, and a bachelor's degree. Sources: Current Population Survey; CEA calculations. Note: Prime-age adults are those age 25–54 years. Data are non–seasonally adjusted. Shading denotes a recession.]

For some workers, the skills mismatch occurs purely because they lack skills required across a range of industries. According to an international survey of adult skills conducted by the Organization for Economic Cooperation and Development (OECD 2013), a somewhat larger share of adults in the United States have low mathematics and problem-solving skills than in other OECD member countries, while literacy skills in the United States are similar to those in other countries. For others, however, skill mismatches occur because they were trained in an industry where the growth in employment has failed to keep up with the overall population. Figure 3-14 shows employment growth by industry since 1979, relative to total adult population growth. During this period, although several primarily service industries, including education and health services as well as professional and business services, have expanded faster than the U.S. population has grown, others have exhibited slower growth. For workers trained in these slower-growing industries, improvements in their employment prospects necessitate either a revitalization of their current industry or retraining to allow them to transition to industries where employment is growing more rapidly. This is also consistent with the latest data on job openings from the BLS, which found that the industries with the highest vacancy rates in 2018
were largely the industries that have been exhibiting the fastest employment growth in recent decades (figure 3-15).

[Figure 3-14. Employment Growth by Industry Relative to Total Adult Population Growth, 1979–2018. Bar chart of average growth per year (percent) in female and male employment by major industry, compared with total population growth. Sources: Bureau of Labor Statistics; CEA calculations.]

In some instances, employers may address any skills gaps among their workers by offering training to new employees. This is especially true in a tight labor market, when there are relatively few people looking for work who have the skills necessary to do the job. However, employers may be reluctant to undertake this investment if they are concerned that after training a worker, the firm will lose him or her to a rival firm. In considering these concerns about "poaching," economists often distinguish between general and specific human capital. General human capital includes the set of skills workers obtain that can be applied at multiple firms, whereas specific human capital is more narrowly applicable to a single firm or a narrow set of firms. For example, learning to operate a proprietary computer system would be specific human capital, but the ability to write in a ubiquitous programming language would constitute general human capital. From an employer's perspective, spending on specific human capital is a safer investment, because it is less likely to give workers who receive the training increased opportunities for outside jobs. However, not all skill gaps can be bridged with specific human capital, which suggests that employers individually may not
Job Opening Rates by Industry, 2018:Q4 Mining and logging Total job opening rate Construction Manufacturing Wholesale trade Retail trade Transportation, warehousing, and utilities Information Financial activities Professional and business services Education and health services Leisure and hospitality Other services Government 0 2 4 6 Percent (non–seasonally adjusted) Source: Bureau of Labor Statistics. have the financial incentive to bridge the gap between the skills they require to be globally competitive and the skills the U.S. workforce possesses. If employers will not pay to train workers in new skills, workers may engage in training themselves. According to the OECD (2013), although the United States does better than most countries in employing low-skill workers, the returns to additional skills are particularly strong in the United States. This suggests that it would be advantageous for many workers to increase their own skills, even in the absence of employer-provided training. There is evidence that Americans do engage in more adult learning than is seen in many other countries—although adult learning rates in the United States are much higher for those who already have at least basic levels of skills than among the lowest-skilled adults. According to the OECD (2013), 40 percent of low-skilled adults participated in adult education in the year before their study, compared with 70 percent of higher-skilled adults who did so. Although most of the benefits from reskilling programs accrue to workers and their employers, in some instances public participation in the reskilling of workers may be appropriate, for several reasons. First, although workers who are successfully reemployed will reap the majority of the resulting financial benefits, these benefits do not accumulate solely to workers. 
The public also stands to benefit from successful reemployment, both because it increases public tax revenues and because it reduces reliance on social safety net programs.15 Moreover, persistent unemployment can subsequently affect local communities, including a potential link to opioid use, and have intergenerational effects (e.g., decreased income) on the children of displaced workers (Charles, Hurst, and Schwartz 2018; Oreopoulos, Page, and Stevens 2008; Stevens and Schaller 2010). Many of these same considerations drive other public workforce investments: ensuring access and funding for students through grade 12, partially financing postsecondary education, and providing high-quality metrics to guide students to successful postsecondary programs. Skills training, in some ways, fits nicely into the portfolio of public investments in education and workforce development already in place. Figure 3-16 compares spending by the U.S. government on labor market programs with spending by other countries. This measure includes other labor market policies (e.g., public expenditures on retraining as well as on job counseling and job search assistance, as defined by the OECD) and not just skills training. Nonetheless, it suggests that the United States spends relatively little on these programs compared with most other developed countries, especially when measured as a share of GDP.16 That adult learning rates are higher in the United States than in many other countries while public expenditures on adult education are lower suggests that much of this adult learning is privately funded. Figure 3-17 shows that this is the case. During childhood, public education constitutes the majority of education spending. Education spending among adults is lower overall than it is for children.
But this is especially true for public education spending, given that most of the spending on education for those age 30 and older comes either from private sources or from training paid for by employers. The Trump Administration has emphasized the need to redouble the private sector's involvement in increasing the skill levels of the American workforce. Through the Pledge to America's Workers, the President has secured pledges from American businesses to create enhanced career and training opportunities for 6.5 million workers. Additionally, through the National Council for the American Worker, the Administration intends to develop a national workforce strategy that increases the efficiency and effectiveness of Federal workforce programs and better cooperates with the private sector to equip workers with the skills desired by employers (see box 3-3).

15. However, we note that workers forgo earnings as they acquire skills, which means that for a time the Treasury forgoes tax revenue and might spend more on safety net programs. This also means that minimum (cash) wage laws may act as a barrier to skill acquisition (Hashimoto 1982; Neumark and Wascher 2003).
16. Of course, funding decisions should be based on cost-benefit assessments as well as an understanding of the gaps in private labor market expenditures, and researchers have found that several European training programs do not pass the cost-benefit test (Kluve 2010; Card, Kluve, and Weber 2010).

Figure 3-16. Public Expenditures on Active Labor Market Programs, 2016 (percentage of GDP)
Source: Organization for Economic Cooperation and Development.

Figure 3-17. Expenditures on Education and Skills Training by Age and Source, 2017 (expenditures per capita)
Sources: Organization for Economic Cooperation and Development; Census Bureau; Bureau of Economic Analysis; Georgetown Center on Education and the Workforce; CEA calculations.

Box 3-3. The President's National Council for the American Worker
As technology advances and economies around the globe become more interconnected, the skills demanded by employers change. This may in part explain why there are 1 million more job openings than job seekers in the U.S. economy. It is therefore vital for workers to keep pace with change and adapt or update their skill sets to meet the needs of the labor market, enabling the employment of every American who desires to work. In July 2018, the President signed an Executive Order establishing the President's National Council for the American Worker. This council's goal is to develop a national strategy to ensure that American workers have access to innovative education and job training or retraining opportunities that will equip them to succeed in the global economy. The Federal government currently has over 40 grant programs that support workforce development. The new council seeks to make these programs more effective, innovative, and results-driven. The council also helps promote working partnerships between American businesses, workers, and educational institutions.
Information gaps can hinder the economy and limit opportunities for American workers; therefore, the council intends to link all participants in the economy, informing them about what jobs are available, where they are located, what skills are required to succeed, and how best to obtain these skills. The Executive Order also provides for the formation of an advisory board made up of leaders in education, philanthropy, State government, and the private sector. Together with the Administration, these leaders are working to implement successful job-training programs, including both formal and informal educational opportunities. The Administration has also established the Pledge to America's Workers, which calls upon businesses to commit to investing in America's workers. Companies have pledged to create enhanced career and training opportunities for more than 6.5 million Americans through a variety of tried-and-true methods, such as apprenticeship programs and on-the-job training. The President's National Council for the American Worker is devoted to helping every American worker obtain the skills necessary to succeed and to ensuring that every business's needs are met, guaranteeing that every American benefits from the prosperous and booming economy that American ingenuity has built.

In considering how to encourage more Americans to seek additional skills training, it is important to ask why some individuals do not seek out further training, despite the positive financial returns it is likely to provide. In some instances, a lack of information about the jobs available in the local labor market, the skills required for these jobs, and the training programs that can best equip workers with these skills may be to blame. The expansion of online job aggregators has greatly eased the search for job openings, but it has not necessarily helped workers determine the skills required for these jobs and the specific training steps that are needed.
Closely related to this problem is uncertainty about which skills will be necessary to remain competitive in the labor market of the future. Another reason some workers may not engage in skills training is the real or perceived cost of postsecondary education, which serves as a barrier to individuals who are not already successful in the labor market. This is a particular concern for those who are budget-constrained and unable to fund training efforts while maintaining the financial stability of their households. Yet without making this investment, the likelihood that these individuals will find a way into the labor market is reduced. Predictions about the growth in jobs linked to automation, which require advanced programming and information technology skills, suggest that workers with fewer years of education (who are likely to have lower incomes) may have fewer job opportunities in the future (Manyika et al. 2017; OECD 2018b; PwC 2018). Those who are unemployed may find it even harder to reenter the job market if they do not have technological skills, although several State and Federal programs are targeted to help them develop the necessary skills. The State of New Jersey, for example, allows the unemployment benefits of individuals who are working to complete training programs to be extended after those benefits would otherwise have expired. Additionally, the Reemployment Services and Eligibility Assessment Program at the Department of Labor, which was funded through the Bipartisan Budget Act of 2018, provides States with funding for programs that deliver reemployment services to help unemployed adults develop marketable job skills and reenter the job market more quickly. For individuals who cannot engage in skills training programs because of financial constraints, some low-cost or no-cost models support training and retraining without imposing additional financial burdens.
In particular, apprenticeships and other on-the-job learning opportunities provide a financial bridge: workers earn a wage during their training and do not face the personal expenditure outlays and lost income associated with enrolling in formal education. Apprentices and those participating in other types of earn-and-learn opportunities undertake productive work for an employer, earn wages, receive training primarily through supervised earn-and-learn training models, and engage in related classroom instruction. Moreover, apprenticeships reduce the need for individuals to figure out on their own which skills are most desired by employers, because employers help design these programs based on in-demand knowledge and skills. Additionally, apprenticeships have been shown to provide a strong boost to workers' future labor market outcomes (Neumark and Rothstein 2005; Lerman 2014). Despite these advantages, apprenticeships make up less than 0.5 percent of the U.S. labor force, compared with roughly 2 to 4 percent in Australia, Britain, Canada, and Germany. The President highlighted the benefits of apprenticeships and work-based learning in his June 2017 Executive Order to expand apprenticeship, including by establishing new Industry-Recognized Apprenticeship Programs (IRAPs) developed by third parties. This Executive Order also directed Secretary of Labor Alexander Acosta, in partnership with the Secretary of Commerce and the Secretary of Education, to establish the Task Force on Apprenticeship Expansion (2018) "to identify strategies and proposals to promote apprenticeships, especially in sectors where apprenticeship programs are insufficient." The task force met for almost a year, and in May 2018 it published its report, which makes a number of recommendations.
The Department of Labor is now actively working to implement the task force's recommendations and to set up a new IRAP system, and other Federal agencies are doing their part to support these recommendations as well. Another approach to increasing access to reskilling programs is to increase the flexibility of unemployment insurance (UI) benefits for those seeking additional skills. Because UI benefits are conditional on a displaced worker not being rehired, they may discourage some recipients both from quickly finding new employment and from enrolling in an apprenticeship program, because such programs must pay wages. Apprenticeship programs include well-planned work-based and classroom learning. For this reason, it may be appropriate to allow apprentices to continue receiving a portion of the UI benefits they would otherwise receive, to offset earnings lost while they are learning. This would further incentivize individuals to seek and participate in apprenticeships to learn new skills after a layoff. Despite the logic of extending some or all UI benefits during periods of retraining, there is scant empirical evidence on the benefits of such programs. The State of Georgia launched a now-defunct program called GeorgiaWorks, which allowed workers to receive full UI benefits while participating in unpaid apprenticeship programs. The success of GeorgiaWorks, however, is unclear because the program did not include a well-designed evaluation component. The Trade Adjustment Assistance Program, which uses a similar model to allow the collection of UI benefits while receiving job training, was found to be largely unsuccessful (Schochet et al. 2012; Decker and Corson 1995). This highlights the importance of structuring apprenticeships so that workers not only learn new skills but also learn skills that will be valued in the workforce and lead to successful employment opportunities and careers.
Reforming Occupational Licensing

For a substantial share of positions, many low-skill workers must not only demonstrate to an employer that they have the requisite skills for a job but also obtain a professional license. In the first half of 2018, just under one-fourth of all workers reported having an active professional certification or license. These licenses are often a requirement for employment: over 80 percent of those with a license say that the license is required for their job. The share of jobs requiring occupational licensing has risen sharply since the 1950s, when only 5 percent of all jobs were covered by licensing laws (Kleiner and Krueger 2010). The traditional justification for occupational licensing is to protect consumer health and safety, especially in occupations where the quality of a service provider cannot be easily evaluated by consumers (Akerlof 1970; CEA, Department of the Treasury, and Department of Labor 2015; Kleiner 2000; Shapiro 1986). Given this, it is perhaps unsurprising that healthcare practitioners are the most frequently licensed, with about three-fourths of these workers reporting that they have a license. Nevertheless, a sizable share of workers in a wide range of non-healthcare occupations report having a professional license—including two-thirds of those working in legal professions; over 30 percent of financial specialists; and over 20 percent of installation, maintenance, and repair workers (figure 3-18). These licenses are also not limited to highly skilled workers within each occupation. As illustrated in figure 3-18, in many occupations the share of workers who have a professional license is similar when considering only those workers without a bachelor's degree. If all these licenses were necessary for the health and safety of consumers, one would expect some uniformity across States in the occupations requiring licenses.
However, a 2012 study showed that the share of low-wage occupations requiring licenses ranged from 24 percent in Wyoming to 70 percent in Louisiana (Carpenter et al. 2012). One potential explanation for the difference in licensing requirements is that States simply weigh the relative risks of unlicensed workers differently. If this were the case, then States with greater licensing requirements would license the same occupations as less regulated States while simply adding additional occupations. Instead, however, there are idiosyncrasies in the occupations that States license. For example, despite having the lowest share of low-wage occupations requiring a license, Wyoming is one of just 21 States to have licensing requirements for travel guides (Carpenter et al. 2012). Although it is possible that State-specific needs lead to these idiosyncratic licensing requirements, this suggests that other factors are likely contributing to which occupations States choose to license. These occupational licenses come at a significant cost to the U.S. labor market by acting as a barrier to entry for new workers seeking to join a profession and, in turn, artificially raising wages in the occupation for incumbent workers. Some State licensing boards have also engaged in practices that result in unfair competition and antitrust violations. For example, in the case of North Carolina State Board of Dental Examiners v. Federal Trade Commission (2015), the lack of oversight by the State of North Carolina allowed dentists to successfully lobby to prevent nondentists from performing tooth-whitening procedures, despite the relatively low risk of such procedures for patients.

Figure 3-18. Workers with a Professional License or Certification, by Occupation and Education Level, 2017
Sources: Current Population Survey; CEA calculations.

Though there is no clear consensus on the precise effect of licenses on compensation, recent estimates suggest a wage premium for licensed workers ranging from about 7.5 percent (Gittleman and Kleiner 2016; Gittleman, Klee, and Kleiner 2018) up to 15 percent (Kleiner, Krueger, and Mas 2011). If the wage premium from occupational licensing is entirely due to economic rents, Kleiner, Krueger, and Mas (2011) estimate that with a labor demand elasticity of 0.5, the 15 percent wage premium would imply 2.8 million fewer jobs in these occupations because of licensing. Applying a similar calculation to the lower-end estimate, a 7.5 percent wage premium would imply about 1.4 million fewer jobs if the licenses do not reflect additional human capital and increased productivity among these workers. State occupational licensing also reduces worker mobility, because many licenses cannot be transferred from one State to another.
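The job-loss arithmetic above can be sketched directly: with a constant labor demand elasticity, the implied employment reduction is roughly the elasticity times the wage premium times total licensed employment. The minimal sketch below assumes an employment base of about 37 million licensed workers; this base is an assumption chosen so the calculation reproduces the 2.8 million and 1.4 million figures cited, since the text does not state the base itself.

```python
# Back-of-the-envelope employment effect of occupational licensing.
# Assumption (not stated in the text): roughly 37 million licensed U.S. workers.
LICENSED_WORKERS = 37_000_000
DEMAND_ELASTICITY = 0.5  # labor demand elasticity used by Kleiner, Krueger, and Mas (2011)

def jobs_lost(wage_premium: float) -> float:
    """Implied employment reduction if the licensing wage premium is pure economic rent."""
    return DEMAND_ELASTICITY * wage_premium * LICENSED_WORKERS

for premium in (0.15, 0.075):
    print(f"{premium:.1%} premium -> about {jobs_lost(premium) / 1e6:.1f} million fewer jobs")
```

Under these assumptions, the 15 percent premium maps to roughly 2.8 million fewer jobs and the 7.5 percent premium to roughly 1.4 million, matching the range discussed above.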
Recognizing that high migration is a strength of the U.S. labor market relative to other countries and a key way that workers adjust to labor market shocks, economists have expressed concern about the decline in geographic mobility in the United States in recent decades (for an overview of these concerns, see Molloy, Smith, and Wozniak 2017). These declines are documented by Molloy, Smith, and Wozniak (2011) and by Kaplan and Schulhofer-Wohl (2017), who each find that interstate mobility has reached a 30-year low.17 Johnson and Kleiner (2017) suggest that occupational licensing has exacerbated this decline. On the basis of their estimates, interstate migration is 36 percent lower for those working in occupations with State-specific licensing than for those in other occupations. They also estimate that the rise in licensing from 1980 to 2015 can explain between 3 and 13 percent of the overall decline in interstate mobility over this period. The effects of licensing on interstate mobility observed by Johnson and Kleiner are consistent with a 2015 report by the CEA that found substantially lower interstate mobility rates for workers in highly licensed occupations relative to those in less licensed ones (CEA, Department of the Treasury, and Department of Labor 2015). Occupational licenses also impose an additional burden on military spouses, who move much more frequently than the general population and potentially face relicensing requirements with each interstate move. The 2016 American Community Survey indicated that working-age military spouses were seven times as likely to move across State lines as the civilian noninstitutionalized working-age population in general (CEA 2018d).
Also, though the 690,000 military spouses represent a relatively small share of the overall working population, military spouses are more likely than the general population to work in an occupation requiring a license: 35 percent of military spouses in the labor force worked in occupations requiring a license or certification (DOD 2016; Department of the Treasury and DOD 2012).

Employment Experiences in Rural Areas

Beyond differences in employment patterns by demographic characteristics, differences in employment patterns appear across geographies, including whether a community is in an urban or rural environment. Although there are several ways that communities can be defined as urban or rural, for the purposes of this chapter we do so based on whether or not the county is located in a metropolitan statistical area. In general, from the early 1980s through the early 2000s, prime-age employment patterns in urban and rural areas largely followed similar trajectories. Between the fourth quarter of 1980 and the fourth quarter of 2007, rural employment rates for prime-age adults rose by 5.3 percentage points—just slightly more than the 5.1-percentage-point rise in urban employment rates during this time (figure 3-19). These similar employment patterns diverged, however, after the Great Recession. Though both urban and rural employment fell sharply in the Great Recession, prime-age urban employment rates have experienced a nearly complete recovery and are approaching their prerecession level from the end of 2007.

17. Using CPS data, for example, Kaplan and Schulhofer-Wohl (2017) find that about 1.5 percent of people moved across State lines in 2010, down from closer to 3 percent in 1980 and over 3 percent in 1990.
This is despite the fact that some urban areas have restrictive zoning, which increases the costs of housing and real estate and limits employment growth (OECD 2018a). In contrast to the experience in urban areas, prime-age rural employment rates have not shown the same recovery. In rural areas, as of the third quarter of 2018, the prime-age employment-to-population ratio had risen by only 2.9 percentage points since the end of 2011, and it remained 2.4 percentage points below where it was at the end of 2007. This divergence is even greater if one looks at all adults, rather than only those of prime working age, because the labor force is aging faster in rural areas (USDA 2017c). There are several reasons why employment patterns in urban and rural areas may diverge. One is a purely technical explanation: every 10 years, the Census Bureau reclassifies nonmetropolitan counties that have grown as large as metropolitan ones. Goetz, Partridge, and Stephens (2018) show that population growth in counties considered rural in 1950 is more than double that of counties considered urban in 1950. The supposedly slow historical population growth of rural areas results from the reclassification of fast-growing counties as urban—so, using current definitions of urban and rural, it appears that rural areas have grown more slowly. They note that one analyst likened this to removing the best team from a sports league each year and then wondering why the remaining teams are not performing as well as before. Although the reclassifications are based on population growth, they could also influence observed economic trends if the rural areas with stronger economic performance are more likely to be reclassified. However, these reclassifications result in an implicit trend break only once each decade (when the reclassification occurs) and thus should not alter trends within decades, when the definitions are stable.
Hence, reclassification cannot explain why the employment of prime-age adults has lagged in rural areas since the Great Recession after decades of similar prime-age employment rates in the two types of communities. A second reason relates to differences in industry composition across urban and rural areas. Although manufacturing represents only 6.2 percent of all urban employment, it constitutes a much larger share, 10.8 percent, of employment in rural areas. In fact—after wholesale and retail trade, education and health services, and public administration—manufacturing is the fourth-largest industry for rural employment (figure 3-20). As such, the declines in manufacturing employment in recent decades have had a disproportionate effect on rural communities.

Figure 3-19. Employment-to-Population Ratio for Prime-Age Adults by Geography, 1976–2018
Sources: Current Population Survey (CPS); CEA calculations.
Note: Prime-age adults are those age 25–54 years. These data aggregate monthly CPS surveys within a quarter. Metro area definitions are subject to change over time. Data are non–seasonally adjusted. Shading denotes a recession.

A third potential explanation relates to differences in the characteristics of urban and rural populations. For example, education levels in rural areas have historically been lower than in urban areas, and this gap has been growing. In 2016, 19 percent of adults in nonmetropolitan areas had a bachelor's degree, versus 33 percent in urban areas—a gap of 14 percentage points (USDA 2018c).
Earlier, in 2000, this gap was somewhat smaller, at 11 percentage points: 26 percent of adults in urban areas had bachelor's degrees, versus 15 percent in rural areas (figure 3-21).18 Given the substantial differences in employment rates by education level discussed earlier in this chapter, the growing educational divide between urban and rural adults can further exacerbate their divergent employment trajectories. One potential reason for the growing education divide is the out-migration of young adults. The exit of college-educated young adults is often identified by policymakers as an important concern for rural areas. This indicates that the problem may not be so much the education level of rural youth but their retention once they complete higher education. Reichert, Cromartie, and Arthun (2014) found that geographically challenged rural areas are particularly dependent on young adults who decide to move back to their rural communities.

18. One potential reason for the growing education divide is that education seems to earn a higher return in urban areas than in rural ones. A recent analysis by the U.S. Department of Agriculture (USDA 2017e) found that adults in urban areas with a bachelor's degree earn $70,146, versus $51,996 in rural areas.

Figure 3-20. Industry Employment by Geography, 2017
Sources: Bureau of Economic Analysis; CEA calculations.
Note: Metropolitan area definitions are subject to change over time.
Fiore and others (2015) found that the cost of living and the strength of the local economy were of primary importance in persuading rural youth in Iowa to return. In a survey of young adults sampled across the rural United States, Reichert, Cromartie, and Arthun (2014) find that the decision to return to the rural community where a person was raised is tied closely to place and to the personal ties maintained with the home community. Returnees sometimes were able to return because they could work remotely. They also returned to become part of both farm and nonfarm family businesses.

Figure 3-21. Educational Attainment in Rural versus Urban Areas, 2000 and 2016
Source: Department of Agriculture.
Note: Metro area definitions are subject to change over time. Data may not sum to 100 due to rounding. For full details, see USDA (2018c).

Finally, though not directly related to the growing employment gap between urban and rural areas, an important component of rural economies that cuts across many sectors is self-employment, or entrepreneurship. Numerous analyses have shown the importance of entrepreneurship for the economic health of rural areas. Rupasingha and Goetz (2013) provide strong empirical evidence that higher self-employment rates in rural counties are associated with increases in income and employment and with reductions in poverty rates. Self-employment is also particularly important for rural communities because the rate of self-employment in rural areas exceeds that in urban environments (Wilmoth 2017). Goetz and Rupasingha (2009) found that greater self-employment growth was associated with higher shares of construction and services employment.
Also, self-employment and entrepreneurship are enhanced when rural areas are close to growing small metropolitan areas (Tsvetkova, Partridge, and Betz 2017). Larger increases in self-employment growth were also associated with more females in the labor force, while more retail employment in a county was associated with smaller increases in self-employment over time. Goetz and Rupasingha (2009) also used the Economic Freedom of North America Index, which measures the extent of restrictions on economic freedom, to evaluate the effect of government policies on proprietorships. Not surprisingly, more economic freedom is associated with higher rates of business formation. Self-employment opportunities go hand in hand with the return of young adults who left to pursue higher education. Reichert, Cromartie, and Arthun (2014) report that those who return to rural communities contribute to economic growth, often through entrepreneurship. Policies that promote small businesses and self-employment could encourage the return of young adults to their home communities. Their return then enhances economic growth and strengthens rural economies, which can further encourage the return of more young adults. Farming employs a shrinking share of the labor force in rural areas. In 2015, 6 percent of rural employment was directly in the farming sector. Agriculture and related industries made up 11 percent of U.S. employment in 2017, though not all of it in rural areas. With just over 2 million farms in the United States, many rural residents live on farms (USDA 2018b). Small family farms, often operated alongside a nonfarming occupation, make up nearly 90 percent of the 2 million farms but produce only about one-fourth of the output (Burns and Macdonald 2018).
Though farming makes up a smaller share of the rural labor force than it once did, the rural manufacturing advantage comes in part from closer proximity to raw materials, including those grown or raised on farms. For example, food manufacturing is prevalent in rural areas because of the proximity of raw products to process—making up 18 percent of all rural manufacturing employment. Similarly, wood products manufacturing makes up 7 percent of rural manufacturing employment, which is consistent with the closer proximity to the inputs into this manufacturing process. However, between 2001 and 2015 the decline in rural manufacturing employment was widespread: during this period, employment in every manufacturing sector except tobacco and beverage manufacturing declined in rural areas (USDA 2017d). This highlights the importance, for the economic health of rural communities, of policies targeted at revitalizing manufacturing generally.

Policies to Enhance Rural Communities

Recent policies of the Trump Administration have been particularly beneficial to these rural communities. These policies include efforts to revitalize industries that are disproportionately located in rural communities, to support small businesses and entrepreneurship, and to promote economic development in less developed areas through Opportunity Zones, many of which are in rural communities. One component of revitalizing rural areas involves restoring the manufacturing industries that have been languishing and losing jobs in recent decades. Although manufacturing jobs are important for both urban and rural communities, the larger share of rural employment in manufacturing industries means that these jobs are particularly important for rural communities (USDA 2017a). Reflecting the priority that this Administration has placed on revitalizing manufacturing, over the past two years manufacturing has experienced substantial growth.
As seen in figure 3-22, in 2018 manufacturing employment grew by just over 2 percent, the fastest annual growth since 1994. This acceleration in manufacturing is part of a broader pickup in employment in goods-producing industries generally. Goods-producing employment grew by at least 1 percent each year from 2011 until 2015, but in 2016 this growth stalled and the sector grew by only 0.4 percent. In 2017, goods-producing employment gains accelerated again, and the acceleration continued into 2018 (figure 3-23). In 2018, goods-producing employment rose by 3.2 percent—its second-fastest annual growth rate since 1984.

[Figure 3-22. Manufacturing Employment Growth, 1980–2018 (percent, annual change). Sources: Bureau of Labor Statistics; CEA calculations.]

A second set of policies that benefits many areas, but especially rural economies, is the facilitation of entrepreneurship through proprietorships and self-employment. One-sixth of all self-employed adults live in rural communities, and a larger share of the rural population (6.7 percent in 2016) is self-employed than is the case in suburbs (6 percent) or center cities (5.7 percent) (Wilmoth 2017). Consequently, policies that encourage the growth of small businesses and benefit self-employed entrepreneurs have the potential to disproportionately benefit rural communities. A major objective of the Tax Cuts and Jobs Act of 2017, as discussed in chapter 2, is to facilitate the success of entrepreneurs. Self-employed workers and pass-through entities, which make up the majority of small businesses, benefit from the act’s lower individual tax rates. Most also qualify for the new 20 percent deduction for pass-through entities and will further benefit from the expanded Section 179 deduction for the purchase of business equipment.
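The growth rates cited above are simple year-over-year percent changes in payroll employment. As a purely hypothetical illustration of the arithmetic (the employment levels below are round numbers chosen only for the example, not actual Bureau of Labor Statistics figures):

```python
def annual_growth_pct(level_start, level_end):
    """Percent change between two year-end employment levels."""
    return (level_end / level_start - 1.0) * 100.0

# Round, illustrative year-end payroll levels (thousands of jobs) --
# not actual BLS data.
print(round(annual_growth_pct(12_600, 12_852), 1))  # 2.0 (manufacturing-style example)
print(round(annual_growth_pct(21_000, 21_672), 1))  # 3.2 (goods-producing-style example)
```

The same computation, applied to actual December-to-December payroll levels, yields the annual growth rates plotted in figures 3-22 and 3-23.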
A 2018 survey of small business owners by the National Federation of Independent Business (NFIB 2018b) indicates that 87 percent of small business owners recognize that the Tax Cuts and Jobs Act will have a positive impact on the economy.

[Figure 3-23. Goods-Producing Employment Growth, 1980–2018 (annual change, percent). Sources: Bureau of Labor Statistics; CEA calculations.]

Given the finding that less restrictive business environments help entrepreneurs, reducing regulatory burdens is similarly important. According to the NFIB’s (2018a) survey of small businesses, its Small Business Optimism Index has remained at near-record high levels since the Trump Administration came into office. The NFIB cites the unburdening of small businesses from taxes and regulations as a factor in this surging optimism. In addition to supporting small businesses by removing regulatory barriers and reducing the marginal tax rates of the self-employed and pass-through businesses, the Trump Administration is also incentivizing improvements in rural infrastructure. One such investment that can enhance growth in rural areas is increased high-speed, high-capacity Internet access (USDA 2017b). Kim and Orazem (2016) found that rural firms are 60 to 101 percent more likely to locate in ZIP codes with good broadband access. Their study, which focuses on start-up firms in rural areas, emphasizes the importance of providing adequate infrastructure to enhance the location and success of entrepreneurs. Although improved Internet access can benefit a range of communities, their study suggests that good broadband access most benefits rural areas that are close to urban areas or that have higher populations. Good broadband access enables

Box 3-4.
Strengthening Local Economies through Opportunity Zones

The Tax Cuts and Jobs Act of 2017 included a provision that offers tax incentives for private investment in distressed areas designated by State governors as Opportunity Zones. Under the law, taxpayers who invest their unrealized capital gains in Opportunity Zones, via so-called Opportunity Funds, can defer taxes on these gains for as long as the gains remain in Opportunity Funds (but no later than the end of 2026). In addition, taxpayers can avoid paying a portion of the tax on the original capital gain, depending on how long they keep the gain in an Opportunity Fund. And they can avoid all taxes on the capital gains that accrue on the Opportunity Fund investment itself (above the original capital gain) if they keep the investment in the Fund for at least 10 years. Governors designated Opportunity Zones in their States in early 2018, with their choices finalized by the U.S. Treasury in June 2018. Of the roughly 75,000 census tracts in the United States (each designed to contain about 1,200 to 8,000 residents), over half were eligible and over 8,700 were chosen. Among eligible census tracts, governors tended to designate as Opportunity Zones those with higher poverty rates and lower median incomes. The average poverty rate among Opportunity Zones in 2016 was 29 percent, compared with an average of 25 percent in all eligible census tracts and an average of 15 percent across all census tracts in the country (Gelfond and Looney 2018). In addition, rural areas make up almost a quarter of the tracts designated as Opportunity Zones (Economic Innovation Group 2018), exceeding the overall share of the population living in rural communities. Although the scale and flexibility offered by Opportunity Zones are new, place-based policies to encourage investment in distressed areas are not.
State and Federal Enterprise Zone programs generally offered tax incentives for businesses that located in certain areas or employed people who lived in such areas. Most studies found that Federal Empowerment Zone programs tended to increase employment and wages in designated areas, although there were no similar positive effects of State-based programs on employment (e.g., Neumark and Kolko 2010; Busso, Gregory, and Kline 2013). Another Federal initiative, the New Markets Tax Credit, is more similar to Opportunity Zones, in that it targets census tracts with low incomes and high poverty rates, and offers tax incentives for investment made in designated areas. Unlike Opportunity Zones, however, eligible investments are more restricted and must be preapproved by public authorities. The New Markets Tax Credit led to increased investment in targeted industries with some evidence of positive effects in reducing unemployment and poverty (Gurley-Calvez et al. 2009; Harger and Ross 2016; Freedman 2012). Bernstein and Hassett (2015) suggest that the effectiveness of previous place-based policies was limited by weak or misaligned incentives for investment, overly burdensome bureaucratic requirements, and limited scope for the types of investments that could be made. Opportunity Zones offer a means of flexibly investing in distressed areas without encumbrance by bureaucratic requirements. The scope of potential investment is large, with trillions of dollars in unrealized capital gains that could be harnessed. In addition, State and local governments have signaled their own efforts to complement Federal incentives in Opportunity Zones.
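The capital gains treatment described at the start of this box can be sketched with a short calculation. This is a hypothetical illustration, not tax guidance: the specific basis step-ups assumed below (10 percent of the deferred gain excluded after 5 years, a further 5 percent after 7 years) are statutory details that the box summarizes only qualitatively, and the sketch ignores the 2026 recognition deadline and the investor's tax rate.

```python
def oz_taxable_amounts(original_gain, holding_years, appreciation):
    """Return (taxable portion of the deferred original gain, taxable portion
    of the Opportunity Fund's own appreciation) under the assumed rules.
    Dollar amounts throughout; assumed statutory percentages, see above."""
    excluded_pct = 0
    if holding_years >= 5:
        excluded_pct += 10   # assumed 10% basis step-up at 5 years
    if holding_years >= 7:
        excluded_pct += 5    # assumed further 5% step-up at 7 years
    taxable_original = original_gain * (100 - excluded_pct) / 100
    # Gains accrued on the Fund investment itself are fully excluded at 10 years.
    taxable_appreciation = 0 if holding_years >= 10 else appreciation
    return taxable_original, taxable_appreciation

# A $100,000 deferred gain held 10 years, with $50,000 of Fund appreciation:
# only $85,000 of the original gain is ever taxed, and none of the appreciation.
print(oz_taxable_amounts(100_000, 10, 50_000))  # (85000.0, 0)
```

Shorter holding periods forfeit the benefits: at 6 years the investor owes tax on $90,000 of the original gain and on all the appreciation.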
At the Federal level, President Trump signed Executive Order 13853 on December 12, 2018, which establishes the White House Opportunity and Revitalization Council and directs Federal agencies to streamline Federal programs and offer greater flexibility to States to target public investment in Opportunity Zones whenever possible under current law. The large potential scale of Opportunity Zone investment, complemented by public efforts, could unleash substantial economic growth in the communities throughout the United States that have been most left behind.

people to work remotely from rural areas who would otherwise need to live closer to urban areas. With the importance of broadband access in mind, in January 2018 President Trump signed Executive Order 13821, which streamlines the process of expanding broadband to rural areas (White House 2018c). An additional priority of the Administration is encouraging private investment in areas that previously lacked private capital spending, so that economic growth can be spread more widely. A key component of this approach is the creation of Opportunity Zones in the Tax Cuts and Jobs Act, which encourages a wide spectrum of investment in infrastructure in rural areas and other communities where economic growth could be enhanced through this influx of capital (see box 3-4). Policies beneficial to rural communities are further promoted by the Strengthening Career and Technical Education for the 21st Century Act, which the President signed in July 2018, and by the Task Force on Agriculture and Rural Prosperity, which the President established in April 2017. The Strengthening Career and Technical Education for the 21st Century Act specifically benefits rural areas by allowing States to designate up to 15 percent of allocated funds to a reserve for targeted rural education and training needs.
The task force similarly promotes rural development by identifying and recommending policy changes that ensure good broadband access, improve the quality of rural life, support the rural workforce, harness technological innovation, and enhance economic development (White House 2017).

Conclusion

Given the historically low unemployment rates achieved in 2018, it is clear that maintaining the recent rapid pace of employment growth necessitates a better understanding of the reasons that some adults, particularly those of prime working age, remain outside the formal labor market. Especially because there are already more job openings than unemployed people looking for work, continued growth of the workforce requires overcoming the barriers that have kept some adults outside it. Fundamentally, if people voluntarily remain outside the labor market when there is a surplus of available jobs, it is an indication that they value their time spent on other activities above the amount that employers are willing to pay. As a result, central to expanding the number of people engaged in the labor force are policies that increase workers’ wages, decrease the fixed costs of entering the labor force, or remove distortions that cause some people whose most productive use of time is in the formal labor market to instead engage in other activities. As outlined in this chapter, the policies of the Trump Administration focus on each of these areas. The corporate tax rate reductions and expensing of business investment in equipment in the Tax Cuts and Jobs Act incentivized additional capital spending by employers, which in turn leads to higher productivity and larger wage gains. The individual tax cuts in the act similarly mean that workers keep a larger share of any wage earnings, so more potential workers will find it worthwhile to seek employment. Increased investments in human capital, along with physical capital, also increase the returns to work.
There is strong evidence that wages are higher and unemployment is lower among those with higher levels of education. Efforts to increase the education and skill levels of the American workforce, including the pledges from businesses secured by the Trump Administration to train or retrain over 6.5 million workers, should raise the potential wages and employment prospects of the recipients of this training. Removing regulatory distortions can further increase the likelihood that the returns to work are sufficiently high to draw additional adults into the labor market. These deregulatory efforts include reducing occupational licensing, which imposes a fixed cost on potential labor market entrants, and reducing regulations on paid child care, which raise the costs of child care and discourage parents from seeking formal employment. Although many policies to remove distortions and enhance workers’ productivity and wages are nationally focused, there has been a clear disparity between urban and rural areas in the recovery from the Great Recession. This geographic divide necessitates policies focused on the industries that are prevalent in rural communities, so that there are more employment opportunities for potential workers in rural areas throughout the country. The Administration’s focus on industries, including manufacturing and mining, that are disproportionately located in rural areas, as well as place-based policies such as the creation of Opportunity Zones, has the potential to broaden the scope of the Nation’s economic expansion to areas that did not experience strong employment gains in earlier years. Although the labor market faces headwinds as members of the Baby Boom generation reach traditional retirement age, these demographic trends do not dictate that the United States will face secular stagnation brought on by slow employment growth in the coming years.
Through the policies of this Administration, as discussed in this chapter, there is the potential to increase economic opportunities for all Americans by increasing the wages of those who are working and by drawing more people into the labor market than has been the case in recent years.

Chapter 4
Enabling Choice and Competition in Healthcare Markets

America is unique both in the extent to which it employs private markets to deliver and fund healthcare and in the quality of care provided. While there is substantial government involvement in healthcare regulation and funding, government payers often utilize private market mechanisms in their programs, and most Americans obtain their healthcare through private markets. The delivery of high-quality, innovative care in the United States is the result of market forces that enhance patients’ welfare by allowing parties to act in accord with their own, self-determined interests. Nevertheless, the ability of markets to provide affordable, high-quality care for the entire population and the value of government interventions in healthcare markets have been debated for decades. This chapter discusses the rationales commonly offered for the government’s intervention in healthcare and explains why such interventions often unnecessarily restrict choice and competition. The resulting government failures are frequently more costly than the market failures they attempt to correct. Though some features of healthcare—such as uncertainty, third-party financing through insurance, information asymmetry, barriers to entry, and inelastic demand—interfere with efficient market function, we argue that these features are neither unique to healthcare markets nor so disruptive that they mandate extensive government interventions. We contend that competitive markets for healthcare services and insurance can and do work to generate affordable care for all.
Current proposals to increase government involvement in healthcare, like “Medicare for All,” are motivated by the view that competition and free choice cannot work in this sector. These proposals, though well intentioned, mandate a decrease in or the elimination of choice and competition. We find that these proposals would be inefficiently costly and would likely reduce, rather than increase, the U.S. population’s health. We show that funding them would create large distortions in the economy. Finally, we argue that the universal nature of “Medicare for All” would be a particularly inefficient and untargeted way to serve lower- and middle-income people. We contrast such proposals with the Trump Administration’s actions that are increasing choice and competition in healthcare. In the health insurance arena, we focus on the elimination of the Affordable Care Act’s individual mandate penalty, which will enable consumers to decide for themselves what value they attach to purchasing insurance and will generate $204 billion in value over 10 years. Expanding the availability of two types of health insurance—association health plans and short-term, limited-duration health plans—will increase consumers’ choices and insurance affordability. We find that, taken together, these three sets of actions will generate a value of $453 billion over the next decade. For biopharmaceuticals, the Food and Drug Administration has increased price competition by streamlining the process for drug application and review. Record numbers of generic drugs have been approved, price growth has fallen, and consumers have already saved $26 billion during the first year and a half of the Administration. In addition, the influx of new, brand-name drugs resulted in an estimated $43 billion in annual benefits for consumers in 2018.
Data through the end of 2018 show that, for the first time in 46 years, the Consumer Price Index for prescription drugs fell in nominal terms—and even more in real terms—during a calendar year.

The dominant theory in economics for centuries in the Western world has been the efficiency of the free market system. For a free market to be efficient, free choice and competition must exist in the market to allow consumer demand to be met by suppliers. In markets, prices reveal economically important information about costs and consumers’ needs, and they send signals to both sides of the market to facilitate an efficient allocation of resources. Centrally set prices undermine this important allocative role of prices in the economy. Of course, many markets deviate in substantial ways from the conditions under which markets are perfectly efficient. Market failures occur to a greater or lesser extent throughout the economy. The important question is what to do about them. Market failures may be less damaging than the distortions and costs introduced by the various interventions intended to correct them. Following the research of Kenneth Arrow (1963), many economists and policymakers have argued that unique features of healthcare make it impossible for competition and markets to work. They claim that uncertainty in the incidence of disease and in the effectiveness of treatment, information asymmetry between providers and consumers of healthcare, barriers to provider entry, and the critical importance of and inelastic demand for health services all interfere with market function and justify government intervention in—or even its takeover of—healthcare markets. Some members of Congress have proposed nationalizing payments for the healthcare sector (which makes up more than a sixth of the U.S. economy) through the recent “Medicare for All” proposal.
This policy would distribute healthcare for “free” (i.e., without cost sharing) through a monopoly government health insurer that would centrally set all prices paid to suppliers such as doctors and hospitals. Private insurance would be banned for the services covered by the “Medicare for All” program. This chapter begins by critically examining the rationales offered for the government’s intervention in healthcare. We find that though some characteristics of healthcare may present obstacles to a perfectly functioning market, these are not insurmountable problems that mandate the government’s intervention; they can be overcome by market and nonmarket institutions. Moreover, these problems also occur in markets for many other goods without prompting calls for government takeovers and the suppression of consumer choice and competition. Government intervention in healthcare is only clearly warranted where the political process has determined that some level of healthcare for low-income people is a merit good—a beneficial good that would otherwise be underconsumed, justifying replacing consumer sovereignty with another norm—so that government redistribution programs providing healthcare in kind for low-income people might enhance efficiency. We next critique the “Medicare for All” proposal. This plan would eliminate choice and competition—everyone would be forced to participate in the same insurance, with mandatory premiums set through tax policy and without the option of choosing an alternative plan if they dislike the government’s plan. Our analysis shows that the proposal would reduce longevity and health in the United States, decrease long-run global health by reducing medical innovation, and adversely affect the economy through the large tax burden required to fund the program.
In contrast to proposals that diminish health and damage the economy by curtailing market forces, the next section of this chapter details the Trump Administration’s efforts to improve choice and competition in health insurance markets so as to help them better serve low- and middle-income people. The Administration has reduced the penalty associated with the Affordable Care Act’s (ACA’s) individual mandate to zero, so consumers can decide for themselves the value of purchasing health insurance. We analyze this deregulatory reform and find that it will generate $204 billion in value over 10 years. In addition, the Administration has increased the choices and affordability of available health insurance plans by expanding association health plans and by extending the available terms and renewability of short-term, limited-duration insurance plans. Far from sabotaging healthcare markets, these three deregulations of health insurance markets will, according to conventional incidence analysis by the CEA, together benefit Americans by $453 billion during the next decade. Finally, the last section discusses the Administration’s reforms to enhance choice and competition in biopharmaceutical markets by streamlining the drug application and review process in a way that effectively lowers barriers to entry while ensuring a supply of safe and effective drugs. This deregulatory effort has contributed to a record number of generic drug approvals since January 2017, resulting in slower price growth and savings of $26 billion over the first year and a half of the Administration. In addition, the influx of new, brand-name drugs since January 2017 has induced price reductions, resulting in an estimated $43 billion in annual benefits for consumers in 2018, even though the methods currently used to estimate changes in drug prices do not reflect this.
For the first time in 46 years, the Consumer Price Index for prescription drugs fell in both nominal and real terms during a calendar year. We conclude that the market for health insurance and healthcare should be supported through increased choice and competition, not hampered by increased government intervention. Competitive markets for healthcare services and insurance—whether privately or publicly funded—can and do work to provide high-quality care for people at all income levels.1

1. The CEA previously released research on topics covered in this chapter. The text that follows builds on the following research papers produced by the CEA: The Opportunity Costs of Socialism (CEA 2018c), The Administration’s FDA Reforms and Reduced Biopharmaceutical Drug Prices (CEA 2018a), and Deregulating Health Insurance Markets: Value to Market Participants (CEA 2019).

Rationales for the Government’s Healthcare Interventions That Restrict Competition and Choice

This section reviews the specific rationales for the government’s intervention in healthcare markets and argues that they are often exaggerated; are not unique to healthcare; and, when present in markets for other types of goods and services, have not been used to call for government control. In a market economy, free choice among competing suppliers generally leads to an efficient allocation of resources and maximizes consumer welfare. In the market system that predominates in the United States, people are mostly free to spend their own money and are therefore more careful in deciding how much to spend and on what to spend it than when governments spend money on their behalf. Fiscal and regulatory policies that limit choice and competition distort allocations and reduce consumer welfare relative to what it would be in the absence of these policies. Unfortunately, every market has features that deviate from the ideal of a perfectly functioning market, and healthcare is no exception.
Some argue that specific features of healthcare make it unsuitable for the market mechanisms that we employ in the rest of the economy. Fifty-six years ago, the economist Kenneth Arrow published a seminal article identifying ways in which healthcare deviates from perfectly competitive markets and thus could generate an inefficient allocation of resources (Arrow 1963). The primary factors he identified included:
1. Uncertainty in the incidence of disease and in the effectiveness of treatment, and hence in the likelihood of recovery.
2. Information asymmetry between providers of medical services and patients, who lack an understanding of disease processes and treatments.
3. Barriers to entry that limit the supply of providers, including the need to attend selective medical schools and State licensing standards that include educational and training requirements. These barriers can be imposed by the government (licensing) or by private parties who often have a financial interest in limiting the supply of their service (limited admission to medical schools, residency and fellowship training programs, and specialty board certification, which is often needed to obtain hospital privileges).
Arrow (1963, 947) pointed out that these features lead to inefficient markets, and “when the market fails to achieve an optimal state, society will, to some extent at least, recognize the gap, and nonmarket social institutions will arise attempting to bridge it. . . . The medical-care industry, with its variety of special institutions, some ancient, some modern, exemplifies this tendency.” This section discusses the healthcare features that Arrow pointed out, the adaptations to them that can create problems of their own, and additional factors that some claim justify government intervention, through either public financing or public production in healthcare.
We find that many of the arguments for the value of intervention have been exaggerated and that the costs of market failures in healthcare are often lower than the costs of government interventions undertaken to remedy them.

Uncertainty, Third-Party Payments, and the Problem of Moral Hazard

The primary institution that has arisen in response to the uncertainty inherent in healthcare is private health insurance, or third-party payment. Insurance mitigates the financial risk of getting sick and allows risk-averse individuals to pool the risk. This pooling of risk across the population enhances welfare by reducing the financial risk of uncertain illness events for each individual. Nevertheless, some have argued that the widespread adoption of third-party insurance in healthcare creates its own problems that warrant government intervention. It has long been recognized that there is a trade-off between risk reduction through insurance and appropriate incentives at the time of care (Zeckhauser 1970). Payment after the time of service via third parties, such as private or public insurance plans, mutes the incentives of patients to shop based on quality and price, and it therefore negates market mechanisms, leading to the problem of overconsumption relative to production costs, or moral hazard. Normally, the risk against which insurance is purchased should be out of the individual’s control. In healthcare, costs largely depend on the choice of a doctor and the willingness of this doctor and the patient to use medical services. Health insurance can thus increase the very risk that is insured against: medical costs. Moreover, because medical insurance limits considerations of cost as services are consumed, “widespread medical insurance increases the demand for medical care” (Arrow 1963, 961).
By inserting third-party control over payments, “insurance removes the incentive on the part of individuals, patients, and physicians to shop around for better prices for hospitalization and surgical care” (Arrow 1963, 962). Health insurance reduces the price that an individual faces at the point of care to zero or, if there is a copay or coinsurance, to something greater than zero but still less than the cost of the service as reflected by the market price. This is a recipe for wasteful spending and a welfare loss to society. The primary way insurers deal with moral hazard is through cost sharing—deductibles, copayments, and coinsurance—to discourage overutilization by moving consumers up the demand curve. The RAND Health Insurance Experiment of the 1970s and early 1980s randomly assigned patients to health plans with different levels of cost sharing. It showed that higher consumer cost sharing leads to lower utilization, with little discernible impact on health (Newhouse 1993). Cost-sharing provisions have become far more common and burdensome for patients over the past few years. Nevertheless, unless cost sharing is quite high, it cannot eliminate moral hazard. However, moral hazard is less of a problem than it at first appears to be, and it has important lessons to impart about the proper role of health insurance. Although seeking extra medical care because of insurance is rational economic behavior for an insured individual who gets to spread the cost over all other insured people, the presence of moral hazard suggests that “some uncertain medical care expenses will not and should not be insured in an optimal situation” (Pauly 1968, 537). The problem presented by moral hazard only clearly applies to items where we would expect zero (or very low) prices to lead to overuse—things like “routine physician’s visits, prescriptions, dental care, and the like”—but not necessarily to serious illnesses (Pauly 1983, 83).
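The cost-sharing tools described above (a deductible, percentage coinsurance, and an out-of-pocket maximum) jointly determine the price a patient actually faces at the margin. A minimal sketch of the arithmetic, using hypothetical plan parameters rather than any plan studied in the RAND experiment:

```python
def out_of_pocket(total_charges, deductible, coinsurance_pct, oop_max):
    """Patient's out-of-pocket spending under a plan with a deductible,
    percentage coinsurance on charges above the deductible, and an
    out-of-pocket maximum. Hypothetical plan; dollar amounts throughout."""
    if total_charges <= deductible:
        spend = total_charges  # everything below the deductible is paid by the patient
    else:
        spend = deductible + (total_charges - deductible) * coinsurance_pct / 100
    return min(spend, oop_max)  # the out-of-pocket maximum caps the patient's exposure

# $1,000 deductible, 20 percent coinsurance, $6,000 out-of-pocket maximum:
print(out_of_pocket(20_000, 1_000, 20, 6_000))   # 4800.0
print(out_of_pocket(100_000, 1_000, 20, 6_000))  # 6000
```

In this example the patient pays 20 cents per additional dollar of care until the out-of-pocket maximum binds, after which the marginal price falls to zero. That below-cost marginal price is precisely the moral-hazard wedge discussed above, and it is why cost sharing dampens, but cannot eliminate, overutilization.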
In the case of invasive surgeries, painful treatments and tests, and medications with serious side effects, patients would be unlikely to overutilize them, regardless of how low the costs were (Nyman 2004). No one would have their gallbladder or pancreas removed, undergo chemotherapy, or endure a bowel preparation for a colonoscopy simply because the services were free—they would only utilize these services to treat or diagnose serious illnesses.2 In other words, moral hazard is predominantly a problem when insurance covers routine or nonessential, discretionary services (e.g., cosmetic surgery) that most economists think should not be covered by insurance. It is not a problem for medical expenditures for the serious, costly, and unpredictable illnesses and treatments that most economists would agree should be covered by health insurance. For serious illnesses, insurance may promote additional spending that is likely to enhance welfare, because the patient would have purchased the care himself or herself had insurance given him or her cash instead of directly paying for the service (Nyman 2004). The interposition of third-party payment seems less problematic when we consider that insurers must compete to attract enrollees. In the process, they act as agents for those enrollees, selecting and contracting with high-quality providers through networks or other means and negotiating favorable prices with these providers. The rigors of the market, perforce, help align private, third-party payers' actions with buyers' preferences. But the same cannot be said for third-party public payers. Unlike private insurers, which must compete on price, public payers need not compete. This makes private payers more likely than public payers to act as agents for patients.

2. The incidence of disease may respond to costs in the long run (see the comparisons of short- and long-run factors below).
For example, the price of treating a disease may affect people's behaviors or their treatment of antecedent conditions so that the incidence of the disease ultimately changes.

Asymmetric Information

A common argument for the government's intervention in healthcare markets is that there is asymmetric information—that is, sellers know more than buyers about the nature and quality of the service being sold. Although this is true in virtually any market, in industries ranging from legal services to automobile repair, academics and policymakers often single out healthcare for government intervention. This is despite the fact that market and nonmarket mechanisms have developed to deal with such information issues, usually at far lower cost than government alternatives. A nonmarket institution that Arrow (1963) identified as developing to deal with information asymmetry was professional medical ethics, together with the trust that physicians would be motivated more by fiduciary obligations to their patients than by profits. Trust is particularly important because patients are prone to rely on their physician's advice regarding what care is needed and where to obtain it (Chernew et al. 2018). Whether ethical and professional standards always succeed is a matter of debate, but it is likely that they—in combination with legal obligations to the patient—do alleviate the problem of information asymmetry. The advent of the Internet as a readily available information source—and the push for healthcare providers to provide medical information to patients through the now-universal legal requirement for informed consent—has decreased the asymmetry problem since Arrow (1963) wrote 56 years ago.
In addition, because 90 percent of healthcare spending is on patients with chronic conditions (Buttorff, Ruder, and Bauman 2017), these patients have the opportunity to gain knowledge from experience and to be highly informed relative to buyers in other markets. They learn which treatments work best for them and which have intolerable side effects, which providers are most knowledgeable and responsive, and where care can be most readily and cheaply obtained. Moreover, most people care deeply about healthcare. They are far more likely to seek out and utilize knowledge about healthcare than about, for instance, buying a vacuum cleaner. The information asymmetry problem has also been mitigated by the fact that third-party payers, rather than patients, are often the real buyers of healthcare. Employers, in their roles as insurers and purchasers of healthcare, and third-party insurers pay for most of the care received, and they are far more informed than buyers in most markets. Indeed, they often know as much as the sellers about the set of products or services they are considering buying. Many payers explicitly quantify the costs and benefits of what they buy before actually paying for it—for example, through so-called cost-effectiveness analysis. These buyers act as agents for patients by excluding from networks or formularies providers or products that do not meet quantitative cost-benefit criteria. The utilization of quantitative purchasing metrics creates a more informed demand side in healthcare.
Barriers to Market Entry

Arrow (1963, 966) posited that though trust and delegation "are the social institutions designed to obviate the problems of informational inequality," licensing and educational certification standards were developed to reduce consumers' uncertainty "as to the quality of the product insofar as this is possible." Arrow acknowledged that this adaptation to market imperfection creates its own problems for the efficient functioning of the healthcare market—barriers to entry, which, among other problems, inefficiently limit the supply of healthcare providers. Licensing and educational certification standards are not unique to healthcare; our society is awash in licensing and education requirements, from those for lawyers to those for hairdressers, that restrict market entry. What makes healthcare unique is the pervasiveness of these requirements and the fact that they are imposed by both public (licensing) and private parties, which often have a financial interest in limiting the supply of their service. Medical schools and residency training programs, run by physicians and medical institutions, select their enrollees; and graduation is a prerequisite for licensing. Moreover, certification by privately run specialty board societies is often needed to obtain hospital privileges. Licensing and minimum-quality standards can control entry, can assure quality in markets where there is information asymmetry between providers who know the quality of their service and consumers who do not, or can entail some combination of both (Stigler 1971; Leland 1979). Although they undoubtedly interfere with market efficiency, licensing and quality standards seem far more reasonable in medicine than they do for hairdressers. The reason is that trial and error works well when you can recover from the errors, but not when a provider's errors can result in irrevocable harm.
Arrow (1963) suggested that there are three approaches to dealing with uncertainty about a provider's qualifications and licensing: (1) allow licensing and exclude nonqualified entrants; (2) certify or label entrants as qualified without compulsory exclusion; and (3) do nothing and allow consumers to make their own choices. In an often incorrectly cited statement about these alternatives for licensing—not, as some have mistakenly maintained (Reinhardt 2010), a statement about the need for government-provided health insurance—Arrow (1963, 967) wrote, "It is the general social consensus, clearly, that the laissez-faire solution for medicine is intolerable."

The Inelastic Demand for Healthcare

Some argue that the importance of medical care and the often-emergent nature of that care make it impossible for healthcare markets to work efficiently. Patients have neither the time nor the inclination to shop on the basis of price and quality. In circumstances like a trip to the emergency room after a car accident or a heart attack, choice is often impossible. Patients may also have little choice when, after their initial choice of hospital and physician for elective procedures, they become captive to a host of other services and providers that they cannot effectively choose. Thus, the context in which the service is provided, rather than the nature of the service itself, often determines whether consumers have the opportunity to make choices. For example, a computerized axial tomography (CT) scan of the head as part of a workup for an ongoing neurological problem allows a choice among service providers, but a similar CT scan for an acute head trauma does not. Likewise, patients considering surgery for an aortic aneurysm can consider which surgeon and hospital best suit their needs but do not have the luxury of choice when their aneurysm is rupturing.
This issue is reflected in the price elasticity of demand for healthcare services—how much the quantity demanded changes in response to changes in price. Although the range of estimates for the price elasticity of demand for healthcare is relatively wide, it tends to center on –0.17, meaning that demand is relatively price inelastic (Ringel et al. 2002). Studies of the price elasticity of demand for medical services, however, suggest that cheaper, more routine purchases—for example, preventive care and pharmacy benefits—have larger price elasticities than expensive, emergent care. Similarly, the demand for outpatient services is more price sensitive than the demand for hospital stays (elasticities of –0.31 and –0.14, respectively); and unlike the situation for adults, price changes have no effect on the quantity of inpatient services demanded for children. It is reasonable to assume that demand for serious or emergency care—for example, treatment for a trauma or for newly diagnosed cancer—is very inelastic. This is consistent with the basic economic observation that demand becomes more price elastic over time. In the short run (e.g., in an emergency), demand may be relatively inelastic because there are few substitutes and consumers do not have time to look for alternatives. But elasticity increases in the longer term, as substitutes become available and consumers have time to shop. A related way to assess the possibility of healthcare choice and competition is to determine whether healthcare services are "shoppable"—that is, whether patients can schedule when they will receive care, compare and choose among multiple providers based on price and quality, and determine where they will receive services. Despite the issues presented by emergency care, people can shop for most healthcare services.
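Using the elasticities cited above, a constant-elasticity approximation (our simplifying assumption, not a claim from the cited studies) shows how weakly quantity demanded responds to price:

```python
def pct_quantity_change(elasticity, pct_price_change):
    """Approximate percentage change in quantity demanded for a given
    percentage price change, under a constant-elasticity assumption."""
    return elasticity * pct_price_change

# Elasticities cited in the text (Ringel et al. 2002):
# overall -0.17, outpatient -0.31, inpatient -0.14.
for label, e in [("overall", -0.17), ("outpatient", -0.31), ("inpatient", -0.14)]:
    change = pct_quantity_change(e, 10)  # response to a 10% price increase
    print(f"{label}: {change:.1f}% change in quantity demanded")
```

Even for the most price-sensitive category, a 10 percent price increase reduces quantity demanded by only about 3 percent.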
A study of people under 65 with employer-provided insurance found that 43 percent of healthcare services are potentially shoppable by consumers (Frost and Newman 2016). But the study did not include spending on prescription drugs, which are generally shoppable as well. When the 11 percent of healthcare spending that goes to prescription drugs is added in, a majority of healthcare spending (43 + 11 = 54 percent) is shoppable. In a study of 2011 claims by auto workers, shoppable services were reported as accounting for 35 percent of total healthcare spending, with inpatient shoppable services accounting for 8 percent of total spending and outpatient shoppable services accounting for 27 percent (White and Eguchi 2014). Yet this study, like the one cited above, counted prescription drugs as part of total spending but did not include them in the shoppable category. When drugs are added in, shoppable goods and services accounted for 56 percent of healthcare spending. The study also found that shoppable services are common and constitute a high percentage of the inpatient services provided, even though inpatient care is considered less shoppable than outpatient care. Of the 100 highest-spending diagnosis-related groups (i.e., categories of medical problems that determine payment for hospital stays) for inpatient care, 73 percent were shoppable; of the 300 highest-spending diagnosis-related groups for outpatient care, 90 percent were shoppable. The implication is that nonshoppable services, though a minority of services provided, are much more expensive and therefore represent a larger percentage of spending. The literature is mixed on whether patients consider information on price and quality in making healthcare choices. Many reports find that patients do not utilize current price information tools to shop for healthcare.
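The back-of-envelope shares above can be reproduced directly. Note that the drug share for the second study is not stated in the text; it is inferred here from the difference between the two reported totals.

```python
# Frost and Newman (2016): 43% of spending shoppable, excluding drugs;
# prescription drugs, generally shoppable, are another 11% of spending.
shoppable_share = 43 + 11
print(shoppable_share)  # 54 -> a majority of spending is shoppable

# White and Eguchi (2014): 35% shoppable excluding drugs, 56% with
# drugs included, implying drugs were about 56 - 35 = 21% of spending
# in their claims data (an inference, not a figure stated in the text).
implied_drug_share = 56 - 35
print(implied_drug_share)  # 21
```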
In a recent study of one shoppable service (magnetic resonance imaging, or MRI, scans of the lower limbs), few patients (less than 1 percent) consulted a free price transparency guide, and they did not select their provider based on overall prices or their out-of-pocket costs (Chernew et al. 2018). This is consistent with other studies showing that though a majority of plans now provide pricing information to their enrollees, only 2 to 3.5 percent of enrollees look at it (Frakt 2016). A study of employee behavior in the year before and after an online price transparency tool was introduced at two large companies operating in multiple market areas found that only a small percentage of employees used the tool, and its introduction was not associated with a decrease in healthcare spending (Desai et al. 2016). Nevertheless, a study of enrollees in Medicare Part D prescription drug plans indicates that they will respond to a choice of low-cost options by switching from expensive to less expensive plans (Ketcham, Lucarelli, and Powers 2015). Experiments with reference pricing—a system of payment in which an employer or insurer pays, with the usual coinsurance and copay provisions, up to a maximum "reference" price for a nonemergency health service, and patients are responsible for all costs above that price—have found that consumers will shift to lower-price providers (Robinson, Brown, and Whaley 2017). Similarly, a systematic review of the literature found limited evidence about the effect of quality information on patient choice and concluded that current attempts to provide comparative data have a limited impact (Faber et al. 2009).
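The reference-pricing arrangement described above can be sketched as a payment-splitting rule; the dollar amounts and coinsurance rate below are hypothetical.

```python
def reference_pricing_split(provider_price, reference_price, coinsurance_rate):
    """Split a nonemergency bill between patient and insurer under
    reference pricing: the insurer covers its usual share only up to the
    reference price, and the patient owes everything above it."""
    covered = min(provider_price, reference_price)
    patient = covered * coinsurance_rate + max(provider_price - reference_price, 0)
    insurer = provider_price - patient
    return patient, insurer

# With a $1,500 reference price and 20% coinsurance, a $2,000 provider
# costs the patient 0.20 * 1500 + 500 = $800, while a $1,400 provider
# costs only 0.20 * 1400 = $280 -- a strong incentive to switch.
print(reference_pricing_split(2000, 1500, 0.20))  # (800.0, 1200.0)
print(reference_pricing_split(1400, 1500, 0.20))  # (280.0, 1120.0)
```

The design choice is that the patient, not the insurer, bears the full marginal cost above the reference price, which restores price sensitivity for providers priced above it.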
Nevertheless, there is evidence—based on a study of three conditions (heart attacks, heart failure, and pneumonia) and two common surgical procedures (hip and knee replacements) that together account for a fifth of Medicare hospitalizations and hospital spending—that higher-quality hospitals (as measured by rates of risk-adjusted survival, readmissions, and adherence to practice guidelines) attract a greater market share at a point in time and also grow more over time (Chandra et al. 2016a). This positive correlation between hospital quality and market share was strongest for patients who were not emergency admissions and therefore had more scope for choice. The reported failure of patients to consider available price and quality information may reflect the quality and ease of access of the information tools assessed rather than an unwillingness of patients to shop based on price and quality. A confounding factor in assessing healthcare shoppability is the way healthcare consumers shop. After selecting their physician, they are prone to rely on his or her advice regarding what care is needed and where to obtain it. In the study of lower-limb MRIs described above, the referring physician was the primary determinant of where patients received their MRI, and most physicians referred to a narrow group of providers—each orthopedist sent, on average, 79 percent of his or her referrals to a single radiologist (Chernew et al. 2018). This referral pattern could be problematic amid the current wave of health system consolidation, particularly vertical integration. Referring physicians who work for hospitals within vertically integrated networks were far more likely to refer to providers within that hospital network, and MRIs performed by hospital-based providers are generally more expensive than MRIs performed by out-of-hospital providers.
Having a vertically integrated referring physician raised the cost of an MRI by 36.5 percent and the amount paid by the patient by 31.9 percent. Concentration in provider markets leads to market power that interferes with patients' ability to shop for insurance and medical services. It is standard economic theory that monopolies and oligopolies lead to an inefficient allocation of resources and to waste. But the government has standard approaches for dealing with this problem, such as antitrust enforcement and regulatory changes that encourage competition and discourage unfair business advantages. These methods, not government financing or a government takeover, are the appropriate solution for the concentration of market power in healthcare markets. The Administration's report, Reforming America's Healthcare System Through Choice and Competition (HHS 2018), discusses the important role played by the antitrust divisions of the Federal Trade Commission and the Department of Justice.

Healthcare Is Not Exceptional

Healthcare is not unique in having features that lead to the departures from market efficiency that Arrow outlined 56 years ago, or that others have since espoused. Most people know far less about the workings of their car than their auto mechanic does. And there is uncertainty about when a person will have an accident or suffer a car breakdown and whether the mechanic's intervention will successfully restore the car's functions. Barriers to market entry in the form of licensing and education requirements cover hundreds of different professions and service providers, often with little demonstrable gain. And healthcare is not the only market with relatively inelastic demand. The question for healthcare, as well as for every sector of the economy, is: What is the optimal way to deal with market inefficiencies?
Government intervention is not the only, or even an obvious, answer, and it can be as inefficient and costly as private market failures—often even more so. Market failure is ubiquitous, in the sense that all the conditions for perfect competition are rarely achieved, so failure occurs to a greater or lesser extent throughout the economy. Various types of failures can be thought of as externalities—that is, as "nonmonetary effects not taken into account in the decisionmaking process"—when parties engage in transactions (Zerbe and McCurdy 1999, 561). The question then becomes how to minimize the transaction costs involved in eliminating or reducing these externalities. The relationship between hospital quality and market share described above suggests that competition and market forces—which would normally exert pressure on low-productivity firms to become more efficient, shrink, or exit the market—are playing a role in healthcare services (Chandra et al. 2016a). Another study found that, despite the conventional wisdom that idiosyncratic features of the healthcare sector—like consumer ignorance of quality, and the lack of price sensitivity resulting from health insurance—would lead to wide variation in healthcare productivity, the dispersion of productivity across hospitals treating heart attacks is similar to or smaller than the productivity dispersion across a large number of U.S. manufacturing industries (Chandra et al. 2016b). Because productivity dispersion has been shown both theoretically and empirically to decrease with greater competition, this suggests that healthcare may not be more insulated from demand-side competitive pressures than other sectors. Taken together, these studies "suggest that, contrary to the long tradition of 'healthcare exceptionalism' in health economics, the healthcare sector may have more in common with 'traditional' sectors subject to standard market forces than is often assumed" (Chandra et al. 2016b, 102).
Redistribution and Merit Goods

Although less often discussed by economists, a legitimate justification for the government's intervention in healthcare is that healthcare is a merit good, whose consumption is valued not only by the patients who consume it but also by the third parties that finance this consumption. Broadly speaking, a merit good is one for which society has made a judgment that the merits (or demerits) of a particular good or service require superseding consumer sovereignty with an alternative norm (Durlauf and Blume 2008). This occurs when society judges that the good will be underconsumed in a free market economy because of a divergence between the private benefits individuals take into account and the actual benefits to the public. Such goods should be subsidized so that consumption does not entirely depend on ability and willingness to pay.3 Virtually every high-income country, including the United States, has made a collective judgment that healthcare and health insurance provide greater utility than some consumers can afford. American society, through the political process, has therefore been willing to redistribute income to subsidize healthcare for low-income people, with the efficient level of redistribution determined by the preferences of the population. Under such merit motives, providing healthcare in kind through programs like Medicaid, rather than through cash transfers to people who make purchases based on their own preferences, is optimal and efficiency enhancing. This is the reverse of the moral hazard problem, in which pricing below cost decreases efficiency by inducing beneficiaries to consume more healthcare than they otherwise would. "Under merit motives such pricing below cost does not create moral hazard and, indeed, enhances efficiency" (Mulligan and Philipson 2000, 22).
However, this sort of paternalistically motivated merit good transfer program may be far less progressive than a conventional analysis of lump-sum income transfers would suggest. Despite international agreement that governments have a role in funding, to a greater or lesser extent, health insurance, few countries (the United Kingdom being the notable exception) actually pay for and provide healthcare for all. And a survey of 19 countries, including both developed and developing ones (i.e., China and India), shows that they all allow private funding and provision of healthcare and private health insurance (Mossialos et al. 2017). Budgetary constraints and societal priorities and preferences for how to utilize limited resources impose a practical limit on merit motives. Several States—after enacting legislation (Vermont, in 2014), having failed ballot initiatives (Colorado, in 2016), or experiencing stalled legislation (California, in 2017)—have not followed through on single-payer healthcare initiatives because of financing concerns (Weiner, Rosenquist, and Hartman 2018).

3. Merit goods should not be confused with public goods, which must be provided by the government because the private market will not supply them. Public goods differ from private goods (including merit goods) because they are nonexcludable—i.e., the supplier of the good cannot prevent people who do not pay for it from consuming it—and nonrival—i.e., consumption by one person does not make the good unavailable to others (Durlauf and Blume 2008). The classic example is national defense. In protecting the Nation from attack for one person, we cannot easily exclude others from being protected, even if they are unwilling to pay. And one person's consumption of protection does not lessen the amount of protection others can consume. Healthcare, in contrast, is both excludable and rival.
Current Proposals That Decrease Choice and Competition

This section discusses current proposals to increase the government's involvement in healthcare that are partly motivated by the view that competition and free choice cannot work in healthcare. Here, we assess the proposals by many members of Congress for "Medicare for All" that would nationalize payments for the healthcare sector, which makes up more than a sixth of the U.S. economy. Some claim that only the government can take advantage of economies of scale in healthcare and that a government healthcare monopoly will be more productive by avoiding "waste" on administrative costs, advertising costs, and profits and by using its bargaining power to obtain (i.e., dictate) better deals from healthcare providers. A recent proposal sponsored or cosponsored by 141 members of Congress (S. 1804; H.R. 676), titled "Medicare for All" (M4A), would distribute healthcare for "free" (i.e., without cost sharing) through a monopoly government health insurer that would centrally set all prices paid to suppliers such as doctors and hospitals. This proposal would make it unlawful for a private business to sell health insurance or for a private employer to offer health insurance to its employees. Although President Obama promised, contrary to fact, that consumers could keep their health insurance plans under the ACA, M4A takes the opposite approach: all private health insurance plans would be prohibited after a four-year transition period. Instead of relying on competition and individual choice to control prices, M4A would lower them by fiat. M4A's ban on private competition would be even more restrictive than the healthcare plans of other countries and other government programs in the United States. For example, the government does not ban private schools, even though it collects taxes to run a public school system. Education providers—a.k.a.
teachers—can still work at private schools, and parents can forgo free public education and pay private school tuition. Under the M4A bill, patients would have no insurance alternatives. Health providers, though not government employees, would have no choice but to receive their income and instructions from the Federal government or from the relatively few people who could afford to purchase expensive medical services without insurance. A major issue for M4A is the low productivity of government programs in translating tax revenues into outputs valued by participants, such as improved health. This problem is common with in-kind programs like government-provided healthcare, where beneficiaries often do not value the healthcare that is provided as much as the money that is spent on it. According to the Centers for Medicare & Medicaid Services (CMS 2017), in 2016 about $7,590 was spent per U.S. Medicaid beneficiary. If Medicaid beneficiaries were given this spending to allocate as they see best, most would not spend it all on health insurance. In the Oregon Medicaid expansion experiment, Finkelstein, Hendren, and Luttmer (2015) found that Medicaid enrollees valued each additional $1 of government Medicaid spending at only $0.20 to $0.40 (also see Gallen 2015). Similarly, a study of Medicaid-like coverage provided through Massachusetts' low-income health insurance exchange found that most enrollees valued their coverage at less than half its cost (Finkelstein, Mahoney, and Notowidigdo 2017). A second issue is inefficient financing. The price paid to this government monopoly in health insurance, the analogue to the revenue received by private plans, would be determined through tax policy. M4A would be neither more efficient nor cheaper than the current system, and it could adversely affect health.
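A rough illustration of the valuation gap applies the Oregon per-dollar estimates to the CMS per-beneficiary spending figure. This treats the marginal-dollar estimates as if they applied to the average dollar, a simplification made purely for illustration.

```python
spending_per_beneficiary = 7590  # CMS (2017) figure for 2016, in dollars
low_value, high_value = 0.20, 0.40  # value per $1 (Finkelstein et al. 2015)

# Implied beneficiary valuation of the spending, under our simplifying
# assumption that the per-dollar estimates apply to every dollar spent.
low = spending_per_beneficiary * low_value
high = spending_per_beneficiary * high_value
print(f"${low:,.0f} to ${high:,.0f} of implied value "
      f"per ${spending_per_beneficiary:,} spent")
```

Under these assumptions, beneficiaries would value roughly $1,500 to $3,000 of the $7,590 spent on their behalf.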
As we show below, evidence on the productivity and effectiveness of single-payer systems suggests that M4A would reduce longevity and health, particularly among the elderly, while only minimally increasing the fraction of the population with health insurance. In the near term, it would lead to shortages and decreased access to care. And in the long run, M4A could decrease quality by decreasing innovation. A smaller economy would be another likely adverse effect, due to M4A's disincentives to work and earn. The CEA has calculated that if M4A were financed solely through higher taxes, it would reduce long-run gross domestic product (GDP) by 9 percent and household incomes, after taxes and health expenditures, by 19 percent (see chapter 8 of this Report for further discussion).

Implications for the Value of the Program and Health Outcomes

M4A would replace the existing private and public system for financing health insurance. That system includes private, group insurance for about half the population; government insurance for lower-income households, with essentially zero out-of-pocket expenses, through the Medicaid program, which covers 21 percent of the population; Medicare for the elderly and the nonelderly disabled, which covers 14 percent of the population (including traditional Medicare, with cost sharing in the form of deductibles and coinsurance; privately run Medicare Advantage plans, which compete against other Advantage plans and traditional Medicare for enrollees and insure about a third of Medicare recipients; and privately run Medicare Part D plans for prescription drug coverage); and the individual, nongroup market, which covers 7 percent of the population and consists of the ACA exchanges and nonexchange plans (Kaiser Family Foundation 2017a, 2017b).
The existing system also provides uncompensated emergency care—because the 1986 Emergency Medical Treatment and Labor Act requires hospitals to treat anyone coming to their emergency departments, regardless of their insurance status or ability to pay—as well as uncompensated nonemergency care delivered by various providers. Therefore, changing the financing of healthcare would leave limited room to improve health among U.S. citizens by expanding insurance coverage. The current system includes some non-Medicaid-eligible citizens who remain uninsured, but by all estimates they are healthy people, which is why they choose not to purchase an ACA plan (CBO 2017). M4A would determine quality and productivity through centrally planned rules and regulations. In contrast to a market with competition, if a patient did not like the tax charged or the quality of the care provided by the government monopoly, he or she would have no other insurance options. In addition, price competition in healthcare itself, as opposed to health insurance, would be eliminated because all the prices paid to providers and suppliers of healthcare would be set centrally by the single payer. Despite its name, "Medicare for All" differs from the currently popular Medicare program by eliminating cost sharing; by preventing private health plans from competing, as they do in the Medicare Advantage and Part D programs; by preventing private markets from supplementing the public program; and, according to the bill in the House of Representatives, by prohibiting provider institutions from participating in the program unless they are public or not-for-profit entities. Moreover, even if M4A made no changes to Medicare operations, it would still face the problem of taking a program that functions reasonably well for about a sixth of the population and making it work on a vastly larger scale.
Under the existing system, the primary financial limits on healthcare utilization are copayments, coinsurance, and deductibles, which keep premiums lower by discouraging overconsumption of care that is free at the time of service. M4A would eliminate these out-of-pocket expenses for everyone. If the aggregate supply of healthcare were held unchanged, M4A would reduce health and longevity by reallocating healthcare from high-value uses to lower-value ones. In addition, M4A would reduce the aggregate supply of healthcare by reducing payments to providers, by discouraging innovation, and by using a centralized bureaucracy to allocate resources. We expect that healthcare for the elderly people who are currently covered by Medicare would be especially adversely affected, through decreased access to care and decreased longevity. Here, we review the evidence on the relationship between single-payer programs, healthcare, and health outcomes, including short-run effects, which assume no impact on medical innovation, as well as long-run effects that incorporate changes in incentives for innovation and the resulting impact on future health.

Economies of Scale and Administrative Costs in Insurance

Many M4A advocates argue that the major benefit of adopting single-payer healthcare would be that the costs of producing health insurance under a state monopoly would be lower than under competition. Some evidence on this comes from the literature on the so-called administrative costs of health insurance—costs that do not directly go toward paying for beneficiaries' care. In order to hold regulation constant, Sood and others (2008) analyzed administrative costs within a single State, California. They measured administrative costs and profits as the residual of the premium revenue not spent directly on beneficiaries' healthcare.
They found that in 2006, private plans spent about 12 percent on administrative costs and had profit levels that were significantly below the average for all Standard & Poor’s 500 companies (5 vs. 7.5 percent), which, given the existence of government plans, makes profits only 2 or 3 percent of overall health spending. The CBO (2016) found that private plans spent 13 percent of their premium revenues on administrative expenses and that 2 percent were profits. In contrast, Sood and others (2008) found that Medicare’s costs were 5 percent, plus the administrative costs of intermediaries that collect premiums and process Medicare claims. However, the putative efficiency of Medicare administration by the CMS compared with private insurers may simply be a product of inadequate accounting. Medicare patients—the elderly, the disabled, and patients with end-stage renal disease—are sicker and costlier than the younger enrollees in private plans. Medicare’s administrative costs as a share of medical spending are smaller mainly because medical spending is higher for the Medicare population compared with the population below 65 that is privately insured—nearly two and a half times higher per person (Book 2009). In addition, insurers’ administrative costs do not rise proportionally with total health claim costs—most administrative expenses are fixed per program or are incurred on a per-beneficiary basis, and claims processing costs represent a very small share of administrative costs. If we look at administrative costs per enrollee, we find that Medicare is less efficient than private insurers (Kessler 2017). Sood and others (2008) found that Medicare spends $471 per enrollee on administrative costs, close to the $493 in for-profit plans, and actually above the $427 spent across all California health plans.
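As a back-of-the-envelope check (our own arithmetic, not a calculation from the cited studies), the per-enrollee dollar figures and the percentage shares reported by Sood and others (2008) jointly imply much higher total spending per Medicare enrollee, which is what makes the share-based comparison misleading:

```python
# Hypothetical back-of-the-envelope check using figures reported by
# Sood and others (2008): per-enrollee administrative dollars divided by
# the administrative share imply total spending per enrollee.
medicare_admin_dollars = 471   # Medicare admin cost per enrollee (dollars)
medicare_admin_share = 0.05    # Medicare admin costs as a share of spending
private_admin_dollars = 493    # admin cost per enrollee, for-profit plans
private_admin_share = 0.12     # private plans' admin share of premiums

implied_medicare_spending = medicare_admin_dollars / medicare_admin_share
implied_private_spending = private_admin_dollars / private_admin_share

print(round(implied_medicare_spending))  # 9420 dollars per enrollee
print(round(implied_private_spending))   # 4108 dollars per enrollee
print(round(implied_medicare_spending / implied_private_spending, 2))  # 2.29
```

The implied spending ratio of roughly 2.3 is broadly in line with the “nearly two and a half times” per-person spending difference cited from Book (2009).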
Similarly, Book (2009) found that as a proportion of total costs in 2005, Medicare’s administrative costs were 5.8 percent, compared with 13.2 percent for private insurance; but Medicare’s administrative costs per person were $509, compared with $453 for private insurance. An additional reason that administrative spending by private insurers artificially appears higher than that by the CMS for Medicare is that private insurers’ administrative costs include State premium taxes, from which the CMS is exempt, and directly provided medical services—such as disease management services and nurse consultation telephone lines—that are not counted as paid medical claims (Book 2009). Philipson (2013) found that the focus on administrative costs omits other important costs, and forgone opportunities, of the state monopoly approach. Under a government monopoly health insurer, the plan is financed with taxes rather than voluntarily paid premiums. As is discussed below, the economic cost of taxes is not merely the revenue that arrives in the Treasury but also the distortions of household and business decisions induced by taxes. This applies to administrative costs as well, so that $1.00 in administrative costs in the private sector is equivalent to about $1.50 in administrative costs in the public sector. In addition, claims of Medicare superiority ignore the vital role that private “administrative” expenses—such as marketing, profits, and utilization controls—play in driving competition and innovation in the marketplace. Administrative costs also help prevent fraud and improper payments, which are estimated to be about 8 and 10 percent of Medicare and Medicaid spending, respectively (HHS 2018).4 Furthermore, private plans reduce overall costs by aggressively reviewing healthcare utilization.
As a result of competition among plans, lower overall expenses are passed on to consumers as lower premiums, even though a greater percentage of those expenses may be administrative. In contrast, a public program does not engage in premium competition. Beneficiaries, workers, and shareholders of private plans would not tolerate the higher premiums or lower wages or dividends that would be the result of lax utilization controls or high levels of fraud. Healthcare providers, as distinct from health plans, also spend significant time and resources on administrative costs (Woolhandler, Campbell, and Himmelstein 2003; Himmelstein 2014). Some of these costs serve the economic functions noted above, such as controlling fraud and overutilization; but others are specifically related to billing. It has been asserted (Weisbart 2012) that a single-payer system would eliminate many billing-related expenses, but these savings may not materialize, because providers would likely need to struggle with voluminous new Federal regulations issued to deal with the myriad different circumstances that could arise among the 325 million people who would be on the single government plan. It is unlikely that a government-run monopoly’s efforts to lower healthcare costs by eliminating profits and marketing would be any more effective than government monopoly efforts in other sectors of the economy. In many other industries, economists have generally found that production costs under a monopoly are higher than with competition. Monopolies that are owned in whole or in part by the government incur higher costs than private corporations that operate competitively. The seminal research by Boardman and Vining (1989) found robust evidence that government-owned and mixed enterprises are less efficient than private corporations. 
More recent work has examined the inefficiencies and higher costs incurred by public monopolies in the education and corrections sectors (Hoxby 2014; Gaes 2008). Once these factors are taken into account, Medicare’s efficiency advantage becomes illusory, even if abnormal profits and marketing were eliminated from the private sector.
4 Overpayments were about $32 billion in Medicare Fee-for-Service and $36 billion in Medicaid. Underpayments accounted for only 3 percent and 1 percent, respectively, of the programs. The Medicaid Fraud Control Unit reports (Murrin 2018) that over the last five years, fraud has accounted for nearly 75 percent of all its convictions.

Cross-Country Evidence on the Effects of Universal Healthcare on Health Outcomes and the Elderly

Proponents of M4A often refer to European-style programs of socialized medicine as their role model, but the European programs appear to deliver less healthcare to the elderly and result in worse health outcomes for them.5 Many of these programs ration older patients’ access to expensive procedures directly or through waiting times (Cullis, Jones, and Propper 2000). Such age discrimination in coverage occurs because there is no competition between plans under a monopoly. If there were, presumably private plans—which would be outlawed under M4A—would emerge to offer the care not adequately covered by the government monopoly. Current Medicare beneficiaries would likely be hurt by M4A’s expansion of the size of the eligible program population. The evidence for a trade-off between universal and senior healthcare is supported both by the European single-payer experience, which limits care for the elderly compared with the U.S., and by the recent domestic U.S. reforms under the ACA, which reduced projected Medicare spending by $802 billion to help fund expansions for younger age groups (CBO 2015).
The United States’ all-cause mortality rates relative to those of other developed countries improve dramatically after the age of 75 years. In 1960—before Medicare—the U.S. ranked below most EU countries for longevity among those age 50–74, yet above them for those age 75 and higher. This pattern persists today. Ho and Preston (2010) argue that a higher deployment of life-saving technologies for older patients in the U.S. compared with other developed countries leads to better diagnosis and treatment of the diseases of older people, and to greater longevity. The availability and utilization of healthcare are particularly important for cancer longevity. Cancer is the leading cause of death in many developed countries, especially among older individuals, and it constitutes an important component of overall U.S. healthcare spending. Philipson and others (2012) found that U.S. cancer patients live longer than cancer patients in 10 EU countries after the same diagnosis, due to the additional spending on higher-quality cancer care in the U.S. Figure 4-1 shows the results for life expectancy after diagnosis.6 Ho and Preston (2010) point out that in Europe, where the proportion of surgically treated patients declines with age, five-year survival rates for colorectal cancer are lower for elderly patients than for younger patients. But in the United States, where utilization of surgery does not decline with age, colorectal cancer survival rates do not decline for elderly patients.
5 Note that a number of European countries—including Belgium, Germany, and Switzerland—have universal healthcare without having a single-payer system.
6 The difference between the two continents is not attributable to a different propensity to screen for cancer in the U.S.

[Figure 4-1. Average Survival from a Cancer Diagnosis, 1983–99. Life expectancy postdiagnosis (years) for the United States and the European Union, by diagnosis period (1983–85 through 1995–99). Source: Philipson et al. (2012). Note: The results are standardized by age, gender, and cancer site. EU countries for which survival data were consistently available over the analysis period are included: Finland, France, Germany, Iceland, Norway, Slovakia, Slovenia, Scotland, Sweden, and Wales.]

This effect is not confined to cancer treatment. For ischemic heart disease—the world’s leading cause of death—the use of cardiac catheterization, percutaneous coronary angioplasty, and coronary artery bypass grafting declines with patients’ age, but it declines more steeply in other developed countries than in the United States. Compared with these developed countries, the U.S. has a lower case fatality rate for acute myocardial infarction (the acute manifestation of ischemic disease) for older persons, but not for younger persons age 40 to 64 (Ho and Preston 2010). This disease-specific evidence is more informative about the benefits of healthcare than often-discussed cross-country comparisons of nationally aggregated outcomes, such as overall population longevity and aggregate healthcare spending. There are many determinants of overall population health other than healthcare—such as diet, exercise, genes, and violence—that differ across countries (CEA 2018b). These factors may lead to lower U.S. longevity even while U.S. healthcare is of higher quality. The fact that many wealthy foreigners who could afford to obtain care anywhere in the world come to the U.S. for specialized care is perhaps the strongest indication of its superior quality. The general pattern of medical tourism is that the United States exports high-quality care while importing low-cost care (Woodman 2015).
The Lower Quality of Universal Coverage, in Terms of Reduced Availability

Another major quality attribute of healthcare is how long one must wait to receive it. The highest-quality care may be ineffective if there are delays in diagnosis or treatment. For example, delays in diagnosing or treating cancer will cause decreased survival and increased suffering, regardless of how good the care is. This major dimension of the quality of care may fall with government expansions of care, as they generate excess demand and thereby may induce queues with waiting times to access care. Because it is “free” at the time of service, the single-payer, universal-coverage system gives consumers more reason to consume healthcare (Arrow 1963; Pauly 1968). The Rand Health Insurance Experiment documented that as the amount of coinsurance decreased, utilization of medical care rose (Newhouse 1993; Brook et al. 2006). M4A cuts the out-of-pocket expenses that people in private insurance and the current Medicare system pay (about 70 percent of the insured population) to zero (Kaiser Family Foundation 2017a). In addition, when it cuts provider reimbursement rates, a single-payer system gives the healthcare industry less reason to supply it.7 Something must determine who gets the scarce provider resources, and quality degradation is the typical way that markets make this determination when prices are unable to do so (Mulligan and Tsui 2016). The quality degradation may take the form of shorter appointment times, longer patient travel times, or longer waiting times to receive care. Waiting times for nonemergency or elective surgery were shorter for adults (18 and older) in the U.S. than in 10 other developed countries, especially those with a single-payer system. Table 4-1 shows that 61 percent of Americans waited less than 1 month after being advised that they needed surgery.
The comparable figures for Canada and the United Kingdom, two countries frequently cited as models by M4A advocates, were 34.8 percent and 43.4 percent, respectively. Similarly, table 4-2 shows that only two countries (Germany, at 71.2 percent; and Switzerland, at 73.2 percent) had a slightly higher percentage of patients able to see a specialist within 4 weeks of referral than the U.S. (69.9 percent), and neither of these countries has a single-payer system (Mossialos et al. 2017). The figure for Canada was 38.0 percent, and that for the U.K. was 48.6 percent. In a recent report, the CEA (2018c) pointed out that waiting times for seniors to see a specialist in the U.S. were shorter than in single-payer countries (figure 4-2). Some argue that this shows that Medicare, and thus its distant cousin “Medicare for All,” works and should be extended to everyone. This is a misinterpretation.
7 M4A reduces payments to providers (subtitle B of Title VI of the Senate “Medicare for All” Act of 2017).

Table 4-1. Adult Waiting Times for Nonemergency or Elective Surgery, 2016
Country | Less than one month (percent) | Between one and four months (percent) | Four or more months (percent) | Do not know or decline to answer (percent) | Total (count)
Australia | 56.8 | 28.3 | 8.4 | 6.6 | 683
Canada | 34.8 | 44.0 | 18.2 | 3.0 | 557
France | 51.4 | 47.0 | 1.6 | 0.0 | 173
Germany | 39.0 | 58.1 | 0.0 | 2.9 | 124
Netherlands | 48.9 | 39.8 | 4.5 | 6.9 | 99
New Zealand | 43.3 | 38.6 | 14.9 | 3.2 | 141
Norway | 37.0 | 41.9 | 15.3 | 5.8 | 208
Sweden | 37.3 | 46.8 | 11.8 | 4.1 | 1,015
Switzerland | 59.3 | 32.8 | 6.5 | 1.5 | 219
United Kingdom | 43.4 | 31.8 | 12.0 | 12.8 | 87
United States | 61.0 | 31.7 | 3.6 | 3.7 | 268
Source: Commonwealth Fund Survey.
Note: Respondents answered the survey question, “After you were advised that you needed surgery, how many weeks did you have to wait for the non-emergency or elective surgery?”

Table 4-2. Adult Waiting Times for Specialist Appointments, 2016
Country | Less than four weeks (percent) | At least four weeks (percent) | Do not know or decline to answer (percent) | Total (count)
Australia | 54.7 | 39.3 | 6.1 | 2,156
Canada | 38.0 | 58.5 | 3.5 | 2,228
France | 60.2 | 39.8 | 0.0 | 639
Germany | 71.2 | 27.4 | 1.4 | 459
Netherlands | 64.0 | 28.9 | 7.1 | 580
New Zealand | 49.3 | 47.3 | 3.3 | 404
Norway | 36.9 | 55.5 | 7.7 | 605
Sweden | 48.1 | 44.7 | 7.2 | 3,251
Switzerland | 73.2 | 25.9 | 0.9 | 810
United Kingdom | 48.6 | 42.5 | 8.9 | 371
United States | 69.9 | 25.3 | 4.8 | 1,019
Source: Commonwealth Fund Survey.
Note: Respondents answered the survey question, “After you were advised to see or decided to see a doctor in specialist health care/specialist (or consultant), how many weeks did you have to wait for an appointment?”

[Figure 4-2. Seniors Who Waited at Least Four Weeks to See a Specialist during the Past Two Years, 2017 (percent, by country; single-payer versus non–single-payer systems). Canada had the highest share, at 59 percent, and the United States the lowest, at 21 percent. Sources: Canadian Institute for Health Information; Ghanta (2013); Commonwealth Fund survey. Note: Single-payer systems were compiled by Ghanta (2013) from World Health Organization sources. CMWF average refers to the average of the 11 countries in the Commonwealth Fund survey. Results exclude respondents who never attempted to get an appointment.]

All that figure 4-2 shows is that the current Medicare system, which mixes public and private elements—including competition between hundreds of Medicare Advantage plans and between hundreds of Medicare Part D drug plans and public and private financing—is superior to foreign, single-payer systems (see chapter 8 for more discussion). It does not indicate that Medicare is superior to the insurance currently available for the non-Medicare U.S. population. And it has little bearing on what to expect from M4A.
M4A is not simply an expansion of Medicare. It is a completely different program that bans private insurance and competition, and that anticipates a system-wide lowering of reimbursement levels below private insurance rates. According to the CMS Actuary, lowering private provider rates to current Medicare rates would lead to a drop of about 40 percent in hospitals’ reimbursements and 30 percent in physicians’ reimbursements by 2022, decreases that are scheduled to grow even greater over time due to statutory Medicare payment restraints enacted as part of the ACA and the Medicare Access and CHIP Reauthorization Act of 2015 (CMS 2018; Blahous 2018b). These lower reimbursement rates would undoubtedly prolong waiting times and worsen access to care, because providers respond to reimbursement levels. In a study of Medicaid fees, every $10 change up or down led to a 1.7 percent change in the same direction in the proportion of patients who could secure an appointment with a new doctor (Candon et al. 2017). Even more worrisome, Medicare’s hospital payment rates are, on average, so far below hospitals’ reported costs of providing services that the CMS Actuary projects that by 2019, over 80 percent of hospitals will lose money treating Medicare patients. If this projection is correct, M4A would force 80 percent of hospitals to lose money when treating all their patients (Blahous 2018b). One does not need to go abroad to see the problems with single-payer medicine. The Veterans Health Administration (VHA) is a publicly funded, single-payer system that provides care to military veterans. Its government-employed providers, particularly medical specialists, are underpaid compared with the private market and lack the motivation to provide care that market competition for profits generates.
In 2014, it was widely reported that the Phoenix VHA facility, along with several other facilities, had kept large numbers of veterans waiting inordinate amounts of time to receive treatment and that some had died while waiting (Farmer, Hosek, and Adamson 2016). Many of the facilities had falsified records in order to meet the VHA’s target of providing appointments within 14 days. Using the VHA’s own data, outside researchers found tremendous variation in waiting times across VHA facilities. Although most veterans get care within 2 weeks of their preferred appointment dates, a significant number wait more than 60 days, and only half reported getting care “as soon as needed” (Farmer, Hosek, and Adamson 2016, 9). The Veterans Access, Choice, and Accountability Act of 2014 created a temporary plan—the Choice Program—to give veterans the option of receiving care from a private, community-based provider when timely care is unavailable from a VHA facility. Unfortunately, the program had limited success—veterans were still experiencing lengthy actual waiting times for appointments in 2016 (GAO 2018). In June 2018, President Trump signed the VA MISSION Act of 2018 to extend funding for the Choice Program and to improve it by consolidating it over the next year with six other programs offering community-based care into the single Veterans Community Care program. This statute aims to minimize the inconsistent experience that veterans receive by requiring the VHA to standardize access to care, assess the system’s capacity to provide the care required, establish a high-performing national network of providers to offset capability gaps, and transition the VHA to an integrated healthcare system.

A U.S. Single-Payer System Would Have Adverse Long-Run Effects on Global Health through Reduced Innovation

There has been much theoretical and empirical economic analysis concluding that lowering prices for innovative industries often has short-run benefits that are dominated by long-run costs. Lowering prices by having a single payer for innovative healthcare technologies is analogous to reducing patent terms, for both reduce the return to medical research-and-development (R&D) investments. Both have short-term benefits, lowering prices for existing technologies—but at the cost of reducing the flow of new technologies that ultimately lower the real price of healthcare. The value of healthcare generated by innovation over time exceeds its additional costs (Cutler 2004). The lower premiums of the 1970s bought lower-quality care than is available today—no one today would settle for a 1970s level of care. Forty years of innovations have raised prices, but they have raised the value of healthcare even more. Some innovations are very expensive—for example, today’s specialty drugs—and others are relative bargains—such as antibiotics, new treatments for heart attacks that cost $10,000 in real terms but add a year of life expectancy (Cutler and McClellan 2001), and new cancer treatments in the 1980s and 1990s that cost an average of only $8,670 per year of life gained (Philipson et al. 2012). Other innovations add little value. Though it is often impossible to know in advance which innovation will be a good value, it is imperative to preserve the incentive to innovate so there will continue to be new, high-value innovations. Because worldwide innovation relies so heavily on the U.S. market to support it, adopting an M4A program would likely adversely affect innovation because the global market for new innovations would shrink. A large body of literature looks at the effects of market size on innovation.
For example, using the passage of the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 as a source of variation, Blume-Kohout and Sood (2013) find an elasticity with respect to market size of between 2.4 and 4.7 for Phase 1 clinical trials. These estimates are well within the range of the work of Acemoglu and Linn (2004), who find an elasticity of 3.5 for approved new molecular entities. Moreover, these results are consistent with evidence on the impact of public policy on market size.8 Although these long-run effects on a reduced pace of innovation are more difficult to quantify, they may well be more important than the short-run effects of spending less on elder care. U.S. patients and taxpayers alike have financed the returns on R&D investments to innovators. Unlike other developed countries with single-payer systems, which nearly all impose some sort of price controls, the U.S. market has less public sector financing and is therefore more open to market forces. In a free market, prices of products reflect their value, as opposed to prices in government-controlled markets, which reflect political trade-offs. Among the nations that belong to the Organization for Economic Cooperation and Development, more than 70 percent of patented pharmaceutical profits come from sales to U.S. patients, even though the United States represents only 34 percent of the organization’s GDP at purchasing power parity (CEA 2018a). Empirical research on pharmaceutical innovation and other industries has shown that R&D investments are positively related to market size. For the case of medical innovation, evidence suggests that a 1 percent reduction in market size reduces innovation—defined as the number of new drugs launched—by as much as 4 percent (Acemoglu and Linn 2004). Given that future profitability drives investment in this way, Lakdawalla and others (2009) examined the impact on medical innovation of the U.S. adopting European-style price controls. The study examined patients over the age of 55 and considered the reduction in R&D and new drug approvals that these price controls would cause. The paper found resulting increases in mortality due to heart disease, hypertension, diabetes, cancer, lung disease, stroke, and mental illness. Given that innovations are financed by world returns that are mostly earned in the U.S., the mortality effects were substantial, both in the U.S. and in Europe (figure 4-3).
8 For example, Finkelstein (2004) finds a 2.5-fold increase in the number of new vaccine clinical trials for affected diseases following the adoption of three public health policies aimed at raising vaccination rates, and Yin (2008) finds that the introduction of the Orphan Drug Act raised the flow of new clinical trials for rare diseases by 182 percent in the three years following the passage of the policy.

[Figure 4-3. Effect of U.S. Drug Price Controls on Global Longevity, Among Those Age 55–59, 2010–60. Change in life expectancy (years) for the U.S. and EU populations, 2010 through 2060. Source: Lakdawalla et al. (2009). Note: Data were estimated by the author based on the Global Pharmaceutical Policy Model.]
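Treating the Acemoglu and Linn (2004) elasticity as locally constant, the implied innovation loss from a shrinking market can be sketched as follows; the 10 percent market contraction is an illustrative assumption of ours, not an estimate from the literature:

```python
# Illustrative sketch: market-size elasticity of pharmaceutical innovation.
# The elasticity of 3.5 is from Acemoglu and Linn (2004); the assumed
# 10 percent market contraction is hypothetical, for illustration only.
elasticity = 3.5
market_size_change = -0.10   # assumed 10 percent shrinkage of market size

# Local, constant-elasticity approximation of the change in new drug flow.
innovation_change = elasticity * market_size_change
print(round(100 * innovation_change, 1))  # -35.0 (percent fewer new drugs)
```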
If M4A would lead to the same below-market pharmaceutical prices that other countries have imposed through government price controls, it would reduce the world market size and thereby medical innovation, and ultimately mean that future patients would forgo the health gains that would have come from these forgone innovations.

Financing “Medicare for All”

Apart from M4A’s effects on the amount and quality of healthcare provided, there is the issue of how it would be financed and what impact this decision would have on the overall economy. The CMS, which administers most government-financed healthcare, projects that in 2022 the private sector will spend $1.47 trillion on private health insurance and $0.46 trillion in out-of-pocket health expenses, in an economy with a total GDP of $24.35 trillion (National Health Expenditure Accounts projections; CEA 2018c). Because healthcare is free at the time of service to users under M4A, and otherwise would not be “free” for those not enrolled in government programs, M4A would increase healthcare utilization at the Federal government’s expense. Blahous (2018a) predicts that there would be extra utilization of $0.44 trillion in 2022. Adding this figure to the private health insurance and out-of-pocket expenses it would replace would lead to a total addition to Federal spending of $2.37 trillion in 2022. Without M4A, $2.37 trillion would be 9.7 percent of GDP, or 11.7 percent of consumption, or an average of about $18,000 per household (CEA 2018c). An even larger amount of Federal health spending would occur if the most comprehensive list of covered services were adopted in reconciling the Senate and House M4A bills. The CEA (2018c) found that paying for M4A solely with uniform spending cuts across all existing Federal programs would require 53 percent across-the-board cuts in 2022. Without additional taxes, all other Federal programs would need to be cut by more than half.
This would imply cuts to Social Security of about $0.7 trillion, to (the existing part of) Medicare of about $0.4 trillion, and to the Defense Department’s budget of about $0.4 trillion. If Medicare were exempted, 79 percent of Social Security (about $1.0 trillion per year) would need to be cut, and annual Defense cuts would need to be about $0.6 trillion. Alternatively, M4A could be financed solely with taxes. Some argue that the population would be no worse off because these new taxes would simply replace the cost of premiums paid to private sector insurers. This argument ignores the fact that taxation distorts economic activity, so that the cost of tax revenues is larger than the revenues themselves. The excess burden, or “deadweight loss,” reflects the decrease in economic efficiency and output over and above the tax revenue collected. To illustrate, if the government imposed a per-passenger tax of $100,000 on air travel, it would collect virtually no revenue because almost no one would fly, but it would impose a large burden on the population in excess of the revenue collected by replacing air travel with less efficient cars and other types of ground transportation. The existing empirical literature finds that this burden is about 50 cents on the dollar, so that the cost of collecting the taxes to fund M4A in a year would be about 1.5 times the additional revenue needed to fund the larger program (Feldstein 1999; Saez, Slemrod, and Giertz 2012; Weber 2014).9 Between the two extreme funding scenarios—funding M4A entirely by cuts in spending or entirely by tax increases—lies a middle ground of using a combination of both spending cuts and tax increases.
9 The excess burden rate is larger, and potentially infinite, when considering particularly large increases in revenue, as with M4A. Also see chapter 8 of this Report for additional perspective on the excess burden of M4A.
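The financing arithmetic in the paragraphs above can be verified directly. All dollar figures are the cited 2022 projections, in trillions; the 50-cents-per-dollar excess burden is the literature estimate cited in the text:

```python
# Check of the 2022 M4A financing projections cited in the text
# (trillions of dollars; CEA 2018c and Blahous 2018a).
private_insurance = 1.47    # projected private health insurance spending
out_of_pocket = 0.46        # projected out-of-pocket health expenses
extra_utilization = 0.44    # Blahous's projected additional utilization
gdp = 24.35                 # projected GDP

new_federal_spending = private_insurance + out_of_pocket + extra_utilization
print(round(new_federal_spending, 2))               # 2.37 trillion
print(round(100 * new_federal_spending / gdp, 1))   # 9.7 percent of GDP

# With a marginal excess burden of about 50 cents per dollar of revenue,
# the full economic cost of tax financing exceeds the revenue raised.
excess_burden_rate = 0.5
economic_cost = (1 + excess_burden_rate) * new_federal_spending
print(round(economic_cost, 3))  # 3.555 trillion
```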
This approach was followed in the recent Federal healthcare expansion under the ACA, whereby funding was split between tax increases and spending cuts to Federal healthcare programs (CBO 2009). It is unclear whether sufficient tax revenue could be collected for the much larger proposed M4A program, given the existence of tax avoidance behavior, particularly by the higher-income populations that provide the largest share of total Federal tax revenues. If the maximum amount of revenue that could be collected, the height of the so-called Laffer curve, were below what would be required in new funding, then spending cuts would be required, regardless of whether lawmakers would prefer to finance the entire program with taxes.

The Administration’s Actions to Increase Choice and Competition in Health Insurance

In contrast to the policies curtailing market forces that are advocated in “Medicare for All” proposals, this section details the Trump Administration’s efforts so far to improve choice and competition in health insurance markets in order to help them better serve lower- and middle-income people. As part of its broader policy agenda to deregulate markets, the Trump Administration has completed three deregulatory reforms that expand consumers’ health insurance options: (1) reducing, through the Tax Cuts and Jobs Act of 2017, the ACA’s individual mandate penalty to zero; (2) issuing a June 2018 rule expanding the ability of small businesses to form association health plans (AHPs) to provide low-cost group health insurance to their employees; and (3) issuing an August 2018 rule expanding the term, renewability, and usefulness of short-term, limited-duration insurance (STLDI) plans. As discussed above, several market failures are relevant to health insurance. Taking the relevant market failures into account, we use the standard methods of welfare economics to assess the potential efficiency gains to affected consumers and taxpayers.
We find that these deregulatory actions will generate benefits to Americans worth about $453 billion over the next 10 years (CEA 2019). The reforms will benefit lower- and middle-income consumers and all taxpayers, but will impose small premium increases on some middle- and higher-income consumers. The benefits of giving a large group of consumers more insurance options far outweigh the projected costs imposed on the smaller group that will pay higher premiums. These reforms do not sabotage the ACA; they provide a more efficient focus of tax-funded care on those in need. In this section, we examine in depth the most productive of the reforms, elimination of the individual mandate penalty, which will benefit Americans by $19 billion (including the deadweight cost of taxation) in 2021, when the markets will have largely adjusted to the reform, and by $204 billion between 2019 and 2029. Though we will briefly mention the other two reforms, AHPs and STLDIs, they are discussed at length in chapter 2.

The Stability of the Nongroup Health Insurance Market

The ACA’s proponents argued that three key components of the statute were essential and had to work together for the act to be economically viable—the so-called three-legged stool (see Gruber 2010). The first leg of the stool is guaranteed issue and community rating, whereby consumers must be offered coverage without the premium varying because of a preexisting condition or health status.10 The second leg of the stool is the individual mandate penalty on the remaining uninsured population, so that healthy consumers do not wait until they are ill to sign up. The third leg of the stool is a system of subsidies, so that lower- and middle-income consumers can afford coverage. Under this view, deregulatory reforms that expand health insurance options beyond the ACA’s insurance markets risk destabilizing the ACA insurance markets.
The relatively healthy consumers who might best respond to expanded options are seen as critical sources of ACA insurance-market revenue because their premiums are expected to exceed their healthcare claims.11 However, several features of the insurance market undermine this argument. Most important, the claim that the individual mandate is indispensable is flawed, due to the large ACA premium subsidies that most ACA exchange enrollees receive. The view that deregulation sabotages the ACA is based on the assumption that the premiums paid by unsubsidized healthy consumers are a critical source of exchange revenue.12 Federal subsidies are far more important. Figure 4-4 displays the annual premiums on the exchanges as a function of family income and composition. Only consumers who are ineligible for premium subsidies—those with incomes above 400 percent of the Federal poverty line on the exchanges and everyone with ACA-compliant coverage off the exchanges—actually pay the entire premium. There were 14.4 million people in the nongroup market in the first quarter of 2018: 10.6 million on the exchanges, and only 3.8 million off the exchanges in both ACA-compliant and noncompliant plans (Kaiser Family Foundation 2018). In 2018, only 13 percent of consumers (1.4 million) who purchased insurance on the ACA exchanges did not receive subsidies and therefore paid the full premium.13 The other 87 percent of exchange consumers (9.2 million) received subsidies through the ACA's premium tax credits and so paid just a fraction of the full premium. Many of these subsidized people also received cost-sharing reduction subsidies to reduce their out-of-pocket costs if their income was between 100 and 250 percent of the Federal poverty line and they purchased a Silver exchange plan. ACA-compliant coverage is sold both on and off the ACA's exchanges, but subsidies are available only for coverage purchased on the exchanges. Including the two types of ACA-compliant individual market coverage (on and off the exchanges), which share a common risk pool and have the same premiums, about 38 percent of consumers who purchased ACA-compliant individual insurance paid the full premium in 2017. The percentage of unsubsidized consumers in the individual market has fallen every year from 2015 to the present as premiums have risen. The regulatory reforms expand insurance options. To the extent that the consumers who leave the ACA exchanges for these options are healthier than average, their departure will somewhat raise gross premiums for those who remain on the exchanges. But for subsidized consumers who remain on the exchanges, the premium increases will be mainly paid by taxpayers, not the consumers themselves.

10. Premiums are allowed to vary within a narrow range based upon age (3:1 adjustment) and smoking status.
11. When it adopted the ACA, Congress itself evidently believed that the individual mandate was necessary to a regulatory system that included guaranteed issue and community rating. Congress expressly found that the individual mandate was "essential to creating effective health insurance markets in which improved health insurance products that are guaranteed issue and do not exclude coverage of preexisting conditions can be sold" and that "the absence of the [individual mandate] would undercut Federal regulation of the health insurance market" (42 U.S.C. § 18091).
12. This is closely related to "adverse selection": the departure of a healthy person from a risk pool is purported to be adverse in that it reduces plan premium revenue more than it reduces claims. Due to the ACA subsidies, adverse selection will operate differently, in that subsidized healthy persons will have less incentive to leave the ACA exchanges.
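The subsidized and unsubsidized exchange-population shares cited in the text follow from simple arithmetic on the cited Kaiser Family Foundation enrollment figures; a minimal check:

```python
# Arithmetic behind the exchange-population shares cited in the text
# (first quarter of 2018; figures in millions, from the Kaiser Family
# Foundation data the CEA cites).
on_exchange = 10.6    # total ACA exchange enrollees
unsubsidized = 1.4    # exchange enrollees paying the full premium
subsidized = on_exchange - unsubsidized   # 9.2 million

unsubsidized_share = unsubsidized / on_exchange
subsidized_share = subsidized / on_exchange

print(f"{unsubsidized_share:.0%} paid the full premium")       # 13%
print(f"{subsidized_share:.0%} received premium tax credits")  # 87%
```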
Although the CBO projects that setting the individual mandate tax penalty to zero will encourage healthier-than-average enrollees to leave the ACA exchanges, the CBO also projects that their departure will reduce Federal expenditures on ACA premium subsidies from 2018 through 2027 by $185 billion (CBO 2017; Gruber 2010).14 Of course, the CBO's projections of Federal expenditures are uncertain. But figure 4-4 shows the origin of these projections: for consumers with family incomes below 400 percent of the Federal poverty line, the individual mandate penalty taxes them for turning down large amounts of government assistance. The role of the ACA premium subsidies in stabilizing the exchanges has been acknowledged by others, including the previous Administration (CEA 2017; Sacks 2018; Collins and Gunja 2018). The premium subsidies' stabilizing role is consistent with the experience of the past few years, in which rising premiums did not curtail demand. ACA exchange premiums have almost doubled in just a few years (figure 4-5), though there has been hardly any change in exchange enrollment.15 Figure 4-5 demonstrates that the U.S. Treasury (i.e., taxpayers) shouldered almost the entire premium increase for ACA plans. Even though gross premiums almost doubled between 2014 and 2018, lower- and some middle-income consumers were insulated from the effects of these increases by the subsidies. Although there may also have been other factors at work, these trends are consistent with the CBO's (2017, 2018) projections that further increases in the full exchange premiums (usually referred to as "gross" premiums) will not destabilize the ACA exchange markets. Between 2018 and 2019, benchmark ACA premiums dropped by 1.5 percent.

[Figure 4-4. Premium Costs as a Function of Household Income, 2018. The figure plots annual unsubsidized and subsidized premiums (dollars) against household income from $25,000 to $105,000. Source: Kaiser Family Foundation Subsidy Calculator. Note: Data represent the national average premium for a family of four with two 50-year-old adults and two teenagers with no tobacco use.]

[Figure 4-5. Nominal Gross Premiums per Member per Year for Subsidized Enrollees, 2014–18. The premium tax credit rose from $3,128 in 2014 to $3,149, $3,504, $4,521, and $6,520 in 2015 through 2018, while the out-of-pocket premium cost was nearly flat: $1,447, $1,480, $1,509, $1,527, and $1,529 over the same years. Source: Kaiser Family Foundation Subsidy Calculator. Note: Data represent the average national premium for a single, nonsmoking 50-year-old at 200 percent of the Federal poverty line with no children.]

The individual mandate penalty adds an unnecessary leg to the ACA stool, resulting in economic inefficiencies. Comprehensive insurance, particularly with extremely low cost sharing, could cause patients to overconsume healthcare that provides little benefit relative to the cost—the moral hazard problem discussed above.16 The significant decline in premium subsidies as income rises also distorts labor markets by taxing income and some types of full-time employment and introduces another marriage penalty in the tax code (Mulligan 2015). Consumers have heterogeneous preferences for risk, smooth cash flow, and range of coverage. As such, it is wasteful to use a tax penalty to coerce people to purchase insurance that does not meet their needs (Mulligan and Philipson 2004). Many "health insurance simulation models" ignore moral hazard and any effect of health insurance policy on labor market equilibrium. Those simulations therefore rule out by assumption many of the benefits of allowing consumers to voluntarily leave ACA-compliant plans (Gallen and Mulligan 2018). In sum, the three-legged-stool justification for the individual mandate tax penalty is not consistent with the basic facts of how the ACA works in practice. The penalty and other restrictions on consumer choice are not needed to support the guaranteed issue of community-rated health insurance to all consumers, including those with preexisting conditions. The ACA premium subsidies stabilize the exchanges.

13. "Grandfathered" plans that were in effect when the ACA was passed are exempt from some of the ACA's provisions. The fraction of workers with employer-sponsored insurance enrolled in grandfathered plans decreased from 56 percent in 2011 to 16 percent in 2018 (Kaiser Family Foundation 2018). During a transitional period, another set of "grandmothered" plans has also been exempt from certain ACA provisions.
14. Taking into account all the effects of setting the individual mandate penalty to zero, the CBO projects a $338 billion reduction in Federal expenditures from 2018 through 2027, $179 billion of which will be a reduction in Federal expenditures on Medicaid (CBO 2017).
15. Figure 4-5 does not include cost-sharing reduction payments or reinsurance payments. Fiedler (2018) calculates that cost-sharing reduction payments were equivalent to about 9 percent of average exchange premiums in 2017. Part of the premium increase between 2017 and 2018 was attributable to the nonpayment of cost-sharing reduction payments in 2018.
16. The 2018 Economic Report of the President (CEA 2018b) discusses the large body of evidence that health insurance coverage, and presumably the additional healthcare consumed by consumers as a result of it, provides little health benefit.

Setting the Individual Mandate Tax Penalty to Zero

The ACA's individual mandate imposed a monetary penalty on nonexempt consumers who did not have ACA-compliant coverage.
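The claim above that taxpayers shouldered almost the entire premium increase can be checked with quick arithmetic on the premium components shown in figure 4-5; the values below are read from the figure as extracted, so treat the exact pairing of years and dollar amounts as an assumption:

```python
# Figure 4-5's stacked premium components (nominal dollars per member per
# year for the illustrative subsidized enrollee, as read from the figure):
# the premium tax credit plus the enrollee's out-of-pocket premium cost.
tax_credit    = {2014: 3128, 2015: 3149, 2016: 3504, 2017: 4521, 2018: 6520}
out_of_pocket = {2014: 1447, 2015: 1480, 2016: 1509, 2017: 1527, 2018: 1529}

# Gross premium = tax credit + out-of-pocket portion.
gross = {y: tax_credit[y] + out_of_pocket[y] for y in tax_credit}

growth = gross[2018] / gross[2014]  # gross premium nearly doubled
taxpayer_share = (tax_credit[2018] - tax_credit[2014]) / (gross[2018] - gross[2014])

print(f"gross premium grew {growth:.2f}x")                         # 1.76x
print(f"taxpayers absorbed {taxpayer_share:.0%} of the increase")  # 98%
```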
The Tax Cuts and Jobs Act of 2017 involved a tax cut on the uninsured, as well as on people purchasing ACA-noncompliant coverage, by setting the individual mandate penalty to zero, effective in the 2019 tax year (131 Stat. 2054).

Table 4-3. IRS Reporting of Individual Mandate Payments, 2014–16

Tax year                                 2014    2015    2016
Returns paying IM penalty (millions)      8.1     6.7     4.0
IM revenue (billions of dollars)         1.69    3.11    2.83
Mean penalty paid (dollars)               210     465     708
Minimum penalty (dollars)                  95     325     695
Exemptions (millions)                    12.4    12.7    10.7

Sources: Internal Revenue Service (IRS); Busch and Houchens (2018); CEA calculations.
Note: IM = individual mandate. The minimum penalty is the minimum statutory penalty per person-year. The uninsured per penalty paid is the uninsured person-years per return paying penalty.

Part of our analysis is the amount of penalty revenue that would have been collected over the next 10 years if the act had not set the penalty to zero. We took the revenue projections from the CBO and noted their consistency with the actual collections for tax year 2016, which was the first year when the ACA put the full penalty in place. In that year, about 4 million Federal tax returns included individual mandate payments, down from 6.7 million for tax year 2015 (table 4-3). The average 2016 penalty paid per household return was $708. The mandate tax penalty is a regressive tax that falls more heavily on relatively low-income people; the majority of those who paid the tax penalty in 2015 were lower- and middle-income consumers with incomes below 400 percent of the Federal poverty line. Analyses of removing the individual mandate penalty have provided a range of estimates of the impact on the number of insured consumers and on gross ACA premiums.
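The mean penalty figures in table 4-3 can be cross-checked against the revenue and return counts in the same table; a small consistency check:

```python
# Consistency check on table 4-3: the mean penalty paid should roughly
# equal individual mandate (IM) revenue divided by the number of returns
# paying the penalty. Values are transcribed from the table.
returns_millions = {2014: 8.1, 2015: 6.7, 2016: 4.0}
revenue_billions = {2014: 1.69, 2015: 3.11, 2016: 2.83}
reported_mean = {2014: 210, 2015: 465, 2016: 708}

# billions / millions = thousands, so multiply by 1,000 to get dollars.
implied_mean = {y: revenue_billions[y] * 1e3 / returns_millions[y]
                for y in returns_millions}

for year in sorted(implied_mean):
    print(year, round(implied_mean[year]), reported_mean[year])  # implied ~ reported
```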
The estimates refer to increases in the full ACA premiums (gross of subsidies), not the out-of-pocket (net) premiums enrollees pay after taking into account the premium subsidies they receive. The CBO (2017) has projected that setting the mandate tax penalty to zero will result in 3 million fewer consumers with ACA-compliant nongroup insurance coverage in 2019, 4 million fewer in 2020, and 5 million fewer each year from 2021 through 2027.17 Because the enrollees who leave ACA-compliant individual coverage are projected to be healthier than those remaining, the CBO has also projected that gross premiums would rise by an average of 10 percent. Nevertheless, the CBO (2017) projects that the 2018–27 budgetary impact of setting the mandate penalty to zero will be to reduce the Federal deficit by $338 billion, which includes a $185 billion reduction in Federal expenditures on ACA premium subsidies. A Commonwealth Fund study analyzed the impact of setting the individual mandate penalty to zero under 10 scenarios (Eibner and Nowak 2018). Each scenario reflected different assumptions about how people respond to financial and nonfinancial factors. In this study's baseline scenario, setting the mandate penalty to zero was estimated to reduce enrollment in the nongroup market by 3.4 million in 2020 and to increase the gross premium for bronze plans on the ACA exchanges by 7 percent. We use the CBO's estimates, which involve a larger change in enrollment (5 million fewer enrollees) and a larger increase in premiums (10 percent) than the Commonwealth Fund study's baseline scenario.

17. The CBO also projects voluntary reductions in Medicaid enrollment and in enrollment in employment-based coverage. The CEA is still studying these effects, which were not included in the analysis.
A Cost-Benefit Analysis of Setting the Individual Mandate Tax Penalty to Zero

Setting the ACA's individual mandate penalty to zero benefits society by allowing people to choose not to have ACA-compliant health coverage without facing a tax penalty, and by saving taxpayers money if fewer consumers purchase subsidized ACA coverage. We estimate that in 2021, when the CBO (2017, 2018) projects that markets will have largely adjusted to the changes, setting the mandate penalty to zero will yield net benefits worth $19 billion, including the excess burdens of taxation. The total net benefit of the reform over the period 2019 through 2029 comes to $204 billion. The benefits grow over time, so the benefits in 2021 are estimated to be lower than the average annual benefits over the 10-year horizon. Without the tax penalty, consumers will likely reduce their ACA-compliant coverage, which refers in this section to coverage purchased on the ACA exchanges and coverage obtained outside the exchanges as long as it complies with the provisions of the ACA. Our analysis recognized that consumers place some value on the ACA-compliant coverage they give up.18 To the extent that these consumers are healthier than average, including them in the insurance pool also benefits others in the pool by reducing the premium needed to cover the pool's average healthcare expenditures. At the same time, society incurs costs to provide health insurance coverage. Providing insurance to those who value it most highly nets large social benefits. Insuring more and more of the population nets progressively smaller social benefits, because it covers enrollees who do not value the coverage as highly. When insuring even more of the population requires providing insurance to enrollees who value the insurance at less than what it costs society, on net the social benefits become negative.
This is captured in figure 4-6 by the downward-sloping net marginal social benefit (MSB) schedule, which shows that as enrollment increases, net social benefits decline and eventually become negative. The MSB schedule is the cumulative distribution of net social benefits; for illustrative purposes only, the MSB schedule in figure 4-6 is linear.19

[Figure 4-6. Benefits of Setting the Individual Mandate Penalty to Zero. The figure plots net social benefit (dollars), with the values –$861 and –$2,083 marked on the downward-sloping marginal social benefit schedule, against the number of enrollees whose coverage is compliant with the Affordable Care Act; area A corresponds to the 5 million affected enrollees.]

Our cost-benefit analysis, summarized by the MSB schedule portrayed in figure 4-6, uses the standard methods of welfare economics. Consumers' decisions about whether to have ACA-compliant coverage reveal the value consumers place on this coverage. The value consumers place on insurance reflects their expected healthcare expenditures and the value they place on reducing their financial risk. Some consumers who choose not to have ACA-compliant coverage might have higher healthcare expenditures than they expected and lack coverage. This would not necessarily mean that these consumers were unwise in their choice of insurance; they were unfortunate. Although the MSB schedule shown in figure 4-6 reflects the value that consumers place on their own health insurance, our analysis took into account all the benefits and costs, including the costs imposed on third parties.

18. In keeping with much of the cost-benefit literature, the CEA used the Kaldor-Hicks criterion, which means that all citizens' benefits and costs are measured in dollars, with all citizens' totals getting the same weight. In accord with this focus on Kaldor-Hicks economic efficiency, our analysis estimated the value of health insurance coverage to the consumers themselves.
First, some consumers who lack insurance coverage and then fall ill or have an accident receive uncompensated care from providers. The providers might bear some or all of the costs of uncompensated care, or they might pass some costs along to third parties, such as privately insured patients, through higher prices. Garthwaite, Gross, and Notowidigdo (2018) analyzed confidential hospital financial data and concluded that, on average, each additional uninsured person costs hospitals about $800 each year. We use this result to estimate the third-party effects of uncompensated care provided to consumers who do not have ACA-compliant coverage. Second, to the extent that the enrollees who leave the market are healthier than average, their health insurance decisions will increase the insurance premiums charged to those who remain in ACA-compliant coverage. The CBO (2017) projects that the zero tax penalty will increase premiums in the nongroup market by about 10 percent. This 10 percent forecast is likely to be too high, because the CBO did not expect the decline in benchmark premiums that occurred from 2018 to 2019. Nevertheless, our analysis used the 10 percent estimate and accounts for the third-party effects on Federal expenditures for premium subsidies and on premiums paid by nonsubsidized enrollees. Most of the enrollees who remain in ACA-compliant coverage receive premium subsidies, which means that the increased premiums will be largely financed by increased Federal subsidy expenditures. A subset of enrollees who do not receive subsidies will pay higher premiums.

19. As noted below, our triangle analysis assumes that the MSB schedule is approximately linear in the portion of the distribution that responds to the removal of the tax penalty. We also assume zero economic profits for insurers, in that premium revenues are exhausted by claims and loads. Loads, in turn, reflect competitive payments to labor and capital employed in the insurance industry.
Our empirical implementation of the MSB schedule incorporates the third-party effects on uncompensated care, on Federal expenditures for premium subsidies, and on premiums paid by nonsubsidized enrollees. We concluded that setting the individual mandate penalty to zero benefits society by reducing inefficient coverage in the market for ACA-compliant health insurance. The ACA premium subsidies are the first source of inefficiency. The premium subsidies make health coverage more affordable for lower- and middle-income consumers; but on net, the subsidies reduce the social benefits from health insurance because they result in many enrollees who value the insurance at less than its cost. Pauly, Leive, and Harrington (2018) also estimated that many uninsured consumers experience financial losses due to ACA coverage.20 The tax penalties that enforced the individual mandate are the second source of inefficiency, and they exacerbate the inefficiency due to the premium subsidies. Setting the individual mandate penalty to zero may reduce some ACA premium subsidy payments and, if it does, will generate a social gain. In cost-benefit analyses, a reduction in subsidy payments is often merely a transfer that leaves social benefits unchanged—the benefits to taxpayers are exactly offset by the costs to the recipients who lose the subsidy. When comparing the ACA with premium subsidies to a hypothetical ACA without subsidies, the ACA premium subsidy is properly treated as a transfer. But the purpose of this analysis is to evaluate the effect of relaxing restrictions on consumer choice, not of changing the ACA premium subsidy rules. The subset of individuals who may have subsidized ACA coverage only because of the mandate penalty is shown in figure 4-6. To illustrate: if (as we calculate below) the average net subsidy in 2021 would be about $2,083 and the average penalty about $861, an individual who voluntarily gives up his or her $2,083 subsidy when the $861 penalty is removed is not harmed by losing the Treasury subsidy. Instead, the individual has received a benefit by no longer being constrained by a penalty, at the same time that taxpayers benefit by no longer having to finance the subsidy. The CEA's application of standard welfare economics to this situation is proper but unfamiliar because of the complicated design of the ACA and its related regulations.21 The CBO (2017, 2018) projected that setting the tax penalty to zero would decrease enrollment in ACA-compliant coverage in 2021 by 5 million enrollees. We estimated that, after accounting for the average premium assistance received and the other third-party effects, each of these 5 million enrollees reduces third-party expenditures by $2,083 (CEA 2019).

20. Some might question the judgment of consumers for whom a large subsidy is not enough by itself to induce them to purchase ACA-compliant insurance. Features of the ACA exchanges—administrative loading fees, price controls, moral hazard, premium subsidies that distort labor markets, and heterogeneous preferences—make it reasonable, and consistent with economic efficiency, for a risk-averse person to remain uninsured when his or her risk is low enough.
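The welfare arithmetic described here and developed just below (a 2016 average penalty of $708 grown at an assumed 4 percent per year, the $2,083 third-party saving, and the 5 million departing enrollees) can be sketched as:

```python
# Sketch of the CEA's 2021 triangle calculation, using inputs from the text:
# the 2016 average penalty grown at the assumed 4 percent annual rate, the
# $2,083 average third-party saving per departing enrollee, and the CBO's
# projected 5 million fewer ACA-compliant enrollees in 2021.
penalty_2016 = 708
penalty_2021 = penalty_2016 * 1.04 ** (2021 - 2016)    # about $861

net_subsidy = 2083             # third-party saving per departing enrollee
deadweight = penalty_2021 / 2  # average height of the deadweight-loss triangle
value_gap = net_subsidy + deadweight                   # about $2,514

enrollees = 5_000_000          # CBO-projected enrollment decline in 2021
social_benefit = enrollees * value_gap                 # roughly $13 billion

print(f"2021 penalty: ${penalty_2021:,.0f}")
print(f"value gap per enrollee: ${value_gap:,.0f}")
print(f"social benefit: ${social_benefit / 1e9:.1f} billion")
```

The triangle's average height is half the penalty because, under the linear MSB assumption, the marginal enrollee induced by the penalty valued coverage anywhere between $0 and $861 less than its cost to them.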
If it had not been set to zero, the average tax penalty would have been $861 in 2021.22 As a result of these two market frictions, we estimated that each of these enrollees valued their coverage at $2,514 less than what it cost society, a figure arrived at by adding the deadweight loss per person induced to take coverage by the penalty to the subsidy amount (CEA 2019).23 In figure 4-6, the social benefits of repealing the mandate are given by the base of area A (5 million) multiplied by its average height, which measures the value gap ($2,514). Aggregated over the 5 million enrollees, setting the individual mandate tax penalty to zero will yield social benefits of about $13 billion in 2021, plus a reduction in the excess burden of taxation worth another $6 billion.24 (See box 4-1 for overviews of two important additional deregulatory healthcare reforms.)

21. Following Goulder and Williams (2003), our analysis accounts for important general equilibrium interactions between the deregulatory reforms and preexisting distortions created by the premium subsidies and labor market taxation. The reduction in subsidy payments is part of the social benefits created by the tax penalty repeal.
22. From table 4-3, the average tax penalty paid in 2016 was $708. We assume that the tax penalty would have grown at an annual rate of 4 percent.
23. The tax penalty averages $861 per enrollee, so the triangular area of deadweight loss per person induced to take compliant coverage equals half of $861, which is about $431. This is added to the $2,083 net subsidy to arrive at an average gap of $2,514.
24. One aspect of the projected benefits of the Administration's deregulatory reforms is that they reduce Federal expenditures on ACA premium subsidies and reduce the deficit. Generally, eliminating taxes and subsidies has welfare effects larger than the government revenues involved, due to the excess burden of such measures.

Box 4-1.
Additional Regulatory Reforms

The Trump Administration published new rules establishing two important deregulatory healthcare reforms that will generate tens of billions of dollars in benefits to Americans over the next 10 years. The deregulatory reforms expand options in health insurance markets within the existing statutory frameworks, including the ACA. These are more fully discussed in chapter 2 of this Report and are briefly described here.

Association health plans. Most uninsured Americans today are nonelderly, employed adults (U.S. Census Bureau 2017). Many work for small businesses or are self-employed in unincorporated businesses, where the uninsured rate has historically been, and remains, high: double the uninsured rate of the general population (Chase and Arensmeyer 2018). The ACA subjected health coverage offered by small businesses to mandated coverage of essential health benefits and to price controls that are not required for large businesses. The June 21, 2018, association health plan rule expands small businesses' ability to group together to form AHPs that offer their employees more affordable health insurance. AHPs can self-insure or purchase large-group insurance, free of the ACA benefit and pricing mandates, thereby lowering premiums and decreasing administrative costs through economies of scale. The AHP rule also broadens plan participation eligibility to sole proprietors without other employees. New AHPs can form by industry or geographic area (e.g., a metropolitan area or state). This rule is still too new for its impact to be certain. The CBO (2018) has projected that after the rule is fully phased in, there will be 4 million additional enrollees in AHPs, including 400,000 people who were previously uninsured. Based on the CBO's projections, we estimate that the AHP rule will cause premiums in the ACA-compliant individual market to increase by slightly more than 1 percent (see chapter 2).
We estimate that, taking into account both the benefits and costs, the AHP rule will yield $7.4 billion in net benefits in 2021, plus an additional reduction in excess burden worth $3.7 billion.

Short-term, limited-duration insurance. In late 2016, shortly before leaving office, the Obama Administration issued a rule that shortened the allowed total duration of short-term, limited-duration insurance contracts from 12 to 3 months, thereby limiting the appeal and utility of these STLDI plans. The 2016 rule was not required by the ACA or other laws. The Trump Administration's August 3, 2018, STLDI rule extends the allowed term length of initial STLDI contracts from 3 to 12 months and allows for the renewal of the initial insurance contract for up to 36 months, which is the same as the maximum coverage term required under COBRA continuation coverage (U.S. Congress 1985). (The 1985 Consolidated Omnibus Budget Reconciliation Act, COBRA, provides for the continuation of employer health coverage that would otherwise be canceled due to job separation or other qualifying events.) Because STLDI plans are not considered to be individual health insurance coverage under the Health Insurance Portability and Accountability Act and the Public Health Service Act, STLDI coverage is exempt from all ACA restrictions on insurance plan design and pricing. This allows STLDI plans to offer a form of alternative coverage for those who do not choose ACA-compliant individual coverage. The STLDI rule requires that STLDI policies provide a notice to consumers that these plans may differ from ACA-compliant plans in the individual market and, among other differences, may have limits on preexisting conditions and health benefits and may have annual or lifetime limits. The STLDI rule is also too new for its impact to be certain.
The CBO (2018) has projected that the STLDI regulatory reform will result in an additional 2 million consumers in STLDI plans by 2023. Based on the CBO's projections, we estimate that the STLDI rule will increase gross premiums in the ACA-compliant individual market by slightly more than 1 percent over the same time frame (see chapter 2). Taking into account both benefits and costs, we estimate that the rule will yield benefits worth $7.3 billion in 2021, plus an additional reduction in excess burden worth $3.7 billion.

Improving Competition to Lower Prescription Drug Prices

High pharmaceutical drug prices are a major concern of many Americans and of the Trump Administration. Part of the problem results from the U.S. system of patent law, in which, in exchange for innovation, inventors are granted exclusive rights to market and distribute their inventions—in this case, drugs—for a period of time during which they can collect monopoly profits. But high prices also stem from Federal statutes and the regulations of the Food and Drug Administration (FDA), which are intended to guarantee safety and efficacy but which create barriers to market entry and hinder price competition. Under the current regulatory regime, researching, developing, and gaining the FDA approval needed to bring a new drug to market can take about a decade and cost an estimated $2.6 billion (DiMasi, Grabowski, and Hansen 2016). The evidence suggests that patients' improved health and the savings resulting from faster FDA regulatory processes and earlier access to drugs exceed the potential associated safety risks (Philipson and Sun 2008; Philipson et al. 2008). The approval and entry of new generic drugs into the market to compete with brand name drugs lowers drug prices. Similarly, the approval and entry of new branded drugs creates competition with other drugs in the same therapeutic class and enhances patients' and their physicians' choices of treatment options.
Under the Trump Administration, the FDA has launched a series of reforms to facilitate the entry of new pharmaceutical drugs while ensuring the efficacy and safety of the drug supply. These reforms are already helping consumers by speeding up generic drug approvals, resulting in savings from new generic entrants totaling $26 billion over the first year and a half of the Administration. Price inflation for prescription drugs has also slowed. Figure 4-7 shows that the price of drugs relative to other goods decreased during the Trump Administration compared with the trend of the previous Administration (dotted line). After 20 months of zero or slightly negative relative inflation, as of August 2018 the relative price of prescription drugs was lower than it was in December 2016. In addition, due to the way price inflation for drugs is measured, the actual reduction in inflation after January 2017 may be larger.25 As of August 2018, the slower price inflation for prescription drugs under President Trump implies annual savings of $20.1 billion.26 Even if the relative price inflation of prescription drugs were to return to the higher trend that prevailed before this Administration, the 2017–18 level effect would yield savings of $170 to $191 billion over 10 years.27 Data from the Bureau of Labor Statistics through the end of 2018 show that, for the first time in 46 years, the Consumer Price Index for prescription drugs fell in nominal terms—and even more in real terms—during a calendar year.28 This section first discusses how the approval and market entry of new drugs leads to price competition and lower prices. It then outlines the Administration's FDA reforms to safely speed drug approvals. It subsequently presents our estimates of the value generated by faster generic drug market entry. Finally, it discusses the value of the increased entry of new, innovative drugs.
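The $170-to-$191 billion range for the 10-year value of the 2017–18 level effect follows from discounting the $20.1 billion annual saving at the stated real-rate bounds of 0.9 and 3.2 percent; a minimal sketch:

```python
# Reproduces the $170-to-$191 billion 10-year range for the 2017-18 level
# effect: a $20.1 billion annual saving discounted at real rates between
# 0.9 and 3.2 percent (the bounds cited in the text).
annual_saving = 20.1  # billions of dollars per year

def present_value(rate, years=10):
    """Present value of a level annual saving over `years` at discount `rate`."""
    return sum(annual_saving / (1 + rate) ** t for t in range(1, years + 1))

low = present_value(0.032)   # higher discount rate -> smaller present value
high = present_value(0.009)  # lower discount rate -> larger present value
print(f"${low:.0f} billion to ${high:.0f} billion")  # $170 billion to $191 billion
```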
25 Two factors contribute to this. First, the Bureau of Labor Statistics has a six-month lag for incorporating generics, so any generic entry since March 2018 is not included. Second, in 2016 the bureau changed its index from geometric to Laspeyres, and the latter has higher inflation.

26 This was calculated by multiplying actual nominal personal consumption expenditures on prescription drugs (at a seasonally adjusted annual rate) in August 2018 by the percentage difference between the actual three-month, centered moving average relative price ratio in August 2018 and that projected by the linear trend estimated over January 2013 through December 2016.

27 This depends on a real discount rate between 0.9 and 3.2 percent. The lower bound is implied by the rate on 20-year Treasury Inflation-Protected Securities, and the upper bound by Shiller's cyclically adjusted earnings-to-price ratio for the Standard & Poor's 500.

28 The Consumer Price Index for prescription drugs is the primary series used by the Bureau of Economic Analysis to construct the Personal Consumption Expenditures price index for prescription drugs that appears in figure 4-7.

[Figure 4-7. Price of Prescription Drugs Relative to PCE, 2013–18. Sources: Bureau of Economic Analysis; CEA calculations. Note: PCE = Personal Consumption Expenditures Price Index. The relative price ratio of prescription drugs is computed as the index for prescription drug prices relative to the index of overall consumption prices, as measured in the National Income and Product Accounts for the PCE. Data represent a centered three-month moving average. The trend is calculated from 2013 to 2017.]

Lowering Prices through Competition

Brand name drugs can command high prices because the drugmaker's exclusive sales right confers market power over prices. Once the brand name drug's patent expires, however, generic versions of the drug can enter the market, and the resulting competition drives down market prices and leads to substantial savings for patients and the healthcare system. Roughly 9 out of every 10 prescriptions in the United States are for generic drugs; but because they are so much cheaper than their brand name counterparts, they constitute only about 23 percent of prescription drug spending (AAM 2018), reflecting the enormous savings made available to consumers.

Generic drugs. Substantial evidence shows that pharmaceutical drug prices fall dramatically when a generic drug enters the market, offering great savings to consumers (Aitken et al. 2013; Berndt, McGuire, and Newhouse 2011; Caves et al. 1991). Prices continue to decline substantially as the number of generic competitors increases. One analysis of the effect of generic entry on drug prices in the 1980s found that generic drug prices were 70 percent of brand name drug prices after the first generic entrant, 50 percent of the brand name price when four generic drugs were on the market, and 30 percent of the brand name price with 12 generic drugs (Frank and Salkever 1997). A more recent analysis using data from 2005 to 2009 found price reductions following a similar pattern (Berndt and Aitken 2011). Other analyses have confirmed this general finding. The estimates shown in figure 4-8 illustrate prices declining substantially as the number of generic market competitors increases. (For further discussion, see HHS 2010.) The brand name drug market share, in addition to prices, falls dramatically with generic competition.

[Figure 4-8. Generic Drug Price Relative to Brand Name Price, 1999–2004. Average relative price per dose (percent) by number of generic manufacturers. Sources: Food and Drug Administration; IMS Health.]

Brand name drugs. Market entry of new branded drugs can also reduce the prices of other branded drugs through increased price competition. In many cases, a particular condition is treatable with several different brand name drugs, which are partial but not perfect substitutes for one another and are known as a therapeutic class or category (FDA 2018c). Some of these drugs will have similar pharmacologic modes of action. Others will have different mechanisms of action but will also be effective for the same condition. When the market evolves from a monopoly with one unique brand name product to a new stage of therapeutic competition, or oligopoly, market pricing will improve with one or more brand name competitors. Though these brand name products are not perfect substitutes for one another the way generics are, the evidence suggests that therapeutic competition between brand name drugs affects innovative drugmakers' returns at least as much as competition from generic entry (Lichtenberg and Philipson 2002). New drugs often enter the market at lower prices than the dominant existing drug in a particular therapeutic class, putting pressure on the dominant drug to lower prices to maintain market share (DiMasi 2000; Lee 2004).

Although the literature is limited on the systematic effect that therapeutic competition has on prices, there are numerous therapeutic classes in which new brand name drugs have led to vigorous price competition. A recent notable example was the introduction of new, highly effective treatments for the liver infection hepatitis C. A major breakthrough brand name drug was approved for sale in the United States in 2013. Unlike previously available therapies, it essentially offered a cure for many hepatitis C patients, albeit at an $84,000 price for a course of treatment.
Within a few years, competing drugs from multiple companies came to market and drove down prices (Toich 2017).29 The most recently approved drug covers all six genotypes of the hepatitis C virus, which not all previous drugs did; has a shorter course of treatment; and had a list price of $26,400 for a course of treatment (Andrews 2017), less than the discounted prices of the earlier drugs. It quickly outpaced other hepatitis C drugs and has captured a 50 percent market share (Pagliarulo 2018).

Another example of price competition within a therapeutic class is the case of the cholesterol-lowering drugs known as statins. The first statin was introduced in 1987. Since then, multiple new statins with higher potency and fewer side effects have come to market. Each new introduction has led to price competition, with the new drugs often priced at a discount relative to the old ones (Alpert 2004). Prices have tumbled as these drugs have gone off patent and cheaper generic competition has entered the market (Aitken, Berndt, and Cutler 2008).

The Administration's Efforts to Enhance Generic and Innovator Competition

The Administration's deregulatory agenda includes streamlining the FDA's review process to facilitate price competition by reducing market entry barriers while securing a supply of safe and effective drugs. This includes prioritizing the approval of more generic drugs (FDA 2018b). In August 2017, the President signed into law the Food and Drug Administration Reauthorization Act, a five-year reauthorization of the Generic Drug User Fee Amendments, which empower the FDA to collect user fees for generic drug applications and to process applications in a timely manner. Last year, the FDA announced the Drug Competition Action Plan to expand access to safe and effective generic drugs.
This plan's efforts focus on three key priorities to encourage generic drug competition: (1) preventing branded companies from keeping generics out of the market, (2) mitigating scientific and regulatory obstacles to approval, and (3) streamlining the generic review process. The FDA has already released guidance for companies and FDA staff members that outlines specific steps to reduce the number of review cycles and shorten the approval process.

These reforms have successfully increased the number of generics approved and have slowed drug price growth. In fiscal year (FY) 2018, the FDA issued a record 971 generic drug approvals and tentative approvals—exceeding the 937 in FY 2017 and the 835 in FY 2016 (FDA 2016, 2017, 2018a). The FDA approves generics based on a determination that they are bioequivalent to an approved innovator drug for which exclusive sales rights have expired. Generic drug entry is quicker to respond to regulatory changes than brand name drug entry, which involves a longer process for review and development.

Figure 4-9 shows the 12-month moving average number of generic drug final and tentative approvals starting in January 2013. The dotted blue line represents an estimated time trend from January 2013 through December 2016, projected through August 2018, the most recent observation available. Since December 2016, the number of generic drug approvals has outpaced the trend.

29 For a brief discussion of recent price competition in this market, see Walker (2018).

[Figure 4-9. New Generic Drug Applications Approved, 2013–18. Sources: Food and Drug Administration; CEA calculations. Note: The data include final generic drug approvals and represent a 12-month moving average. Data preceding October 2013 are a truncated moving average, with data beginning in October 2012. The trend is calculated from 2013 to 2017.]
We found that 17 percent more generic drugs were approved each month during the first 20 months of the Trump Administration (a monthly average of 81) than during the previous 20-month period (a monthly average of 69). This increase in approvals occurred despite the fact that the number of brand name drug patent expirations—necessary precursors for generic entry—declined during this period.

The FDA's 2018 Strategic Policy Roadmap addresses the entire spectrum of FDA-regulated pharmaceutical products—from small molecules to complex products and biologics—given each of their critical roles in advancing the health of patients (FDA 2018b). The roadmap includes the launch of the Medical Innovation Access Plan, the Drug Competition Action Plan, the Biosimilars Action Plan, and the Advanced Manufacturing Strategy Roadmap. These plans are designed to:

1. Modernize the FDA's programs and increase administrative efficiencies for reviewing applications for brand name and generic products.

2. Provide product- and technology-specific guidance to increase regulatory and scientific clarity for sponsors to ensure efficient product development programs.

3. Reduce anticompetitive behavior by firms attempting to game FDA regulations or statutory authorities to delay competition from generic or biosimilar products.

The increase in new drug approvals has been as impressive as the improvement in generics. In the first 20 months of the Trump Administration, there were 11 drug approvals per month, on average, compared with 8.5 drug approvals per month during the preceding 20-month period. A new, brand name drug can be marketed only after its New Drug Application (NDA) has been approved; for biologic drug products, the corresponding approval is a Biologics License Application (BLA).
Figure 4-10 shows the number of approved NDAs and BLAs since January 2013, reported as a 12-month moving average to smooth intermonth volatility. Notably, the 12-month average line shows a substantial and sustained rise in approvals starting in about January 2017. These new approvals reflect the emergence of many valuable new drug therapies that will add to competitive market pressures on prices for existing drugs and bring new benefits to patients. For the sample period from January 2013 through December 2016, we estimated a linear time trend for the 12-month moving-average sum of NDAs and BLAs approved. We then projected this trend through December 2017, the most recent observation available. As reported in figure 4-10, after falling below the trend in 2016, actual approvals climbed above the trend in 2017, and by the end of 2017 they were 15 percent above the trend projection.30 It is noteworthy that the approval rate began to rise rapidly a few months into the Trump Administration.

Although the FDA approves a wide array of biological products and new drugs, only some are novel, innovative products being introduced into clinical practice for the first time. Novel drugs are classified either as new molecular entities (NMEs), active molecules with no prior FDA approval, or as novel biologics. These new entities are the most meaningful NDAs and BLAs approved because they provide previously unavailable options to patients seeking therapies. Approvals of NMEs and novel biologics, meanwhile, more than doubled in 2017–18 relative to 2015–16. In 2015 and 2016, NME and novel biologic approvals averaged just 1.8 per month. From January 2017 through October 2018, approvals averaged 4.1 per month, with 9 approved in August 2018 alone.
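The trend-break significance test used for figure 4-10 (regressing approvals on a linear time trend fully interacted with a post–December 2016 indicator, as note 30 describes) can be sketched on synthetic data. Everything below is illustrative; it is not the FDA approvals series:

```python
import numpy as np

# Synthetic 12-month-moving-average approvals series with a trend break at
# month 48 (standing in for December 2016); all numbers are illustrative.
rng = np.random.default_rng(0)
t = np.arange(72, dtype=float)
post = (t >= 48).astype(float)
y = 7.0 + 0.01 * t + 0.05 * (t - 48) * post + rng.normal(0.0, 0.1, 72)

# Design matrix for a linear trend fully interacted with the break dummy:
# columns are [intercept, t, post, t * post].
X = np.column_stack([np.ones_like(t), t, post, t * post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
slope_change = beta[3]  # positive if the trend steepened after the break
```

In the report's application, a one-sided test on the interaction coefficient rejects the null of no trend break at the 0.01 level; here the positive `slope_change` simply recovers the break built into the synthetic series.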
Given the lengthy clinical development process for new drugs, these trends do not solely reflect the actions of the Administration, but they are nevertheless influenced by this Administration's emphasis on accelerating the NDA and BLA processes.

30 To test whether this outperformance of the trend was statistically significant, we regressed NDAs and BLAs approved on a linear time trend fully interacted with a post–December 2016 binary variable. The estimated coefficient on the interaction term was positive and significant at the 0.01 level, meaning that we can reject the null hypothesis of no trend break with 99 percent confidence.

[Figure 4-10. New Drug Applications and Biologics License Applications Approved, 2013–17. Sources: Food and Drug Administration; CEA calculations. Note: The data represent a 12-month moving average, and data preceding July 2013 are a truncated moving average, with data beginning in July 2012. The trend is calculated from 2013 to 2017.]

Estimated Reductions in Pharmaceutical Drug Costs from Generic Drug Entry

The effects of increased competition through patent expirations and generic drug entry reflect not just a fall in market prices but also a drop in the overall quantity consumed, because brand name drug manufacturers often stop advertising their product, which reduces overall demand for the chemical entity (Lakdawalla and Philipson 2012). Therefore, the change in consumer welfare resulting from a patent expiration involves not just a movement downward along a demand curve but also an inward shift in the demand curve. The analysis that follows, which focuses on savings alone, therefore represents a lower bound on the value of generic entry. We estimated the savings made available to consumers from generic drugs entering the market from January 2017 through June 2018 (CEA 2018a).
The analysis represents an update of a similar analysis published by the FDA (Conrad et al. 2018). We found that generic drug approvals generated savings of about $26 billion through July 2018.31

31 The data on generic drug approvals represent the period from January 2017 through June 2018; these are the most recent approvals data available. Estimates of savings from this set of generic entrants represent sales through July 2018, based on the most recent available sales data.

[Figure 4-11. Price Decline Due to Generic Drug Entry. The figure shows a demand curve with the choke price P*, the preentry price and quantity (P_Before, Q_Before), and the postentry price and quantity (P_After, Q_After).]

The baseline price before entry (P_Before) used in this analysis is determined for each compound by aggregating sales across all drug products with the same active ingredient and dosage form for up to six months before the 2017 approval of abbreviated new drug applications, and dividing by the quantity of all drug products with the same active ingredient and dosage form that were sold (Q_Before). In some cases, a generic entrant is the first to compete with its brand name counterpart; in others, a generic entrant follows one or more other generic entrants. The determination of baseline prices addresses this as follows: When a brand name drug is facing its first generic entrant, the baseline price is determined using solely the brand name drug's sales; when a brand name drug already faces one or more generic competitors, the baseline price reflects both brand name and generic sales, weighted accordingly. The market price following entry of the generic drug (P_After) is estimated by dividing the aggregate sales volume in the market by the aggregate quantity sold, per month. Monthly savings from generic entry are then estimated for the period as

Monthly savings = (P_Before − P_After) × Q_Before

Total savings are the sum of all monthly savings estimates. Figure 4-11 shows the consumer benefit from the lower prices enabled by generic entry.
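The monthly savings calculation above can be sketched as follows; the function name and the example prices and quantities are hypothetical, not drawn from the IQVIA sales data the report uses:

```python
def monthly_savings(p_before, p_after, q_before):
    """Savings from generic entry, evaluated at the preentry quantity:
    (P_Before - P_After) * Q_Before.  Holding quantity at its preentry
    level avoids crediting the inward demand shift that follows entry."""
    return (p_before - p_after) * q_before

# Hypothetical compound: the average price per dose falls from $3.00 to
# $1.20 after generic entry, with 1 million doses sold monthly before entry.
saving = monthly_savings(3.00, 1.20, 1_000_000)  # roughly $1.8 million

# Total savings sum the monthly estimates across compounds and months.
total = saving + monthly_savings(2.50, 2.00, 400_000)
```

In the report's analysis this sum, taken over all generic entrants from January 2017 through June 2018, yields the $26 billion figure.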
Note that the savings estimate does not reflect the full trapezoid shown in figure 4-11. This is because the onset of generic competition, as mentioned above, is often accompanied by a cessation of marketing by the innovator drugmaker, which causes the demand curve to shift inward. We therefore limit the estimated savings to the preentry quantities observed. We estimate that the total savings from the generic drugs that entered the market from January 2017 through June 2018 were $26 billion, in January 2018 dollars. We expect consumers to benefit further from lower drug prices in the years to come as more generic drugs are approved for sale and price competition becomes even more robust.

Estimates of the Value of Price Reductions from New Drugs

For new, innovator pharmaceutical drugs, high initial market prices give a misleading picture that overstates price growth. This is because before a new drug enters the market, it is unavailable at any price, making such a drug equivalent to one with a price so high that there is no demand for it. Economists generally interpret innovations as price reductions from the price at which the product would not sell at all to its observed price when marketed. For instance, before the development of drugs to treat HIV in the mid-1990s, the price of a longer life for an HIV-positive individual was inaccessibly high—it could not be bought at any price anywhere in the world. But once new HIV drugs were approved, the price of a longer and healthier life for HIV-positive individuals decreased dramatically, falling from prohibitively expensive to the finite market price of the new, brand name, patented drugs. Prices fell further when these brand name drugs faced therapeutic competitors, and further still when the brand name drugs lost their sales exclusivity and faced generic competition.
Using the appropriate empirical methodology to measure such price declines for new drugs marketed since January 2017, we find that they have generated annualized gains to consumers of $43 billion in 2018, though lower-bound estimates of the price elasticity of demand for brand name drugs suggest that the gains could be much larger.

This way of conceptualizing the initial price change of a newly approved innovation is illustrated in figure 4-12. The price P* is the prohibitively high price at which there is zero demand for the drug because it is too expensive. If no one is buying the drug, however, this is equivalent to its not yet having been discovered; in both cases, no one uses it. An innovation can be interpreted as simply reducing the price from this high level to the price at which it is marketed, P_Brand in the figure, resulting in quantity Q_Brand of drugs being bought. Therefore, the value of the new innovation to patients is simply the consumer surplus generated when the price is lowered from P* to P_Brand, indicated by the shaded area in figure 4-12.

[Figure 4-12. Price Reductions from Brand Name Entry. The figure shows a demand curve with the choke price P* and the brand name entry price and quantity (P_Brand, Q_Brand); the shaded consumer surplus lies under the demand curve between P* and P_Brand.]

We used two methods to calculate this consumer surplus. The first applied empirical estimates of the producer surplus (profits) as a share of the social surplus arising from new NDA and BLA drugs approved since January 2017. Grabowski and others (2012), Goldman and others (2010), Jena and Philipson (2008), and Philipson and Jena (2006) estimated that the producer surplus is generally between 5 and 25 percent of the social surplus, with Jena and Philipson (2008) observing a median level of 15 percent, which implies that the consumer surplus is about 5.7 times the producer surplus. We applied these estimates to 2018 revenue data for the new NDAs and BLAs that were approved by netting out the variable costs of production from sales.
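A rough sketch of this first method (the function name and the $10 billion example revenue are ours; the 15 percent producer-surplus share and the 16 percent variable-cost share are the parameter values the report uses):

```python
def consumer_surplus_from_ps_share(revenue, variable_cost_share=0.16,
                                   ps_share=0.15):
    """Method 1: infer consumer surplus from revenue.  Producer surplus is
    revenue net of variable costs; if it equals 15 percent of the social
    surplus, consumer surplus is (1 - 0.15) / 0.15, or about 5.7 times,
    the producer surplus."""
    producer_surplus = revenue * (1 - variable_cost_share)
    return producer_surplus * (1 - ps_share) / ps_share

# Hypothetical example: $10 billion in sales of newly approved drugs
cs = consumer_surplus_from_ps_share(10.0)  # about $47.6 billion
```

The 5.7 multiplier in the report follows directly from the 15 percent median share, since (1 − 0.15)/0.15 ≈ 5.7.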
These variable costs were assumed to be 16 percent of sales for brand name drugs, based on estimated differences in drug prices before and after patent expiration (Caves et al. 1991; Grabowski and Vernon 1992; Berndt and Aitken 2011; CEA 2018a).

The second approach used price and quantity data, along with empirical estimates of the price elasticity of demand for pharmaceutical drug products, to generate a demand schedule and to calculate the consumer surplus that arises from lowering the price from P* to P_Brand, as shown in figure 4-12—in other words, calculating the shaded area of the figure as the integral of the demand curve above P_Brand from Q = 0 to Q = Q_Brand. Across 150 common drugs, Einav, Finkelstein, and Polyakova (2018) estimated an average elasticity of demand of –0.24; and across 100 common therapeutic classes, they estimated an average elasticity of –0.15. Goldman and others (2006, 2010), meanwhile, estimated elasticities of between –0.01 and –0.21.

For price and quantity in both methods, we used IQVIA National Sales Perspectives data on pharmacy and hospital acquisition costs, based on invoice prices, for new molecular entities and novel biologics approved from January 2017 through July 2018. We then averaged the estimated consumer surplus gain—calculated, first, assuming the median estimate of the producer appropriation from Jena and Philipson (2008); and, second, assuming the mean elasticity of demand for common therapeutic classes of –0.15 from Einav, Finkelstein, and Polyakova (2018).32 Averaging the results of the two approaches indicates that the price reductions induced by the new drugs approved after January 2017 increased the total consumer surplus in 2018 by $43 billion.

Conclusion

The U.S. economy generally relies on free markets to maximize benefits for U.S. citizens. The hallmarks of any free market are consumer choice and competition.
Although some have claimed that healthcare is an exceptional case that cannot be produced and allocated through the market, we argue that these claims are exaggerated and that the costs of market failure are often lower than the costs of government failure. Deviations from perfect market conditions are present in healthcare and many other markets, but promoting choice and competition is the appropriate way to maximize efficiency and consumer welfare. The recent push in Congress to enact a highly restrictive "Medicare for All" proposal would have the opposite effect—it would decrease competition and choice. The CEA's analysis finds that, if enacted, this legislation would reduce longevity and health in the United States, decrease long-run global health by reducing medical innovation, and adversely affect the U.S. economy through the tax burden involved.

The Trump Administration has instead concentrated on deregulatory reforms that will increase choice and competition in the health insurance markets and pharmaceutical drug markets. Bringing the ACA's individual mandate penalty down to zero will allow consumers to choose how much health insurance they desire. Expanding the availability of association health plans and the duration and renewability of short-term, limited-duration health plans will increase consumers' options and spur competition. Finally, the FDA's initiatives to speed drug approvals have already had tangible benefits in record numbers of drug approvals and increased pharmaceutical competition. All these reforms are expected to bring down prices, encourage continuing innovation, and maximize consumer welfare.

32 Because the Goldman et al. (2006) upper-bound estimated elasticity of –0.01 generates implausibly large consumer surplus gains when applied to all newly approved drugs, for the second method we assume an upper bound of –0.15.
Chapter 5

Unleashing the Power of American Energy

Taking advantage of America's abundant energy resources is a key tenet of the Trump Administration's plan to increase long-term economic growth and national security. This is best achieved by recognizing how prices and technological change underpin growth in the production of renewable and nonrenewable energy sources. By promoting domestic energy production and expanding U.S. energy exports, the Administration seeks to improve the relationship the U.S. economy has historically had with global energy markets.

Since the President took office, the U.S. fossil fuels sector has set production records, led by all-time highs in both oil and natural gas. Measured by energy content, fossil fuel production is at this apex thanks in part to petroleum's high energy content. The surge in petroleum production is a surprise, and is attributable to a confluence of technological improvements and relatively high prices. Natural gas production has also continued to increase, following a long-running trend. Coal production stabilized in 2017 and 2018, after a period of contraction in 2015 and 2016.

Increased production allows the United States to alter historic trade patterns by decreasing its net imports. The United States is now a net exporter of natural gas for the first time in 60 years, and petroleum exports are increasing at a pace such that the United States is projected to be a net exporter of energy by 2020. Reducing its net import position for energy products helps the United States by making its economy less sensitive to the price swings that have disrupted it in the past. Greater economic resilience at home is coupled with greater diplomatic influence and flexibility abroad as U.S. prominence in global energy markets grows.

Technological and regulatory changes are forcing the U.S. energy system to further adapt.
This is especially true for the electricity sector, which is adapting to the changing slate of generation assets and to economic pressures from restructured wholesale markets. Recognizing and embracing the innovations that have helped spur these changes in the U.S. energy system, and ensuring that distorting policies do not interfere, can help all Americans and people around the world—which is why the Administration is focusing on policies supporting these priorities.

Leveraging American energy abundance is a central tenet of the President's economic vision. This is best achieved by recognizing how prices and technological change underpin growth in the production of renewable and nonrenewable energy sources. In 2018, this sector of the economy yielded historic results. U.S. fossil fuels production is booming, led by all-time highs in oil and natural gas. This increase in production has helped support economic growth and allowed the United States to change historic trade patterns. Yet technological and regulatory changes are forcing the energy system to further adapt. Recognizing and embracing the innovations that have helped spur these changes in the U.S. energy system, and ensuring that distorting policies do not interfere, can help all Americans and people around the world. The Administration focuses on policies supporting these priorities.

Although proposals for a policy of energy independence have a history in the United States dating back to at least 1973, the Trump Administration's energy policy goes further by emphasizing two elements. The first is to maximize the value of U.S. production at market-determined prices. Fossil fuels, which provide 80 percent of the Nation's energy needs, loom large in this regard. Energy is useful insofar as it can ultimately provide the power, light, and work that are important economic inputs for the production of goods and services that benefit Americans. These inputs can be generated in a number of ways.
For example, electricity can provide light, and electricity can be generated from a variety of sources—by burning fossil fuels like coal and natural gas, by using renewable methods like wind and solar generation, or by other means like nuclear generation. The United States also has extensive energy resources—fossil fuel reserves; renewable resources like hydroelectric, solar, and wind; and perhaps most valuable of all, world-class engineering and research complexes that constantly innovate and improve the efficiency of both the U.S. and global energy systems. The Administration's policy of fostering maximum production embraces all these sources, with their diverse characteristics and economic applications.

The various sectors of the U.S. economy rely on different forms and sources of energy; for example, in 2017, the U.S. economy relied on petroleum for 92 percent of its transportation needs, using 72 percent of all petroleum consumed domestically. Other countries around the world satisfy their energy needs with different mixes than the United States. Because countries have different endowments of energy resources, and different energy policies, the varying demands and supplies of energy provide the opportunity for trade in power and fuels. The importance of energy trade is underscored by the prominence of a single commodity—crude oil—which in recent years has accounted for an average of over 6 percent of global trade value (United Nations 2018). The United States can use its increased energy production to take a greater role in global energy markets, particularly those for fossil fuels. Reducing its net import position for energy products helps the United States by making its economy less sensitive to the price swings that have disrupted it in the past. Greater economic resilience at home is coupled with greater diplomatic influence and flexibility abroad as the United States' prominence in global energy markets grows.
Finally, more global competition in energy supply may moderate global prices and price volatility.

This chapter outlines the key economic contours of the Trump Administration's energy agenda. The first section documents and contextualizes recent developments in U.S. fossil fuels production. The second section considers the United States' ability to engage with global energy markets through increased trade. And the third section examines specific policy issues that remain and pose challenges for the future.

U.S. Fuel Production Reached Record Levels in 2018

The United States is fortunate to have many useful energy resources—oil, natural gas, coal, solar, wind, geothermal, and more. American success in promoting fuels production is broad-based, as overall fossil fuel production has increased. In 2018, U.S. fossil fuel production set an all-time record for total energy content, as shown in figure 5-1. This record continues the recent trend—which was interrupted only by a dip in 2016, when lower prices failed to support oil and natural gas production enough to offset falling coal production. Since then, the growth in petroleum production has more than made up for lower coal production, relying on the greater energy density of crude oil to make up the difference.

U.S. Oil Production Is at an All-Time High

Reports of the demise of U.S. oil production (Bentley 2002; Hirsch, Bezdek, and Wendling 2005; EIA 2006) appear to have been premature. Thanks to a confluence of technological proficiency in available geology and world price patterns, in 2018 U.S. oil production reached an all-time high. In November 2017, U.S. oil production surpassed a monthly production record set in 1970, with oil production reaching a monthly average of 10.1 million barrels per day (MMbpd). This trend continued into 2018, as the monthly average production for the year's first three quarters was 10.7 MMbpd. Resurgent U.S.
production relies on unconventional resources once deemed too diffuse and costly to exploit. However, advanced seismography, hydraulic fracturing, directional drilling, and related technologies have changed this situation by effectively lowering the cost of accessing oil and gas trapped underground. The technical innovations pioneered and perfected in the United States (Zuckerman 2014; Gold 2014) are now paying dividends in the form of increasing production. The dividends have been paid quickly, with U.S. production increasing by 6 MMbpd in eight years—the largest increase of any country in history.

Technology that was pioneered in parts of Texas and in the western States is now applied across the country, boosting production everywhere from the historically productive Permian Basin in Texas and New Mexico to new provinces like the Eagle Ford Shale in Texas and the Bakken Shale in North Dakota and Montana. Production in Texas increased by 11.1 percent from December 2017 levels through the first half of 2018, while the monthly average production through October 2018 was 291 percent higher than annual production in the state 10 years ago (figure 5-2). This increase more than offsets declining production in other important regions, including Alaska and the shallow-water areas of the Gulf of Mexico.

Crude oil prices in 2018 exhibited three general characteristics. First, from the perspective of U.S. producers, price levels remained higher, on average, than in the previous three years. Second, price volatility was modest compared with the period 2014–16.1 Together, these high and stable prices provided a strong incentive for producers. And third, the price discount for the main landlocked U.S. benchmark, West Texas Intermediate (WTI) crude oil—relative to the nearest waterborne benchmark, Brent crude oil—has increased to its highest level since 2013.
Although both grades command relatively high prices due to their attractive refining properties, the differential between these two close substitutes indicates that the U.S. market is separated from the global market. Many market observers take this as evidence of infrastructure constraints that require U.S. production to incur somewhat higher transportation costs, eroding its value at inland pricing points. McRae (2017) documents how price basis differentials stemming from pipeline bottlenecks represent a transfer from producers to refiners and shippers, but are not transmitted to consumer prices. This is consistent with earlier work by Borenstein and Kellogg (2014), who found that the marginal barrel of gasoline is priced to Brent, leaving the consumer unaffected by a Brent–WTI basis differential.

[1] Although less volatile than the preceding years, this period’s volatility remains higher than that of many historical periods and may be an important concern for producers (McNally 2017).

In addition to setting a domestic production record, in 2018 the United States became the world’s leading producer of crude oil after years of leading the world in combined oil and natural gas production.[2] Figure 5-3 shows the recent increase that has returned the United States to global leadership after 43 years. The production comes from a different resource base than conventional deposits in Russia and Saudi Arabia, because U.S. production, and especially production growth, relies on unconventional resources that were once considered subeconomic. However, a combination of technological innovation, market incentives, and millions of private mineral owners willing to take risks with new techniques has helped the U.S. oil and gas sector launch a new era of production. U.S. production now largely comes from geological formations like low-permeability sandstones and shales that are not developed in most other countries. An added benefit is that much of the production is
lighter and lower-sulfur grades of crude oil that command a price premium and give refiners considerable flexibility in processing, because they are less costly to refine than heavier grades.

[2] Oil and gas producers bring a cocktail of hydrocarbons to the surface, including crude oil, lease condensate, natural gas, and natural gas liquids. The exact proportions vary across different geologies. After they are brought to the surface together, the products are separated and sold through different channels for different uses.

The economic implications of the technological innovations that have facilitated these changes have been a long time coming (CEA 2006, 2012, 2013, 2015, 2016a, 2017). So why was 2018 the year to break production records? In the not-too-distant past, it seemed that increased U.S. production required high prices, further reducing U.S. influence in the global marketplace. Technological innovations have increased both economically feasible and technically recoverable reserves. Innovations in directional drilling and hydraulic fracturing helped lower the breakeven costs of shale oil, while improved deepwater extraction efficiency has increased interest in offshore drilling as well. The assumption was that all these methods required fairly high breakeven prices. The threat posed by unconventional U.S. production to other global producers compelled the Organization of the Petroleum Exporting Countries (OPEC) to allow prices to fall in late 2014, in an effort to protect global market share and long-run revenues. This strategy of defending market share against new entrants is historically well known to OPEC members, and it may or may not deliver higher revenues (Adelman 1996). Although the subsequent price drop was traumatic for U.S. producers, the ultimate result was that the marginal cost of unconventional production fell, making U.S.
oil more competitive in the global marketplace (Kleinberg et al. 2016). The combination of relatively high and stable prices, accumulated cost-reducing technological improvements, and the massive endowment of unconventional resources has allowed production to expand rapidly.

Some observers have taken America’s world-leading production and decreased net imports as evidence that the United States has greater influence in the global oil market, but the empirical evidence suggests more work is needed to achieve this goal.[3] The responsiveness of onshore oil production to price shocks remains limited inside the continental United States. Estimates by Newell and Prest (2017) indicate that although the response of U.S. supply to price changes is larger than before the dawn of shale oil, the United States remains slower to react than a traditional “swing producer” (i.e., a producer that can bring additional capacity online quickly in response to demand), such as Saudi Arabia. Newell and Prest (2017) also find that the U.S. response takes several months to come online, which is substantially less timely than the 30 to 90 days associated with typical swing production. So although the United States now enjoys more production, and more responsive production, than it has historically, it has not yet reached a point that would provide it with the market power associated with being a global swing producer. The United States’ lack of spare capacity implies that other countries, notably the members of OPEC, hold the key to modulating prices by being able and willing to adjust production.

[3] During the week ending November 30, 2018, the United States had negative net imports of petroleum for the first time since at least 1973 (data from the U.S. Energy Information Administration).
As OPEC settled into a regime of production cuts that helped support prices in 2017 and 2018, geopolitical uncertainty in key oil-producing countries also boosted prices and helped bolster U.S. production (see box 5-1). Compared with the production levels of OPEC members in 2016, supply reductions in Venezuela and other countries subtracted an average of 492,000 barrels per day from OPEC’s production between January 2017 and August 2018. Cuts by Venezuela accounted for 75.2 percent of gross output reductions by OPEC’s producers over that period.

The unexpected resurgence of U.S. production over the past decade provides evidence that is hard to square with the central predictions of popular models of resource scarcity. A prominent example is the “peak oil” literature, which uses the physical limit on the endowment of oil to predict a date of maximum extraction, after which production monotonically declines.[4] Growing reliance on petroleum as a fuel has been matched by episodic concerns about its continued availability. A monotonic production decline is viewed as problematic for an economy that previously had consumed increasing amounts of oil. The paper by Hubbert (1956) was the original technical contribution to the peak oil literature, which later blossomed into a broader following (Deffeyes 2001, 2006). Hubbert’s central insight was that there is a finite amount of oil to be found, and the pace of discoveries could not accelerate indefinitely, as it had for the preceding decades. Hubbert established an initial estimate for total U.S. oil reserves of 200 billion to 250 billion barrels. Conditional on U.S. oil reserves of 200 billion barrels and the historical trajectory of discoveries and extraction, Hubbert predicted peak production in 1970, with a subsequent decline.
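Hubbert’s approach, a logistic curve for cumulative extraction whose derivative gives a single-peaked production path, can be sketched in a few lines. This is a minimal illustration, not Hubbert’s original fit: the 200-billion-barrel ultimate recovery matches his lower reserve estimate, while the 1970 peak year and the steepness parameter k are values assumed here purely for illustration.

```python
import math

def hubbert_rate(t, urr, k, t_peak):
    """Production rate implied by logistic cumulative extraction.

    Cumulative output Q(t) = urr / (1 + exp(-k * (t - t_peak))) rises
    toward the ultimately recoverable resource (urr); its derivative,
    returned here, is single-peaked and symmetric, with maximum
    urr * k / 4 reached at t_peak.
    """
    e = math.exp(-k * (t - t_peak))
    return urr * k * e / (1.0 + e) ** 2

# Illustrative (assumed) parameters: 200 billion barrels ultimately
# recoverable (Hubbert's lower estimate), a 1970 peak, arbitrary k.
URR, K, T_PEAK = 200e9, 0.06, 1970

peak_rate = hubbert_rate(T_PEAK, URR, K, T_PEAK)  # urr * k / 4, in barrels per year
```

With these assumed values, the implied peak is 3 billion barrels per year (roughly 8 MMbpd), in the vicinity of the actual 1970 lower-48 peak; the point developed below is that actual production later departed sharply from the declining branch of any such symmetric curve.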
This forecast was remarkably accurate for the lower 48 States through about 2010; production peaked in 1970 and appeared to enter a steady decline in the following years (figure 5-4). Even considering the massive discovery in Alaska, and the effect that Alaskan oil had on aggregate U.S. production, Hubbert’s simple model predicted a peak that was off by only a couple of years and seemed to encapsulate the inherent limit to oil production. However, Hubbert’s model ignored the role of prices in promoting exploration and production, and of technological innovation in expanding proven reserves. Higher prices and technological improvements allowed access to offshore and unconventional reserves, leading to an unpredicted peak increase in domestic production.

[4] Highlighting the economic significance of physical limits follows a long tradition dating back to at least 1798, when Thomas Malthus published “An Essay on the Principle of Population,” which expressed the fear that growth in population would outpace growth in food production. As the Industrial Revolution made coal an essential economic input, William Jevons (1865) translated this same argument to a nonrenewable resource, which was the first “peak fossil fuel” argument.

Box 5-1. OPEC’s Oil Production Cuts

OPEC has 13 member states located in the Middle East, Africa, and South America. As of October 2018, OPEC producers enjoyed a 39 percent share of the global petroleum market, down from a post-2000 peak of 44 percent in September 2008 (OPEC 2018b; EIA 2018i). However, they collectively control 74 percent of world oil reserves, including most of the lowest-extraction-cost reserves (EIA 2018b). Since its formation in 1960, OPEC has alternated between strategies of maximizing market share and maintaining high prices. The oil price collapse in 2014 is attributed to OPEC protecting its market share at the expense of prices. Since then, OPEC has changed strategies and cut production to enjoy the resulting higher prices.
(In 2018, OPEC had 14 oil-producing members, along with the Republic of Congo. The Qatari state petroleum company announced on December 2, 2018, that it was leaving OPEC, effective January 1, 2019. Qatar is a substantial natural gas producer, but it accounted for only 1.9 percent of OPEC’s oil production—less than 1 percent of global production.)

Through late 2015 and much of 2016, OPEC members discussed a targeted cut to help support prices. These discussions also expanded to include key non-OPEC producers, including Russia. The OPEC meeting on November 30, 2016, announced a target reduction of 1.2 MMbpd for the 12 cooperating members of OPEC—Libya and Nigeria are exempt—effective January 1, 2017, bringing their allocated production to 29.8 MMbpd and the ceiling for the OPEC-14 to 32.5 MMbpd. Subsequent OPEC meetings in May and November 2017 extended these cuts in allocations through the whole of 2018, in addition to allowing for the accession of Equatorial Guinea into OPEC with an allocation of 178,000 barrels per day (OPEC 2018a). Cooperation by several other countries—notably Russia, Mexico, and Kazakhstan—has helped leverage the cuts by including more production share in the group agreeing to cuts.

[Figure 5-i. OPEC Crude Oil Production versus Production Targets, 2016–18. Sources: Organization of the Petroleum Exporting Countries (OPEC); U.S. Energy Information Administration; CEA calculations. Note: The OPEC-12 are the 14 OPEC member states, except for Libya and Nigeria, which are exempt from production cuts. The “other cooperating countries” are Russia, Mexico, and Kazakhstan, which, along with several other smaller producers, collaborate with OPEC’s production schedule.]
Adding the production of these cooperating countries to the OPEC-12, the global market share of the countries cooperating with OPEC’s cuts increased to 68 percent of crude production in 2017. Compared with average production in 2016, the OPEC-12 and its collaborators have cut production by 1.33 MMbpd, or about 1.8 percent of global production. According to data from the U.S. Energy Information Administration (EIA) on secondary reporting of OPEC’s oil production, the target of 29.982 MMbpd for the OPEC-12 was met for only 1 of 12 months in 2017, with the largest monthly overage being 0.59 MMbpd (1.9 percent over target production). During the first six months of 2018, the OPEC-12 came in below the target by an average of 0.2 percent each month, or 57,000 barrels per day (EIA 2018i). See figure 5-i.

This experience contrasts sharply with forecasts influenced by peak oil theory, especially those that were ascendant 15 years ago, most of which expected peak oil extraction by 2010 (Laherrère 1999; Campbell 2003; Skrebowski 2004; Bakhtiari 2004). The prediction of decreasing U.S. oil production has been proved wrong by increased domestic production in 9 of the past 10 years (BP 2018), shattering Hubbert’s (1956) original prediction shown in figure 5-4. Though there is a finite quantity of oil that can be discovered and extracted, the physical limits do not circumscribe economic potential, as some analysts have hypothesized; such analyses ignore the incentive for exploration and innovation created by high prices, and the impact that successful innovations have had on expanding the economic reserve base and reducing production costs. Although the physical endowment of oil is smaller than it once was, some context is useful. In 1956, as Hubbert was making his original forecast, U.S. proven reserves of crude oil were 30 billion barrels. At the end of 2017, proven reserves were 39.2 billion barrels. From 1957 to 2017, total U.S.
crude oil production was 167.0 billion barrels. Considering as well the 55.2 billion barrels extracted before 1957, Hubbert’s estimate of the size of reserves was not unreasonable. What it did not anticipate, however, was that different kinds of resources would today be considered reserves (known resources that can be profitably extracted with current technology at current prices). This observation is not new to the economics literature (Boyce 2013), but the recent empirical record suggests that peak oil models will need to consider prices and technology to be reliable in the future.

[Figure 5-4. U.S. Lower-48 Production versus Hubbert’s 1956 Peak Oil Prediction, 1920–2018. Sources: Energy Information Administration; Hubbert (1956); CEA calculations. Note: Data represent a 3-month moving average. The Hubbert (1956) estimate was constructed using a stepwise logit function.]

Geologists—like Hubbert—woke up every morning and looked for oil, but they expected the pace of discoveries to eventually slow down, after which production would have to decline.[5] The policy environment had no bearing; nor did prices or technology. The point of Hubbert’s paper was to emphasize the need for future energy transitions; he expected nuclear power to be more widely used. Nuclear power has its own inherent trade-offs, some of which are discussed below. Recent experience in the United States underscores the imprudence of relying upon geological forecasts alone. The incentives of prices and the role of technological innovation—which is funded by the price incentive in a market economy like the United States—are critical for understanding the production of even a nonrenewable natural resource like petroleum.

The Natural Gas Revolution Rolls On

Before technology helped U.S.
oil production reach record highs, natural gas was the focus, and the “natural gas revolution” changed the national energy landscape (Deutch 2011). Hydraulic fracturing receives much of the credit. This technique was originally developed in 1948 to improve flow from oil wells, and it evolved in the 1990s toward injecting greater volumes of water and sand to fracture rocks saturated with natural gas. This breakthrough depended on a fundamentally sound understanding of the relevant geophysics, the basis of which was pioneered in 1956 by none other than the same Hubbert of peak oil fame.

[5] Hubbert’s earliest paper, describing single-peaked growth with a decline to zero, came in a 1934 publication for the Technocracy, a social and political movement of the 1930s that advocated replacing the price system with management by technocrats (Inman 2016).

Production of natural gas in the United States has continued to grow to record levels, reducing reliance on imports and expanding exports globally. In 2017, for the 9th time in the past 11 years, the United States withdrew a record amount of natural gas. Gross natural gas withdrawals in the United States have increased by more than 50 percent over the past 10 years, rising to 3,267 billion cubic feet (Bcf) in October 2018. This growth has relied on technological advances, including hydraulic fracturing and directional drilling, that have made the development of shale gas resources economic. The Appalachian, Permian, and Haynesville basins have led U.S. production growth.

The growth of U.S. natural gas production, led by shale and other unconventional resources, has been driven by the rise in nonassociated gas production. Nonassociated gas is produced from reservoirs where the gas is not found with substantial amounts of crude oil, whereas associated gas is produced jointly with crude oil. Nonassociated gas production in the United States grew by 29 percent between 2007 and 2017.
The rise in nonassociated gas production has been centered in the Appalachian Basin, which stretches across New York, Pennsylvania, Ohio, and West Virginia to include the Marcellus and Utica shale plays, where total gas production grew from 1.3 Bcf per day (Bcfd) in January 2007 to 31.5 Bcfd in January 2019 (EIA 2018f). Unlike the other states, New York has effectively banned development of its shale resources (see box 5-2).

Associated gas production is rising again with shale oil production. This has created an infrastructure challenge, given that two types of infrastructure are needed—for oil and for natural gas. Oil has more transportation substitutes than natural gas, which depends on specific investments in pipeline capacity to move efficiently. In comparison, oil can move by rail or even truck where necessary, until pipeline capacity catches up with production (Covert and Kellogg 2017). As a result, the flaring of associated natural gas has increased. Firms that are unwilling to wait to extract oil have a choice between completing natural gas pipeline projects and seeking accommodation from regulators to allow more flaring. In the short run, the latter might be less expensive.

Total natural gas proven reserves (wet after lease separation) increased by over 87 percent between 2007 and 2017. In 2017, total proven natural gas reserves (wet after lease separation) stood at 464,292 Bcf, which corresponds to over 17 times total U.S. consumption in the same year. The reserves are there; the technology is there. Two factors limit production. The first is finding uses for more gas at current prices; the second is building out infrastructure to move gas from where it is produced to where it is consumed.

Box 5-2. The Important Economic Effects of State Regulation on Energy Production

Differences in States’ regulation of hydraulic fracturing (“fracking”) have important economic effects.
Nowhere is the contrast as stark as between Pennsylvania and New York State, which have taken divergent regulatory tacks—Pennsylvania has been accommodating and thus has seen widespread development of its underlying shale gas resources, but more restrictive New York has elected to effectively prevent development. Pennsylvania’s natural gas production expanded over 30 times between 2006 and 2017, and went from making up under 1 percent to constituting 20 percent of U.S. dry gas production. In contrast, New York placed a de facto moratorium on hydraulic fracturing in 2008 that ossified into an outright ban in 2014. In light of this regulatory ban, New York received far less investment, and its natural gas production fell by 80 percent from 2006 to 2017. Counties along the New York–Pennsylvania border that are otherwise similar provide an ideal laboratory for understanding some effects of regulation. Boslett, Guilfoos, and Lang (2015) examined the effects of the New York moratorium, and found that among those counties most likely to experience shale gas development, residential property values declined by 23.1 percent due to the shale gas moratorium. This result is also true for rural land values; Weber and Hitaj (2015) found a 44.2 percent greater appreciation in Pennsylvania’s border counties relative to New York’s border counties. Cosgrove and others (2015) used a differences-in-differences approach to examine the effects of increased shale gas production between 2001 and 2013, finding that after 2008 Pennsylvania counties experienced significant increases in both industry employment and wages compared with New York counties. Komarek (2016) used New York’s border counties to compare with counties in the Marcellus region that were developed, and found that developed counties had a 2.8 percent increase in employment, a 6.6 percent increase in earnings, and a 3.3 percent increase in earnings per worker. 
Continued export growth is one method by which to capitalize on vast proven natural gas reserves; but as figure 5-5 shows, current exports are quite small relative to annual production. Infrastructure investments require an expectation of production and sales over a sufficiently long time horizon to amortize the fixed costs.

U.S. consumption of natural gas has increased alongside production, thanks to low and stable prices. In 7 of the last 11 years, the United States has recorded record natural gas consumption. This increase is led by electricity generation, on pace for a 10 percent increase over the previous peak in 2016. Although natural gas consumption has increased substantially for electric power consumers and natural gas vehicles, the main sources of natural gas demand are electric power generation, industrial uses, and residential uses. The shift toward using natural gas for electricity generation is a global trend—2016 and 2017 were the first two years on record in which natural gas–fired electricity generation made up a greater share than coal-fired generation in countries belonging to the Organization for Economic Cooperation and Development (OECD) (BP 2018).

[Figure 5-5. U.S. Natural Gas Trade and Withdrawals, 1940–2018. Source: Energy Information Administration. Note: Trade data preceding 1973 are derived from EIA’s tracking of international deliveries and receipts. Annual data for 2018 were reported as monthly data through October at a seasonally adjusted annual rate.]
Electricity generation is an important component of creating enough demand to capitalize on American abundance and supporting production.[6] The dramatic rise in unconventional natural gas production since 2007 has enabled the United States to become a net exporter, starting in 2017, for the first time since 1957. In total, U.S. exports of natural gas increased by 341.5 percent between January 2007 and October 2018 (figure 5-6). As a net exporter of natural gas, the United States occupies a strategic position to provide this resource, both to its Western Hemisphere neighbors and to its allies and trading partners around the world. Natural gas exports by pipeline to its neighbors make up the largest share of U.S. exports, with pipeline exports to Canada and Mexico accounting for 21.4 and 49.2 percent, respectively, of total U.S. natural gas exports in October 2018. Total U.S. natural gas imports have fallen by 44.9 percent since January 2007; Canadian pipeline imports make up 97.2 percent of total imports.

[6] An alternative interpretation is that the lower energy density of natural gas frees up other, higher-density fuels for export. Substituting an inferior product for local consumption to capture a premium in export markets is recognized in economics as the “shipping the good apples out” principle (Alchian and Allen 1964; Hummels and Skiba 2004).

[Figure 5-6. U.S. Monthly Trade in Natural Gas, 2001–18. Sources: Energy Information Administration; CEA calculations. Note: LNG = liquefied natural gas.]

Coal Production Is Recovering after the 2015–16 Slump

U.S. coal production has recovered after facing difficult market conditions between 2012 and 2016.
After averaging roughly 1.1 billion short tons of production annually from 2000 to 2009, coal production and related employment began to slip in mid-2011. By 2016, production had dropped to 728 million short tons, 65.2 percent of the average level between 2001 and 2010 (EIA 2018c). However, production rebounded in 2017, rising 6.3 percent from the preceding year to 774 million short tons. This trend continued through the first half of 2018, as production remained 10.3 percent higher than its secular low in the first half of 2016. Increased production required a small boost in coal mining employment, which has grown by 2,900 since the President’s election in 2016. This increase has been helped by the production increase in the relatively labor-intensive eastern regions, where different grades and characteristics of coal make mines competitive despite requiring much more labor per unit of output.

Higher exports are a welcome fillip to an industry that has been battered by declining demand for domestic steam coal used to fire electric generation, a decline that stems from low natural gas prices (figure 5-7). The portfolio of electric generation technologies has expanded with greater penetration of renewables like solar and wind, and inexpensive natural gas has expanded its market share at the expense of coal. Coal producers have also been affected by increased costs from new regulatory requirements; for example, coal plant retirements in 2015 and 2016 were affected by the Mercury and Air Toxics Standards (known as MATS), which made a shutdown an attractive alternative to compliance for many plants, even those receiving a one-year waiver. Although the past two years show that these trends have slowed and coal production has stabilized at a new, lower level, a reversal of the trends that would return coal to the dominant position it enjoyed for decades appears improbable.
The private market is showing signs of trouble as insurers and underwriters shy away from coal projects. The U.S. coal industry has evolved over the decades toward western and surface production. Underground and surface operations west of the Mississippi River have shifted from under 10 percent of total production 50 years ago to well over half of all production in 2018 (EIA 2012, 2018c). Western coal production has focused on steam coal. Part of the change was influenced by Federal environmental policy, which led companies to switch inputs to low-sulfur western coal rather than reducing output or changing technology (Carlson et al. 2000). This substitution was costly, however, and railroads managed to capture some of the surplus (Busse and Keohane 2007; Gerking and Hamilton 2008). Productivity gains help account for the relatively modest employment gains; high levels of productivity in the West North Central Region spanning Kansas, Missouri, and North Dakota have led to moderate gains in employment over time, while the Mountain Region’s nationwide eminence in productivity has allowed it to sustain employment levels roughly equal to those of the early 2000s (EIA 2018c; MSHA 2018).

The contrast between the outlook for natural gas and coal is captured in figure 5-8. The EIA’s (2018b) Annual Energy Outlook shows a substantial revision between the 2010 and 2018 forecasts for natural gas, consistent with an anticipated shift of the supply curve out and down, implying more and cheaper natural gas. This is shown in the left panel of figure 5-8 with a shift in the supply curve from red to blue. Over time, the demand for natural gas shifts out to accommodate growth in energy demand.

[Figure 5-8. Changes in Natural Gas Forecasts, 2010 versus 2018. Sources: Energy Information Administration; CEA calculations. Note: Shifts reflect differences in forecasts from the 2010 and 2018 issues of the Annual Energy Outlook (EIA 2010, 2018).]
Hausman and Kellogg (2015) derived the welfare implications of contemporaneous supply and demand shocks. In contrast, a sector without technological change, like coal, does not get a supply shift, and it even faces the prospect of declining demand because cheaper natural gas is an attractive substitute.

U.S. Fuels in the Global Marketplace

Trade is crucial for energy markets. Fuel commodities constituted more than a 9 percent share of global trade in 2017 on a value basis (United Nations 2018). The supply shift that the United States’ oil and natural gas producers have experienced thanks to technology, along with its world-leading coal reserves, puts the country in an excellent position to trade energy products. The gains from trade are especially large in primary commodities like fossil fuels (Fally and Sayre 2018), for which the United States has a comparative advantage (CEA 2018).

U.S. Oil Exports Are At an Unprecedented High

The unexpected increase in domestic oil and natural gas production has bought the United States a new degree of leeway in energy markets, especially for transportation fuels that are particularly reliant on petroleum. Domestic production offsets the demand for imported petroleum, which has contributed to rebalancing in the global market. U.S. net imports of crude oil and petroleum products averaged 2.7 MMbpd in 2018, down from the average of 12.5 MMbpd in 2005. World production of crude oil and other petroleum liquids continued to grow through 2018 and is expected to average over 100 MMbpd (EIA 2018i). The change in the U.S. net import position for crude oil in 2018 was about 0.74 MMbpd, equal to about half of OPEC’s estimated spare production capacity in 2018 (EIA 2018i; BP 2018). This shift also significantly affects the U.S. international position in the market for crude oil and petroleum products. The U.S.
petroleum trade balance was -$199 billion after seasonal adjustment in 2017, which is less than half of the -$495 billion (in 2017 dollars) recorded 10 years earlier, and an improvement of over $300 billion from the all-time low in 2005 (U.S. Census Bureau 2018). Although the overall trade balance deteriorated over the same period, increased production has undoubtedly served as a boon to the American position internationally, as well as a buffer against American consumers’ sensitivity to oil prices.

Petroleum exports. The United States has witnessed a renaissance in exports of crude oil since December 2015, thanks to the lifting of a 40-year ban on crude oil exports. Through September 2018, U.S. exports of crude oil were more than triple the annual levels of 2016, the first full year of exports after the lifting of the ban. In May 2018, exports of crude topped 2.0 MMbpd for the first time in U.S. history (figure 5-9). Melek and Ojeda (2017) found that the ban was binding during the period 2013–15, but that when general equilibrium effects are taken into account, the macroeconomic effects of removal are negligible because of adjustments in the types of crude oil refined in the United States. This suggests that crude oil exports alone do not increase U.S. GDP, because crude oil and refined product prices adjust. One form of trade does boost GDP, however: the United States imports a large volume of oil and capitalizes on its large and complex refining sector to produce refined products that are exported. Crude oil and refined petroleum product exports rose by 17.5 percent in the first 10 months of 2018 from the average level in 2017, driven by increased exports to Latin American nations (figure 5-10). The silver lining is the nondurable manufacturing jobs that are supported by imported oil, and there is room for further gains in this direction: despite increasing oil-refining capacity, U.S. consumption of petroleum exceeds domestic refining capacity.
Macroeconomic effects. Abundant crude oil has other important spillovers, notably to the macroeconomy. Trends in crude oil exports and shrinking net imports have implications for the economy's responsiveness to oil price shocks. The petroleum share of the U.S. trade balance is at historic lows; the petroleum share of the deficit was 15.8 percent in 2018, down 44.3 percentage points from secular highs of over 60 percent in 2009. The petroleum trade deficit has narrowed steadily since its all-time high of $44.3 billion in November 2005 (figure 5-11). Because petroleum prices are determined in a global market and are volatile, reducing net imports of a product with inelastic demand allows domestic producers to capture windfall gains from higher prices that would otherwise be transferred to foreign producers. Oil price spikes have historically been correlated with negative growth effects for oil-importing economies (Hamilton 1996). Exogenous oil price shocks have significant contractionary effects on GDP growth for the United States, as well as for most other developed economies (Jiménez-Rodríguez and Sánchez 2004). A large body of literature finds that oil price volatility imposes substantial costs on the economy, affecting consumers directly and creating uncertainty that disrupts business investment (Jaffe and Soligo 2002; Parry and Darmstadter 2003; Kilian 2008; Baumeister and Gertsman 2013; Brown and Huntington 2015). Isolating the effect of oil price shocks on real GDP growth has traditionally been a difficult empirical task. Efforts to tease out the effects of oil price changes are impaired by the endogenous effects of monetary tightening and other countercyclical policies aimed at correcting these trends (Hoover and Perez 1994; Barsky and Kilian 2002).
As the United States continues to expand its position as an exporter in global oil markets, it better insulates itself from the adverse welfare and GDP consequences of high oil prices and price spikes. Although the United States remains a net importer of petroleum products, its smaller net import share leaves it with less exposure to oil price shocks. For example, between 2008 and 2009 the average landed cost of imported crude oil decreased from $93.33 to $60.23 per barrel, contributing to a $136 billion lower oil import bill. Because of lower imports, a similar price decline during the first three quarters of 2018 would have saved only $72 billion. In a stunning reversal, if the United States becomes an annual net exporter, it may view supply restrictions elsewhere in the world as an opportunity rather than a threat. The speed with which this transition has taken place is unprecedented.

A second effect of the changing U.S. net petroleum position is that it may increase protection from the business cycle downturns that are exacerbated by high oil prices. Kilian and Vigfusson (2017) observe that in the period since 1974, U.S. economic recessions have universally been preceded by increases in the price of oil. As the authors note, however, increases in real oil prices do not always predict an economic contraction in a subsequent period. One metric for defining sustained increases in the price of oil is the cumulative net oil price increase over three years (Hamilton 2003). Figure 5-12 displays the apparent correlation between persistent upward pressure on the price of oil and recessions between 1974 and 2018.

After 60 Years, the U.S. Is Again a Net Exporter of Natural Gas

Domestic production, proven reserves, and export capacity have all increased for U.S. natural gas. The supply shock for gas has created a question of where gas should flow to balance the market. Domestic consumption has increased, led by electricity generation.
Petrochemical investments are up, contributing to a strong domestic chemical manufacturing base with ethane crackers along the Gulf Coast and in Pennsylvania. That leaves two main outlets: domestic transportation, and foreign markets. With greater export capacity, natural gas will play an important role as a strategic resource provided by the United States to countries around the world, in addition to improving the trade balance in goods. The implications of exporting U.S.-produced natural gas include higher prices in the United States and exposure to global natural gas market dynamics. Policy can affect either side of the trade-off between exports and domestic supply. The Administration has promoted increased export capacity, streamlining the process for approval of export facilities and enabling a more active role in global natural gas markets. At this point, private final investment decisions are needed for fully permitted additional export terminals. As shown below, increasing export capacity offers opportunity, but the competitive global liquefied natural gas (LNG) market must be considered before making large fixed investments.

Figure 5-12. Real U.S. Refiners' Acquisition Costs and Recessions, 1974–2018. Sources: Kilian and Vigfusson (2017); Energy Information Administration; Bureau of Labor Statistics; National Bureau of Economic Research; CEA calculations. Note: The real oil price is defined as the monthly average refiner acquisition cost for crude oil, deflated using the CPI-U. The 3-year net oil price increase measure marks the end of a period in which the real price of oil is greater than its maximum over the preceding 36 months. Shading denotes a recession.
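The three-year net oil price increase plotted in figure 5-12 (Hamilton 2003) can be sketched directly: the measure is positive only when the current real price exceeds its maximum over the preceding 36 months. The price series below is a made-up illustration, not actual refiner acquisition costs:

```python
# Hamilton's (2003) net oil price increase: the amount (if any) by which the
# current real price exceeds its maximum over the preceding `window` months.
def net_price_increase(prices, window=36):
    out = []
    for t, price in enumerate(prices):
        prior = prices[max(0, t - window):t]
        reference = max(prior) if prior else price
        out.append(max(0.0, price - reference))
    return out

# Illustrative series: flat at $50, then a climb past the prior 3-year high.
series = [50.0] * 36 + [55.0, 60.0, 48.0]
npi = net_price_increase(series)
# npi is zero while prices are flat, positive for the two climbing months,
# and zero again once the price falls back below its 36-month maximum.
```

Persistent positive values of this measure are what figure 5-12 associates with subsequent recessions.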
Natural gas is less fungible than petroleum, limiting trade to transportation by pipeline or, at much higher cost, by chilling the gas until it liquefies (at –260°F), which reduces its volume by 99.8 percent and allows long-distance bulk transport by specialized tankers. In 2017, the average price of LNG was more than twice the benchmark U.S. price at Henry Hub in Louisiana. However, the costs of cooling for transportation, of shipping and regasifying at the destination, and of covering the fixed costs of specialized liquefaction and regasification trains accumulate and reduce the economic value of expanded LNG shipments (CEA 2006).

Domestic production of natural gas has increased almost 40 percent over the last decade, and the EIA estimates that production increased by a further 10.6 percent in 2018 (EIA 2018i). The estimated increase in production from 2017 to 2018 was the largest year-over-year growth on record. The growth of LNG exports helped the United States become a net exporter of natural gas in 2017, for the first time since 1957. The 2017 surplus was also driven by a pipeline capacity expansion of 3.1 Bcfd (39.9 percent) into Mexico (EIA 2018j). Pipeline exports are almost always cheaper, thanks to inherently lower transportation costs. Increasing export volumes by either transportation mode helps support higher prices for U.S. producers. The majority of U.S. natural gas exports go by pipeline to Mexico and Canada. Delivering natural gas beyond U.S. land neighbors and to U.S. domestic markets that are inaccessible by pipeline, however, requires exporting by sea after the natural gas has been liquefied. LNG has grown to 28.6 percent of total U.S. natural gas exports by volume. The capacities for both LNG exports and pipeline exports are projected to grow over the coming two years. Currently, just three facilities in the United States have a combined LNG export capacity of 3.8 Bcfd.
However, four additional LNG export facilities currently under construction will add 8.1 Bcfd of capacity, and a further four facilities that are approved but not yet under construction could add another 6.8 Bcfd of LNG export capacity (EIA 2018h). Although less flexible than the expansion of LNG capacity, the construction of more gas pipelines into Mexico could provide additional competitively priced avenues for increasing U.S. gas exports. Capacity for planned pipelines from the United States to Mexico is projected to grow by nearly 5.6 Bcfd from 2018 through 2020. Because the centers of Mexican demand are not located near the border, complementary infrastructure investment on the Mexican side of the border is needed. In 2018, Mexico added 2.7 Bcfd of capacity, with an additional 6.9 Bcfd under construction to move imports from south and west Texas farther south to population centers (Wyeno 2018).

Liquefied natural gas. Not long ago, the United States was considered a critical import market for LNG, and investments in domestic regasification terminals to handle these imports were seen as critical for the country's energy future. Forecasts less than 10 years old projected that the United States would run a net deficit in LNG trade through the full extent of their 20-plus-year horizons. These predictions were so bleak on the export front that in the 2010 edition of the Annual Energy Outlook (EIA 2010), the United States was forecast to import 1.38 trillion cubic feet of liquefied natural gas in 2017 and export none. Ex post, the United States instead ran a surplus of over 600 billion cubic feet of natural gas in 2017, with exports almost 10 times the magnitude of imports. Liquefaction is the most economical way to export natural gas to markets that are inaccessible by pipeline, and thus the expansion of LNG facilities has opened previously inaccessible foreign markets for deliveries of U.S. natural gas.
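Tallying the capacity stages cited above gives a sense of the potential scale; the sum below is a back-of-the-envelope illustration using the chapter's Bcfd figures, and the stage labels are ours:

```python
# Prospective U.S. LNG export capacity by project stage, in billion cubic
# feet per day (Bcfd), using the figures cited in the text.
capacity_bcfd = {
    "operating": 3.8,              # three existing facilities
    "under_construction": 8.1,     # four facilities being built
    "permitted_not_started": 6.8,  # four approved, awaiting final investment decisions
}
potential_total = sum(capacity_bcfd.values())  # 18.7 Bcfd if all are completed
```

Even this full build-out would be of the same order as the 16 Bcfd that Japan and China together imported in 2017, which suggests why export prospects depend on the competitive global LNG market.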
LNG can be sent in bulk shipments using specialized tankers, or in smaller containerized units. Despite these developments, the United States still imports LNG, especially in the Northeast and in noncontiguous states and territories, where pipeline constraints are the relevant impediment to domestic shipment. Although U.S. natural gas can be exported, it cannot currently be moved between U.S. points because there are no cabotage-certified LNG tankers; new tankers would need to be built to allow this trade, and none have been built in the United States since 1980.

U.S. LNG export capacity is largely clustered on the Gulf Coast. Cheniere Energy opened the first export facility and now has three liquefaction trains in operation at Sabine Pass, Louisiana; several more trains under construction are expected to come online in the next two years. The second U.S. LNG export terminal to open was Cove Point in Maryland, which came fully online in March 2018 and is located to take advantage of natural gas from the Appalachian Basin. After Cove Point opened, U.S. LNG export capacity stood at 3.82 Bcfd, or about 4.8 percent of contemporary U.S. production (FERC 2018). A third export facility, in Corpus Christi, loaded its first precommercial cargoes in November 2018 and expects to begin commercial shipments in 2019. Three additional LNG export terminals are currently under construction: one more in Texas, one in Georgia, and one more in Louisiana. Upon completion of all three terminals, total U.S. LNG export capacity is expected to reach almost 10 Bcfd. Beyond these projects, 7.6 Bcfd in other projects are fully permitted but not under construction for lack of a final investment decision. Figure 5-13 shows how additional export capacity could increase the value of LNG exports, given current price forecasts. U.S.
LNG exports can be expected to be particularly competitive in markets with high natural gas demand and limited access to local or pipeline-sourced supply. China and Japan are the world's two largest importers of LNG, and are likely to be attractive future markets in which to increase the U.S. share of LNG deliveries. Driven by antipollution government policies, Japan and China together imported an average of 16 Bcfd in 2017, nearly four times current U.S. export capacity. Many countries can supply LNG, and China has chosen to impose tariffs on U.S. LNG imports in retaliation for U.S. tariffs on imports from China. Growth in the global LNG market overwhelms this effect, because U.S. cargoes can be delivered around the world and do not rely on particular partners.

European markets appear promising but may prove more difficult for U.S. exports to penetrate. European countries import most of their natural gas supply by pipeline from the Middle East and Russia, limiting demand for the more expensive U.S. LNG; pipeline transportation remains less costly than LNG for international shipments departing from the United States (figure 5-14). LNG accounted for only 12.4 percent of European gas demand in 2017, while pipeline imports supplied 79.6 percent of Europe's consumption (BP 2018). Although the EU had 21 Bcfd of regasification capacity in 2017, LNG imports totaled only 5.6 Bcfd on average (European Commission 2018). This spare capacity provides insurance against interruptions in the supply of Russian gas, but the higher cost of delivered LNG makes it less attractive for long-term commercial contracts.

Additional barriers to the expansion of U.S. LNG exports may stem from the industry's focus on long-term contracts, which traditionally have had destination clauses that limit the flexibility of trade to respond to price signals. Long-term contracts reassure financiers providing capital for expensive investments in capacity.
In such LNG contract structures, uncertainty over future prices and political or economic developments in the receiving country may weigh on U.S. LNG exports and investment abroad (Zhuravleva 2009). These uncertainty factors may be particularly relevant in Eastern European countries, which generally have weaker and more volatile economic, regulatory, and political conditions. In addition to long-term contracts, spot trading is needed for the market to realize arbitrage gains; facilities that are locked into long-term contracts cannot capitalize on "cargoes of opportunity," such as particularly high prices in distant markets. One contractual solution to the paradox of needing both long-term and spot trades is to allow brokers to bridge the gap by paying projects for capacity and marketing LNG where it is most profitable.

Coal Exports

One source of greater coal demand has been overseas buyers; U.S. coal exports rose by 60.9 percent in 2017, aiding both the steam and metallurgical coal sectors. Exports to Europe increased by 44.5 percent, while exports to Asia increased by 109.0 percent. Asian markets, especially India and South Korea, were leading purchasers of steam coal, while European countries, led by Ukraine and the Netherlands, purchased a plurality of U.S. metallurgical coal exports (EIA 2018c). The coal industry has seen a minor reversal of its downward trend starting in the fourth quarter of 2016. The United States exported nearly 87.2 million short tons of coal in the first three quarters of 2018, up 18.4 million short tons (27 percent) from 2017 (figure 5-15). This boom in exports was primarily driven by exports of steam coal, which grew by 44 percent in the first three quarters of 2018 over 2017 levels. U.S. coal production continued to exceed domestic consumption through 2018, allowing renewed opportunities to further expand demand through exports; U.S.
production accounted for slightly less than 10 percent of global consumption in 2017 (BP 2018).

The fuel costs of using coal to generate electricity remain among the lowest of any technology. However, because of the technology's higher fixed costs and inherent inflexibilities, coal-fired generation has lost market share to natural gas generation (Fell and Kaffine 2018). Coal nonetheless remains the main fuel by which many countries provide electricity to their citizens (Wolak 2017). Coal-fired generation made up 46.3 percent of electricity in non-OECD countries in 2017, and in OECD countries was just surpassed by natural gas, making coal the second-most-widely-used fuel for electric generation there. The increased demand for electricity in developing regions helped bolster coal prices in 2017, leading to higher U.S. exports. Price increases were especially pronounced in Europe and Japan, where benchmark coal prices rose by 40.6 and 34.0 percent, respectively (BP 2018). Figure 5-15 documents the dynamic response of U.S. coal exports to export prices, and how rising prices in 2017 contributed to export growth.

Wolak (2017) examines the potential impact on the world coal market of increasing coal export capacity on the West Coast. Because of transportation cost differentials, the net effect would be to increase U.S. exports to the Pacific Basin and to reduce Chinese domestic production. Increased Chinese access to cleaner-burning U.S. coal would drive up U.S. domestic coal prices and accelerate the switch to natural gas–fired generation in the United States. Projects expanding the Pacific Northwest's export capacity have been proposed in Washington State, although local pressure over environmental concerns has slowed progress.

Energy policy has important implications for trade policy. Greater self-reliance reduces import dependence, while growing exports strengthen links to other countries.
Increased leverage might seem like an unambiguous asset, but greater trade linkages also create potential vulnerabilities, as trading partners recognize that U.S. interests may be sensitive to changes in trade flows.

Strategic Value

Energy trade can offer a strategic advantage to the United States. LNG exports to Europe provide an example of the strategic value of energy exports. In 2014, Lithuania received 97 percent of its natural gas from Russia. But Lithuania began diversifying its energy supply, building an LNG import terminal in 2014; afterward, Russia's share dropped to 53 percent by 2017. Although the economic value of LNG exports to Lithuania is small, the strategic value of providing allies with alternative energy supplies is relatively large, if difficult to quantify. When the United States is the source of LNG shipments, this policy provides a double dividend of strategic and trade benefits. The EU natural gas market also illustrates the limitations of U.S. energy diplomacy: the EU has only reduced Russia's share from 31 percent of imports in 2014 to 29 percent in 2018 (through August), and the United States provided only 0.2 percent of the EU's LNG imports in 2018.

Not all fuel transactions are dominated by strategic concerns. Venezuela exported 48 percent of its crude oil to the United States in 1999, when Hugo Chávez became president; the share dropped to 32 percent by 2017. Oil represented 90 percent of Venezuela's exports in 2017, so the Venezuelan government has a strong incentive not to disrupt this trade. The U.S. refining sector has invested in the capacity to process Venezuelan crude oil, which it can buy competitively on the market. Although strategic considerations alone might suggest that the United States should substitute away from Venezuelan supplies, the advantageous economics of supply mean that the oil still flows.
Because of its prominence, oil trade is a geopolitical pressure point. In 2018, the United States sanctioned oil exports from Iran, returning to a regime that was in place before the 2015 Joint Comprehensive Plan of Action. The stated goal of U.S. sanctions is to deprive the Iranian regime of oil revenue. In anticipation of implementation on November 5, 2018, global oil prices rose through October 2018. Iran exported 2.18 MMbpd in 2017. Before the November deadline, Iranian oil exports for October 2018 were already down 30 percent from their 2017 level, to 1.78 MMbpd (figure 5-16). In an effort to minimize harm to U.S. allies importing oil from Iran, the United States granted six-month waivers exempting certain volumes from the sanctions. Eight such waivers were granted: to China, India, Japan, Turkey, Italy, Greece, Taiwan, and South Korea.

Figure 5-16. Iranian Crude Oil Exports, 2016–18. Sources: Thomson Reuters; CEA calculations.

Spare production capacity, especially among OPEC members, has been vital in stabilizing global oil markets in response to unexpected shocks stemming from factors ranging from natural disasters to geopolitical conflicts (Pierru, Smith, and Zamrik 2018). Spare production capacity can be brought online within 30 days and sustained for at least 90 days. Spare capacity among OPEC producers is projected by the EIA to be slightly over 1 MMbpd through 2019 (figure 5-17). Spare production capacity growth has been limited in recent years as Saudi Arabia has reached capacity. Removing Iranian crude oil from the global market places additional pressure on suppliers and in effect transfers spare capacity to Iran. Future supply interruptions may require cooperation in using spare capacity to avoid price spikes.
Energy exports also create a vulnerability, as other countries recognize that they can retaliate against U.S. exports. When China wanted to retaliate against the U.S. Section 301 tariffs, it imposed tariffs on LNG. In 2018, the United States exported 103 Bcf of LNG to China, or about 15 percent of all U.S. LNG exports. Following the imposition of retaliatory tariffs, U.S. LNG exports to China dropped to zero. This reflects the near-perfect substitutability of commodity products like LNG and even crude oil. U.S. exports will not be shut out of the global marketplace, but their destination can be affected by foreign trade policy, much as U.S. agriculture has been targeted in the past.

Energy Policy

Despite the promising indications from booming fossil fuel production, and the success in improving the U.S. fossil fuels trade balance, a number of energy policy issues remain salient. In a market economy like the United States, with a competitive energy sector, opportunities to increase access to production are limited, except perhaps on Federal lands and minerals. This section discusses a number of issues facing the electricity generation sector, which are also important for fuels production because of the large share of fuels destined for electric generation units—for example, 91 percent of U.S. coal is ultimately consumed by electric power generation. The electricity sector raises many important issues, including renewable and nuclear electric generation. Another issue is the general relationship of regulation to the energy sector, which has been a particular focus of deregulatory actions. Global environmental issues are important for the United States and other countries, so the discussion concludes with an assessment of U.S. energy intensity and carbon dioxide (CO2) emissions. International environmental policy potentially affects many linked markets, as impending maritime fuel regulations illustrate.
Increasing Access to Production

Unlike the government of any other country in the world, the U.S. Federal government directly controls only a minority of the country's produced resources, because mineral ownership is largely in private hands. Although this unusual allocation has received credit for helping spur the technological revolution in oil and gas drilling (Hefner 2014), it limits the ability of the Federal government to simply "turn up the tap" on production. A second channel for affecting production levels is regulation. States, not the Federal government, are the primary regulators of oil and gas extraction activity. Technological change poses a challenge for regulators (Fitzgerald 2018). Only when an interstate or Federal issue is involved does the Federal government have a role (see box 5-3). Although the Obama Administration sought a more expansive Federal regulatory role, the Trump Administration has worked to reduce unnecessary Federal regulations.

Electricity Generation

Electric power is the single largest energy sector in the United States.7 Two major economic forces have affected the sector: the shift away from the traditional regulatory model, which provided electricity through a vertically integrated industry, toward a more market-based system; and technological change and its attendant price effects, which have shifted the underlying economics of alternative generation technologies. Market design has been a central concern for electricity markets, smoothing the transition from regulated, vertically integrated utilities to increasing degrees of wholesale and retail competition. Fabrizio, Rose, and Wolfram (2007) documented the efficiency gains resulting from restructured electricity markets, in which firms are exposed to market forces rather than protected by regulation. The transition has not been seamless, as Borenstein, Bushnell, and Wolak (2002) document in the case of California.
The potential for market power is one of the primary motivations for utility regulation and is a key factor that should be considered in any restructuring. Market incumbents accustomed to capturing inframarginal rents may be disrupted by restructuring, or may find new opportunities. Regional transmission organizations and independent system operators coordinate generation and transmission to satisfy the demands of electric consumers, using a variety of more and less market-oriented structures. The Federal Energy Regulatory Commission (FERC) oversees these grid operators and has considerable discretion in approving rate requests and operational plans. FERC could take a more interventionist role in addressing issues arising on the electricity grid; as an independent regulatory body, it has substantial discretion. Although the regulatory structures are similar, the different physical characteristics of electricity as compared with natural gas help explain the slower buildout of electricity transmission infrastructure (Adamson 2018).

Because electric generation units are long-lived investments with long payback periods, disruptive changes can lead to the premature retirement of units. As a policy issue, this problem stems from concern about the resiliency of the grid to severe weather events, cyber threats, and other sources of interruption to fuel deliveries and ultimately to electricity.

7. This includes utility-scale electric generation and combined heat and power plants.

Box 5-3. The Federal Role in Promoting Domestic Fuels Production: The Case of Alaska

The 1968 discovery of the 25-billion-barrel Prudhoe Bay oil field on Alaska's North Slope remains one of the largest single discoveries in U.S. history. At its 1988 production peak, Alaska was the top-producing U.S. State, with total crude oil production of 2 MMbpd, representing nearly 25 percent of U.S. production. Since 1988, Alaskan production has declined; by 2017 it was less than a quarter of its peak (485,000 barrels per day) and 5.3 percent of total U.S. crude oil production. Two aspects of the rise and fall of Alaskan oil production relate to Federal policy. First, infrastructure is a critical element in realizing the value of large and remote energy reserves like Prudhoe Bay, and Federal cooperation was needed. Second, in states with large shares of Federal land ownership, access to federally owned lands and minerals can play a critical role in promoting domestic production.

Prudhoe Bay, which is on the Alaskan northern plain alongside the Arctic Ocean, is the most remote and inhospitable oil and gas operating environment in the United States (figure 5-ii). It is distant from national and global consumers. Marketing the crude oil required constructing a 4-foot-diameter pipeline 800 miles across the state. The construction of the Trans-Alaska Pipeline System (TAPS) required Congressional approval; legislation was signed into law in 1973, with the first crude oil flowing from Prudhoe Bay through the pipeline in 1977 (AOGHS 2018). Today, 97 percent of Alaska's total oil production comes from the North Slope region and flows through TAPS, and thence by tanker to other destinations. Normal geophysical decline on the North Slope, however, threatens the continued operation of TAPS. As throughput falls toward 500,000 barrels per day—a fraction of the historic peak of 2 MMbpd—corrosion, ice formation, wax deposition, water dropout, and geotechnical concerns threaten operations. Throughput below 350,000 barrels per day is projected to severely reduce the reliability of pipeline operations (EIA 2018a; Alyeska Pipeline Service Company 2011).

Land and mineral ownership play an important role in production declines. Although 61.3 percent of Alaska's land is administered by the Federal government (Argueta, Hanson, and Vincent 2017), the Prudhoe Bay discovery occurred on land owned by the State of Alaska. As the wells in Prudhoe Bay and nearby fields have matured, exploration has stretched along the coastal plain. The State land where Prudhoe Bay is situated is bordered on either side by federally administered land: to the east is the Arctic National Wildlife Refuge (ANWR), and to the west is the National Petroleum Reserve–Alaska (NPR-A) (figure 5-ii). The NPR-A covers 22.1 million acres and currently has limited exploration and production activity. A 2017 study by the U.S. Geological Survey (USGS 2017b) estimated total undiscovered technically recoverable reserves in the NPR-A at 8.73 billion barrels of oil and 24.55 trillion cubic feet of natural gas. This was a major upward revision from the previous 2010 USGS estimate of less than 1 billion barrels, informed in part by increased exploration activity and the beginning phases of production from 2015 through 2017 (USGS 2010).

Figure 5-ii. Northern Alaska, the Arctic National Wildlife Refuge (ANWR), and the Coastal Plain 1002 Area. Sources: U.S. Geological Survey; Energy Information Administration.

The ANWR is a large tract of protected land, though only 8 percent of the refuge is of interest for oil and gas activity. This area on the coastal plain, known as the "1002 area," covers 1.5 million acres of the 19.64-million-acre refuge. Because the area was previously protected, only one exploratory well has been drilled in the 1002 area; it was completed in 1986 (EIA 2018a). Despite the lack of exploration and its small size relative to the NPR-A, the most recent USGS assessment of the 1002 area, published in 1998, estimated mean technically recoverable reserves of 7.7 billion barrels (USGS 1998).

Federal policy in 2018 prioritized expanded production on federally administered lands on Alaska's North Slope, and 2018 was a year of breakthrough success in this long-running effort (Hahn and Passell 2010). In December 2017, the Bureau of Land Management offered the largest-ever lease sale in the NPR-A, with 900 tracts covering a total of 10.3 million acres available for bid. Previously, there were 189 authorized leases covering 1.3 million acres. The President later signed a law (as part of the Tax Cuts and Jobs Act) requiring that a competitive leasing program be established for oil and gas exploration and production in the 1002 area of ANWR. The EIA has estimated that crude oil production from the 1002 area would begin in 2031, peaking in 2041 at 880,000 barrels per day, with cumulative production in the mean case of 3.4 billion barrels between 2031 and 2050. Under the EIA's ANWR high-resource case, total crude oil production could greatly increase and approach the 1988 peak (figure 5-iii). Policy to expand production on other federally administered lands, including the NPR-A, could further increase production forecasts.

Figure 5-iii. Alaska Crude Oil Production and Arctic National Wildlife Refuge Production Forecasts, 1973–2050. Source: Energy Information Administration. Note: Values for 2018 through 2050 are forecasts from the EIA's 2018 Annual Energy Outlook.

There is some evidence that fuel supply deficiencies lead to electricity outages. In 2017, the EIA reported 94 major disturbances or unusual occurrences in the electricity supply system, affecting a total of 17.1 gigawatts of capacity (EIA 2018e). Fuel supply deficiency accounted for 6 percent of these events and 1.2 percent of the total lost capacity (EIA 2018g). The low incidence and relatively small impact are testaments to the overall reliability of the national grid.
However, more focused studies of particular regions have found evidence of substantial vulnerabilities (NEISO 2018; Balash et al. 2018; PJM Interconnection 2017). In 2018, utility-scale electricity generation was dominated by roughly equal amounts of natural gas and coal, each at about 30 percent of the total, followed by nuclear (20 percent) and renewables including hydroelectric (17 percent) (EIA 2018g). This is a substantial change in the generation mix from the preceding decade: between 2000 and 2009, coal on average made up 49 percent of electricity generated, while natural gas made up 19 percent and all renewable energy accounted for less than 8 percent (EIA 2018g).

One challenge to the traditional system is the emergence of utility-scale renewable generation that operates at or near zero marginal cost. These sources are generally nondispatchable, so they enter the generation mix first, at zero cost. Renewable sources are intermittent, so generation can fluctuate for uncontrollable reasons, such as variation in wind speeds for wind farms or in cloud cover for solar generation. The reliability of the grid therefore depends on the ability of other generation units to smooth out this intermittency, or to “firm” the renewable generation into a reliable stream of power. Nuclear and large-scale coal generation units are not well suited to provide this firming service, which sometimes attracts a price premium. Natural gas–fired units, particularly open-cycle turbines, are especially well suited for the task. The interaction of renewable capacity and natural gas generation that can firm renewables has been causally linked to reductions in coal-fired generation (Fell and Kaffine 2018). Operating costs are separable into fuel costs and the operations and maintenance costs that are incurred to keep a plant available.
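The merit-order logic described above, in which zero-marginal-cost renewables enter the mix first and the last unit dispatched sets the market-clearing price, can be sketched in a few lines. The unit names, costs, and capacities below are hypothetical illustrations, not data from this Report:

```python
# Minimal merit-order dispatch sketch: units are dispatched in order of
# marginal cost until demand is met; the last (marginal) unit dispatched
# sets the market-clearing price. All numbers are hypothetical.
def dispatch(units, demand_mw):
    """units: list of (name, marginal_cost, capacity_mw).
    Returns (list of (name, mw_dispatched), clearing_price)."""
    dispatched, price = [], 0.0
    for name, cost, cap in sorted(units, key=lambda u: u[1]):
        if demand_mw <= 0:
            break
        take = min(cap, demand_mw)
        dispatched.append((name, take))
        price = cost  # marginal unit sets the price
        demand_mw -= take
    return dispatched, price

units = [
    ("wind",        0.0, 300),  # near-zero marginal cost, enters first
    ("nuclear",    12.0, 400),
    ("coal",       25.0, 500),
    ("gas_peaker", 45.0, 300),
]
mix, price = dispatch(units, demand_mw=900)
print(mix, price)
```

At 900 MW of demand the peaker is never called, and coal is the marginal unit setting the price; adding more zero-cost wind capacity would push coal down the stack and lower the clearing price, which is the displacement mechanism the text describes.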
Some plants have higher costs than others; from an economic perspective, operating the lowest-cost plants provides the greatest value, all else being equal. The competitiveness of natural gas and renewable generation, especially in restructured electric markets, indicates the importance of low operating costs. Nuclear plants have the lowest mean total operating costs of any generation technology except hydroelectric. It is worth noting that the existing nuclear reactors have been online for decades, and the substantial fixed costs required to build the units have already been amortized. The recent experience with cost overruns for new-build nuclear units underscores the importance of fixed costs to the bottom line of plant operators. The revenues required to earn an economic profit may be higher than the figures listed here because of sunk costs, though these are likely to vary on a plant (or even unit) basis. Mean operating costs vary across types of generation units. If prices moved in lockstep with these varying costs, margins would remain the same. However, in part because of varying market structures and generation portfolios, regional wholesale and retail prices vary. In conjunction with cost differences, this price variation leads to differences in operating margins. Generation costs and operation and maintenance costs are not the only relevant costs. Different generation technologies create varying amounts of emissions and waste. Coal generation emits relatively more air pollutants than other fossil fuels and creates a second by-product in the form of ash, which requires special handling for disposal. In contrast, nuclear generation is emission-free, though it raises its own particular long-lived issue in the form of nuclear waste, which also requires special disposal. Natural gas falls somewhere in between, with lesser amounts of harmful (and greenhouse) emissions than coal.
A program accounting for the economic value of emissions could provide a boost to nuclear generation, depending on the value placed on emissions. For example, in selected markets, nuclear units may be eligible for zero-emission credits that supplement revenue from wholesale electricity sales. Market design and efforts to dictate the dispatch order of plants must be carefully considered to avoid unintended consequences. Holding constant the stock of generation units, the likely effect of dispatching high-cost units more frequently is to reduce the cost of the marginal megawatt and thus the market-clearing wholesale price in competitive generation markets. This means that the cost of keeping high-cost units running can be underestimated, because the gap between market-determined prices and operating costs will widen as a result of the policy. Using two-part tariffs or other mechanisms to address these concerns may provide workable solutions and the regional flexibility to accommodate different grid characteristics. The strategic need for an electricity generation reserve to promote the grid’s resilience is a challenge analogous to many other economic problems. The entire portfolio of generation assets in the United States could be eligible to be part of a reserve, with different strategic weights placed on various types of generation—for example, nuclear or coal-fired generation might provide greater resilience benefits and therefore be preferentially selected into the reserve. Generation assets in regions of the country that are more susceptible to natural disasters or other exogenous interruptions might be more valuable to include in the reserve. Translating these strategic needs into unit- or plant-specific weights can be accommodated in a voluntary reserve system, much like conservation programs that elicit landowner participation while minimizing public expenditures.
A similar mechanism could be used to provide the strategic benefits of a generation reserve while minimizing the downstream costs to electricity consumers. In addition to minimizing costs, such a program would retain private initiative to opt into the reserve, with the lowest qualified bids selected, rather than relying on the judgment of bureaucrats to select the most preferred units.

Renewables. Renewable generation technologies like wind and solar have marginal costs that are very close to zero. The fuel costs are zero—at least when the wind is blowing or the sun is shining. However, building windmills and solar farms requires substantial capital expenditures, and the low marginal costs that come with generation may not be enough to offset the relatively high fixed costs. Recognizing this difference, Federal policymakers have worked to provide incentives to increase installations of renewable generation capacity and the penetration of these technologies into the generation mix. The Business Investment Tax Credit (ITC) and the Production Tax Credit (PTC) are the main Federal subsidies targeting renewable electric generation.8

8 Renewables are also targeted by a wide variety of State programs, including Renewable Portfolio Standards and build requirements such as those promulgated in December 2018 by the California Building Standards Commission, which will require new homes to have solar panels.

The ITC was established in 2005 and recently extended through 2022 under the 2018 Bipartisan Budget Act. The ITC provides a credit against tax liability at an initial rate of 30 percent of qualifying investment in the year the infrastructure is installed, with the credit rate falling incrementally to 10 percent in the years after 2022. This effectively front-loads the tax benefits of investments in renewable energy infrastructure and lowers the cost of capital.
All else being equal, this reduces the private fixed costs of investing in renewable generation. Once renewable capacity is installed, the low marginal costs are relatively easy to cover. The PTC, established in 1992 and most recently renewed in 2018 (H.R. 1892, Sec. 40409), operates a bit differently. Rather than reducing the fixed costs associated with construction and installation, the PTC provides an inflation-adjusted, per-megawatt-hour (MWh) tax credit for the generation of renewable energy (wind, solar, closed biomass, and geothermal systems). For qualifying renewable generation infrastructure (facilities not claiming the ITC) constructed before 2018, the PTC provides a payment during a facility’s first 10 years of service. Since January 1, 2018, only new wind facilities have qualified for the PTC; in 2020, the PTC is slated to be phased out entirely. Because of the inflation adjustment, the nominal value of the PTC has grown over time. For facilities beginning construction in 2017, the PTC was $23 per MWh. Qualified wind-based generation was given a three-year phasedown period, in which the generation credit is reduced by 20, 40, and finally 60 percent for facilities commencing construction in 2017, 2018, and 2019, respectively. The EIA (2018b) presents amortized values of Federal renewable energy subsidies in its annual projections of levelized costs of electricity for new generation resources. The EIA’s most recent report values the amortized tax credit for solar PV at $12.50 ($2017) per MWh for generation resources entering into service in 2022. This is smaller than the contemporary $23.33 subsidy per MWh provided by the PTC for wind facilities commencing construction before 2017, but larger than its current value of $9.48 per MWh for facilities being constructed starting in 2019 (figure 5-18).
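The phasedown schedule described above can be expressed directly. The sketch below holds the credit at its $23.33-per-MWh pre-2017 value, omitting the annual inflation adjustment, so its 2019 figure comes out slightly below the published $9.48:

```python
# PTC wind phasedown sketch. The statutory credit is cut by 20, 40, and
# 60 percent for construction starting in 2017, 2018, and 2019, and is
# slated to phase out entirely in 2020. The base value is held fixed at
# $23.33/MWh (pre-2017 starts), ignoring the annual inflation adjustment.
FULL_CREDIT = 23.33  # dollars per MWh
PHASEDOWN = {2017: 0.20, 2018: 0.40, 2019: 0.60}

def ptc_per_mwh(construction_year):
    if construction_year >= 2020:
        return 0.0  # fully phased out
    return FULL_CREDIT * (1 - PHASEDOWN.get(construction_year, 0.0))

for year in (2016, 2017, 2018, 2019, 2020):
    print(year, round(ptc_per_mwh(year), 2))
```

The gap between the computed 2019 value (about $9.33) and the reported $9.48 reflects the inflation adjustment to the base credit that this sketch leaves out.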
Thanks to very low marginal costs, the increased penetration of renewable generation technologies has helped lower consumer costs at the margin (Cullen 2013; Kaffine, McBee, and Lieskovsky 2013; Novan 2015). However, the displacement of existing nonrenewable generation resources is an important policy question. Bushnell and Novan (2018) focus on western electricity markets and find that the short-term response to additional renewable generation has helped lower average prices. However, the dispatch of intermittent renewable sources creates both higher and lower prices during the day, the net effect of which is to undermine the economic viability of existing baseload generators. Redesigning and rethinking wholesale markets may be needed to accommodate low-cost renewables without sacrificing existing capability.

Figure 5-18. CEA Estimates of Federal Electricity Generation Subsidies by Fuel Type for Fiscal Year 2016 (dollars per megawatt-hour, 2016): Wind, 23.33; Solar PV, 12.24; Coal, 1.04; Nuclear, 0.46. Sources: Energy Information Administration; Internal Revenue Service; CEA calculations. Note: PV = photovoltaic. Subsidy levels from the Investment Tax Credit for solar generation were unavailable for 2016. Estimates are from the EIA’s 2018 Annual Energy Outlook for new generation, deflated to 2016 dollars. These estimates may understate the level of subsidy in 2016 due to the falling price of solar photovoltaic technology over time.

Nuclear power. Since the days of Hubbert, nuclear power has enjoyed the status of a forward-looking technology. Today, the United States’ 99 licensed light-water commercial reactors have an uncertain outlook. Of these reactors, 98 generated some electric power in the fourth quarter of 2018 (Nuclear Regulatory Commission 2018).9 Between 2007 and 2018, 6 utility-scale nuclear power plants ceased operations.
This represented a net decrease of slightly under 600 megawatts, or 0.6 percent of nuclear generation capacity. Between 2019 and 2022, 10 more nuclear generating facilities are scheduled to be shuttered, with a net loss of 9.47 gigawatts, or 9.5 percent of 2017’s year-end capacity. Two new units are scheduled to come online in 2021, adding 2.2 gigawatts of new capacity. Of the 10 plants scheduled to close, 7 are permitted by the Nuclear Regulatory Commission to operate longer—an average of 14 years. Deregulatory actions have increased efficiency and safety across a diverse mix of generation (Davis and Wolfram 2012; Hausman 2014). However, new concerns have been raised among government agencies over the resilience of the U.S. grid to disruption from natural or intentional causes. The vulnerability of nondispatchable generation, and also of dispatchable generation with limited onsite fuel storage, has been cited as a potential reliability concern for the American power system.

9 The Oyster Creek Nuclear Generation Station in Forked River, New Jersey, was shut down in September 2018, but retains an active operating license with the Nuclear Regulatory Commission.

During the Trump Administration, FERC and the Northeastern independent system operators have faced increased scrutiny related to the resilience of the electricity infrastructure, including nuclear facilities. Nuclear power is a reliable source of generation, but reliability itself does not translate into resilience in the event of a disruption. Whereas reliability is measured by the ability to deliver the quantity and quality of power that consumers demand, resilience is the ability of the system to recover from an adverse shock like a weather event or an attack. The transmission and distribution systems of wires are among the most vulnerable parts of the electric grid, as the experience of Puerto Rico since Hurricane Maria illustrates.
Finding the optimal balance between lowest-cost marginal generation and more resilient baseload coverage is not a novel challenge for governing bodies and operators in regions with restructured wholesale markets. Efforts to identify the correct levels of emergency generation, peak capacity, and excess capacity have led regulatory agencies to implement a diverse set of systems to ensure that the grid can handle seasonal or unexpected shocks to demand. The constant baseload output associated with nuclear generation has limited its flexibility in restructured and more competitive markets. Because nuclear generators are price takers, accepting the market rate for the electricity they generate rather than facing the relatively large costs of a step-down or shutdown, nuclear plants face continuing exposure to volatility in electricity prices (Davis and Hausman 2016). This situation has become especially pressing in the wake of falling natural gas prices and the implementation of more efficient combined-cycle technology over time, as the availability of natural gas generation pushes down wholesale electric prices at the margin (Linn, Muehlenbachs, and Wang 2014). Jenkins (2018) tested alternative explanations for the lower prices received by nuclear generators and found that cheap natural gas had by far the largest effect, though renewable penetration and stagnating electricity demand also had statistically significant effects. Lower wholesale electricity prices caused by falling gas prices have undercut margins that nuclear operators might otherwise have realized from falling costs over the past five years (figure 5-19). The Nuclear Energy Institute (NEI 2018) estimates that the real costs of nuclear generation have fallen by $7.85 per MWh (19 percent) over the past five years.
Some of this reduction may be due to market forces pressuring the closure of noneconomic plants; however, this decrease in real costs has outpaced the rate at which real retail electricity prices have fallen over the same period. The NEI estimates that the majority of these savings have come from lower costs of capital, which fell by 40.8 percent between 2012 and 2017, to $6.64 per MWh of generation.

Figure 5-19. Average Total Cost for Investor-Owned Utilities by Fuel Type, 2007–17 (dollars per megawatt-hour; hydroelectric, nuclear, gas turbine and small-scale, and steam coal). Source: Energy Information Administration. Notes: Average expenses are weighted by net generation. The gas turbine and small-scale category includes gas turbine, internal combustion, photovoltaic, and wind plant generation. Hydroelectric consists of both conventional and pumped storage technologies.

The United States’ nuclear reactors are aging, and construction of new ones has been very slow. Since 2000, only 1.2 gigawatts of nuclear generation capacity have been added, out of a total of 494 gigawatts of new capacity (EIA 2018d).10 Two nuclear units are currently under construction, but the financial struggles of these plants underscore the challenges for the civil nuclear sector. In 2013, Georgia Power—a subsidiary of Southern Company—began construction of two 1.1-gigawatt Westinghouse AP1000 reactors at its Vogtle site. Funding for these new units at the Vogtle plant is backed by two unconditional loan guarantees from the U.S. Department of Energy totaling $8.3 billion. Construction of the units is behind schedule and over budget. Moreover, construction of two similar reactors at the Summer site in South Carolina was abandoned in 2017 because of escalating costs. Construction of the new units slowed when the designer of the reactors, Westinghouse Nuclear, filed for bankruptcy in March 2017.
Although some work has continued at Vogtle, the Summer project has been abandoned.11 The cost of the units has climbed with the delays; because these are the only new reactors being built in the United States, the realized costs are important for setting expectations for other licensed units that have not yet begun construction. As of November 2018, the costs of completing both new reactors at Vogtle were at least $8.0 billion, with construction expected to be completed by the end of 2022. Such high fixed costs render nuclear uncompetitive without additional sources of revenue. The cost overruns to date on the Summer and Vogtle plants alone add $3.97 per MWh to the levelized costs of electricity from these plants, even under lifetime dispatch factors above 80 percent. The EIA currently projects levelized costs of $92.60 per MWh, leaving little headroom between costs and wholesale prices for these plants, even without including cost overruns.

10 This long-delayed project was initiated in 1973. The reactor was finished in 2015 and was put into service in 2016.

11 The outlook is somewhat improved after the first AP1000 reactor went into commercial operation at the Sanmen facility in China in September 2018 (IAEA 2018), demonstrating that the new reactor design is feasible.

Deregulation

A priority of the Trump Administration has been to reduce unnecessary Federal regulatory burdens. Executive Order 13783 was issued to promote energy independence and economic growth by developing energy resources and reviewing agency actions and regulations (82 FR 16093). Since the beginning of the Administration, over 300 regulatory actions have been taken, many of them reducing regulatory burdens or exempting certain activities and affecting energy production or consumption. A total of 65 regulatory actions affecting the energy sector were completed through the end of fiscal year 2018, with projected present value savings of over $5 billion.
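Present-value figures like the $5 billion above come from discounting a stream of annual savings. A minimal sketch of the mechanics, with a hypothetical annual saving, horizon, and discount rate (these are illustrative inputs, not the actual figures behind the estimate):

```python
# Present value of a stream of annual regulatory cost savings.
# The $500 million annual saving, 7 percent rate, and 20-year horizon
# are hypothetical, chosen only to illustrate the discounting mechanics.
def present_value(annual_saving, rate, years):
    return sum(annual_saving / (1 + rate) ** t for t in range(1, years + 1))

pv = present_value(annual_saving=500e6, rate=0.07, years=20)
print(f"PV of $500M/yr for 20 years at 7%: ${pv / 1e9:.2f} billion")
```

Under these illustrative inputs the present value works out to roughly $5.3 billion, showing how a modest annual saving compounds into a multibillion-dollar present-value figure.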
Two examples relevant to the energy sector are the Waste Prevention Rule for oil and gas production and the Stream Protection Rule, which had an outsized impact on coal operations. In September 2018, the Bureau of Land Management (BLM) rescinded certain requirements of a 2016 rule pertaining to the waste prevention and management of oil and gas resources produced on Federal lands. The new rule reestablished long-standing requirements and eliminated duplicative requirements for oil and gas drilling and extraction operations on Federal and tribal lands. With respect to the flaring of associated natural gas from oil wells, the BLM will defer to State or tribal regulations in determining whether flaring will be royalty-free. In many cases, this will mean waiving the obligation to pay Federal royalties on flared gas. In February 2017, Congress passed and President Trump signed a resolution pursuant to the Congressional Review Act repealing the Stream Protection Rule (81 FR 93066–445), which had taken effect on January 19, 2017. The repeal is estimated to generate an annualized $80 million in cost savings for the surface and underground coal mining industries. Another completed action was the 2017 repeal of the Federal coal leasing program moratorium. Because western coal resources make up 55.5 percent of national production and about 80 percent of western production is from federally owned minerals (GAO 2013; CEA 2016b), the rules for the leasing and production of these minerals can affect the amount of the resource that is commercially available (EIA 2018c). Put in place in January 2016, the moratorium was drafted in tandem with the BLM’s 2016 order to study how to modernize Federal coal leasing. Auctions were suspended until the analysis was completed, which was expected to be in 2019.
The repeal meant that the Federal leasing program for coal reverted to the preexisting rules, although the leasing rules have undergone subsequent changes under the Trump Administration (82 FR 36934). This action is estimated to have made available for extraction an additional 17 billion short tons of federally owned coal reserves in the Powder River Basin alone (USGS 2017a; BLM 2017). The Federal government also auctions leases for oil and gas development, both for onshore minerals and in offshore areas. Onshore lease sales in 2018 were another tale of booms and busts. Although a September 11, 2018, auction in Nevada garnered zero bids, one week earlier a sale of 142 parcels in New Mexico brought in a stupendous $972 million in revenue. Bullish expectations for growth in the Permian Basin’s outlook and pipeline capacity continue to drive increased interest in expanding production. Year-to-date sales in 2018 were over twice the previous record level, and nearly three times those made in 2017 (ONRR 2018). Two of the most economically significant deregulatory actions for energy have been proposed but not finalized: repeals of the Clean Power Plan (CPP) and of the Waters of the United States (WOTUS) rule are under way. Both regulations were subject to legal challenges and stays that delayed implementation. Given the pending rulemakings, the expected level of future regulation has been dramatically reduced. The Obama Administration finalized the CPP in October 2015, with the goal of reducing CO2 emissions from existing electric utility generating units.12 The U.S. Environmental Protection Agency (EPA) codified final emission guidelines establishing State-specific CO2 emission performance rates and implementation schedules for generating units. In February 2016, the CPP was stayed by the U.S. Supreme Court at the request of West Virginia and 26 other states, which argued that the rule exceeded the EPA’s authority.
A repeal of the CPP was first proposed in October 2017 by the Trump Administration, following pushback from State governments and industry proponents concerned about costs to consumers and outsized effects on coal-fired generation. The EPA proposed the Affordable Clean Energy rule in August 2018 as a replacement for the CPP. The final regulatory impact analysis is complete, but the rule has yet to be finalized. The Obama Administration finalized the WOTUS rule in 2015, which expanded the interpretation of “navigable waters” under the Clean Water Act; the term was interpreted to include tributaries and bodies of water adjacent to Federal waters, including wetlands, ponds, and lakes, which critics argued was jurisdictional overreach.

12 The Carbon Pollution Emission Guidelines for Existing Stationary Sources: Electric Generating Units (commonly referred to as the Clean Power Plan, CPP) can be found at 40 CFR part 60 subpart UUUU, as promulgated October 23, 2015.

Figure 5-20. Energy Intensity of GDP, 1980–2015 (MBtu per dollar of GDP, 2010; United States, OECD less U.S., and EU-27). Sources: Energy Information Administration; CEA calculations. Note: Thousands of British thermal units (MBtu) per 2010 dollars of GDP at purchasing power parity of dollar-denominated expenditures. The Organization for Economic Cooperation and Development includes Australia, Austria, Belgium, Canada, Chile, the Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Israel, Italy, Japan, South Korea, Latvia, Lithuania, Luxembourg, Mexico, the Netherlands, New Zealand, Norway, Poland, Portugal, Slovakia, Slovenia, Spain, Sweden, Switzerland, Turkey, the United Kingdom, and the United States.
A proposal to formally rescind the WOTUS rule was issued in July 2017, and several public meetings on a new rule proposal took place during the fall of 2017. The executive order urges regulators to interpret “navigable waters” in a manner consistent with Supreme Court Justice Antonin Scalia’s 2006 opinion in Rapanos v. United States; Scalia argued that “navigable waters” should include only waters that are navigable “in fact.”

Environmental Implications

Energy inputs are essential to economic performance, but emissions are an increasing concern as the realities of climate change are confronted around the world. Compared with some other economies, the American economy has a relatively high energy intensity—meaning that more energy is used per $1 in output in the United States than in other countries. In 2015, U.S. energy intensity (measured as 1,000 British thermal units per $1 in output at purchasing power parity) was 5.56, less than half the same measure in 1980 (figure 5-20). However, the U.S. measure is 30 percent higher than the OECD ex-U.S. average and over 44 percent higher than the average of the 27 EU member countries (EIA 2018i). This relatively high dependence on energy for output helps explain why the United States has the largest negative growth effects associated with increasing oil prices in the Group of Seven (Jiménez-Rodríguez and Sánchez 2004).

Figure 5-21. Annual World Carbon Dioxide Emissions, 1990–2017 (CO2 emissions index, 1990 = 100; China, India, World, United States, and Europe and Eurasia). Sources: Department of Energy; Energy Information Administration; United Nations Framework Convention on Climate Change; CEA calculations. Note: CO2 = carbon dioxide. Emissions levels are indexed to 1990 country-level CO2 emissions. Levels for 2015–17 are estimates from the EIA’s 2018 International Energy Outlook.
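Energy intensity comparisons of this kind are simple ratios. The sketch below uses the 2015 U.S. figure from the text and backs out comparison values from the stated 30 and 44 percent gaps, so those values are approximations rather than published EIA figures:

```python
# Energy intensity = primary energy use per unit of GDP (kBtu per 2010
# dollar at purchasing power parity). The U.S. value is from the text;
# the comparison values are backed out from the stated 30 and 44 percent
# gaps, so they are approximations rather than published EIA figures.
us_intensity = 5.56  # kBtu per dollar of GDP, 2015

oecd_ex_us = us_intensity / 1.30  # U.S. is ~30 percent higher
eu27 = us_intensity / 1.44        # U.S. is ~44 percent higher

print(f"OECD less U.S.: ~{oecd_ex_us:.2f} kBtu per dollar")
print(f"EU-27:          ~{eu27:.2f} kBtu per dollar")
```

These back-of-the-envelope values (roughly 4.3 and 3.9 kBtu per dollar) illustrate the scale of the intensity gap that the text describes.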
The continental geography of the United States may be a factor, by requiring more energy in the transportation sector, which is heavily dependent on petroleum. However, over time the energy intensity of U.S. GDP has declined as energy users have sought to be more efficient, and the decreased net petroleum import position is likely to reduce the harm from future crude oil price shocks. A second relevant measure of energy use is the total level of emissions. U.S. growth in CO2 emissions has remained below the average global growth rate since the multilateral ratification of the United Nations Framework Convention on Climate Change (UNFCCC) in 1992. Although the United States was one of the eventual 84 signatories to the extension of the original 1992 UNFCCC, it never ratified the 1997 Kyoto Protocol, which was the first international agreement with binding emission abatement commitments. Under this agreement, the United States would have been obligated to reduce emissions of a number of greenhouse gases to 7 percent below the 1990 level, and to achieve this reduction on average over the years 2008–12 (figure 5-21).13 One concern at the time was that other countries would not be bound by similar standards. U.S. gross CO2 emissions are estimated to have grown at an average annual rate of 0.09 percent between 1990 and 2017, while emissions in China and India are estimated to have grown at average rates more than 50 times as fast (5.7 and 5.0 percent a year, respectively) (EIA 2017). The European countries and Japan committed to 8 and 6 percent emission reductions under Kyoto, respectively, but both failed to meet these goals: emissions during the period 2008–12 exceeded the 1990 reduction benchmarks by 11.1 percent for the EU and 8.8 percent for Japan (EIA 2018i); see box 5-4. Although the U.S. Congress did not ratify the Kyoto Protocol, U.S.
emissions markedly broke their trend after the agreement took effect in 2005. Although U.S. emissions grew by 18.8 percent between 1990 and 2004, emissions in 2016 were down 12.8 percent from 2004 levels. This inflection in U.S. emission trends was concurrent with a similar pattern in the European nations, which shrank their emissions by 16.7 percent between 2004 and 2016 after they had grown by over 20 percent in the period 1990–2004. Many factors affect emissions. Although technological change has been an important driver for the United States, other countries have adopted a policy-based approach. For example, the EU Emissions Trading System has helped participating countries reduce their CO2 emissions by 16.7 percent between 2004, the year before the policy took effect, and 2016. The U.S. reduction of 12.8 percent during the same period was largely achieved without a Federal policy intervention. Other types of emissions reveal a similar story: emissions of major air pollutants (carbon monoxide, particulate matter, sulfur dioxide, nitrogen oxides, and volatile organic compounds) have all declined since 1990. Shapiro and Walker (2018) statistically decompose the declining emissions intensity of U.S. manufacturing, and find support for increasing regulatory stringency rather than compositional shifts in manufacturing as the driver (see boxes 5-4 and 5-5).

13 Gaseous emissions into the atmosphere can cause greenhouse effects by directly absorbing radiation, or by affecting radiative forcing and cloud formation. The Intergovernmental Panel on Climate Change (IPCC) developed the Global Warming Potential (GWP) measure to compare the relative abilities of anthropogenic emissions to trap heat. The GWP measures the equivalent amount of CO2 emissions that would be required to create the amount of radiative forcing caused by the emission of 1 ton of a given gas over a 100-year horizon.
The IPCC’s accounting lists these GWPs for inventoried emissions: methane (CH4) = 25, nitrous oxide (N2O) = 298, hydrofluorocarbons (HFCs) = 124 to 14,800, perfluorocarbons (PFCs) = 7,390 to 12,200, sulfur hexafluoride (SF6) = 22,800, and nitrogen trifluoride (NF3) = 17,200.

Box 5-4. Long-Term Improvements in Environmental Quality

By many measurements, air and water quality in the United States have improved dramatically in the last 30 years, and additional gains continue to be seen. Since 1990, concentrations of sulfur dioxide in the air have fallen by 88 percent, nitrogen dioxide by 56 percent, lead particulates by 80 percent, and carbon monoxide by 77 percent (EPA 2018a, 2018c). Less sulfur and nitrogen in the air has meant less acid rain and healthier lakes, while lower levels of lead and carbon monoxide protect citizens from respiratory illness (Sullivan et al. 2018). Water quality has also improved markedly in other dimensions, with streams and lakes having more dissolved oxygen and less bacteria (Keiser and Shapiro 2018). The improvements stem from various sources, including innovations in the private sector and government policies. An illustrative example is the decline in sulfur dioxide emissions (see figure 5-iv, left axis). The most recent declines have come as abundant natural gas, made available through hydraulic fracturing and horizontal drilling, has encouraged the retirement of coal-fired electricity generation, as described in the section of the text on coal production. The electricity generation sector accounted for more than 70 percent of the total decline in sulfur dioxide emissions from 1990 to 2017 (the last year of available data), with large reductions since the mid-2000s,
Figure 5-iv. Sulfur Dioxide Emissions and Rainwater Acidity, 1990–2017. Sources: Environmental Protection Agency; National Atmospheric Deposition Program; CEA calculations. Note: Average national rainwater pH is calculated as the precipitation-weighted negative logarithm of the concentration of hydrogen ions in rainwater across 157 measurement sites in the United States.
when natural gas production began expanding. The more recent reductions in emissions build on early reductions that occurred following the 1990 Clean Air Act Amendments and an associated Federal cap-and-trade program implemented in 1995. Less sulfur dioxide in the air has also improved water quality. When sulfur dioxide interacts with water and oxygen, it creates sulfuric acid and leads to acid rain, which makes streams and lakes more acidic and less hospitable to fish and other aquatic life. Data on the chemical properties of precipitation across the United States show that acidity has declined since 1990, with large improvements in the last decade. Data collected by the National Atmospheric Deposition Program (NADP 2018) from 157 measurement sites show that the acidity of rainwater fell by 40.3 percent from 1990 to 2017.
Box 5-5. International Environmental Standards and Liquid Fuels Markets: IMO 2020
Under a 2016 agreement by the International Maritime Organization (IMO), an 86 percent reduction in the sulfur content of marine bunker fuel used by 94,000 ocean-going vessels will be imposed on January 1, 2020. Sulfur emissions have been regulated in the United States, primarily in the electricity generation and transportation sectors, due to sulfur dioxide’s adverse effects on public health (Burtraw and Szambelan 2009).
Within 200 miles of U.S. coastlines, in waters known as Emission Control Areas (ECAs), ships must already limit the sulfur content of the fuel they burn to 0.1 percent (see figure 5-v). Similar ECAs exist in coastal waters off Canada and Northern Europe. Although the United States already adheres to a stricter sulfur standard, the IMO’s decision to limit sulfur content in marine fuels to 0.5 percent on the open seas could have consequences for global fuel prices and shipping costs (IEA 2018). Ships can pursue various strategies to comply with the new regulation, including refitting to LNG-fueled engines, installing scrubbers to remove sulfur from exhaust, and switching to lower-sulfur fuels. Given the high capital costs and supply constraints associated with refitting or installing scrubbers, the majority of ships will initially likely comply by switching to a fuel compliant with the 0.5 percent sulfur limit: predominantly either distillate fuels (marine gas oil, or MGO) or lower-sulfur-content residual fuels (ultra-low- or very-low-sulfur fuel oil, ULSFO and VLSFO). It is possible that some vessels will not comply initially, and the penalties are unclear at this point. The percentage of noncomplying ships will affect the amount of total high-sulfur fuel oil (HSFO) that will be displaced by MGO or ULSFO. Global bunker fuel demand is currently estimated to be 4–5 MMbpd. HSFO and MGO constitute most bunker fuel demand, with HSFO consumption estimated to be roughly 3–3.25 MMbpd and MGO consumption estimated to be 0.8–1.25 MMbpd.
Figure 5-v. Global Marine Fuel Sulfur Limits, 2005–21 (the open seas sulfur limit falls from 3.5 percent to 0.5 percent in 2020, an 86 percent reduction; the emission control area limit is 0.1 percent). Source: Energy Information Administration.
LNG and LSFO (including VLSFO and ULSFO) currently constitute trivial portions of bunker fuel consumption. Though global bunker fuel represents about 5 percent of total oil demand, fuel switching by ships in 2020 may cause significant disruptions in specific product markets, with consequent price movements for all users of fuel. Demand shifts to compliant fuels in January 2020 will be met by increasing refinery runs of MGO and ULSFO. Total desulfurization capacity of the global refining fleet is estimated to be 67 MMbpd. IMO 2020 will strain refiners because 1.5–2 MMbpd of HSFO will be displaced. As a result, the IEA (2018) estimates that existing ULSFO capacity will be able to cover only 0.6 MMbpd, or 30–40 percent, of the initial HSFO displacement. Consequently, 60–70 percent of the initial HSFO displacement must be filled by MGO if total bunker fuel demand is to remain unchanged between 2019 and 2020, requiring greater diesel throughput by refiners. The IEA estimates that diesel capacity will increase by 1.0–1.5 MMbpd by 2020, though only 0.6–1.1 MMbpd of this additional capacity will go toward marine bunker fuel rather than other diesel consumers. Under the IEA’s estimates of refining capacity and the supply of MGO and ULSFO, this would leave a shortfall in compliant fuel to fill HSFO displacement ranging from 0.2 MMbpd (under a high-end estimate of additional diesel and ULSFO capacity) to 0.6 MMbpd (under a low-end estimate of additional diesel and ULSFO capacity) (IEA 2018). The shortfall will likely trigger higher prices, though estimates of the price shocks to fuels including diesel, gasoline, and jet fuel vary substantially. To meet increasing MGO and ULSFO demand in the long run, refineries will need to increase their desulfurization capacity. Meeting MGO demand will require reconfiguration to optimize distillate product capacity.
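The shortfall described above is a simple supply-and-demand balance: displaced HSFO minus new compliant-fuel supply. The sketch below reproduces that arithmetic with illustrative point values; the function name is a hypothetical helper, and the specific displacement figures (1.8 and 1.9 MMbpd) are assumptions chosen from within the quoted 1.5–2 MMbpd range, not IEA estimates.

```python
# Back-of-the-envelope IMO 2020 compliant-fuel balance, as described in
# the text: shortfall = HSFO displaced - (new ULSFO supply + additional
# MGO routed to marine bunkers). All figures are in MMbpd.
# NOTE: the point values below are illustrative assumptions drawn from
# the ranges quoted in the text, not official IEA estimates.

def compliant_fuel_shortfall(hsfo_displaced: float,
                             ulsfo_supply: float,
                             mgo_for_bunkers: float) -> float:
    """Return the unmet compliant-fuel demand in MMbpd."""
    return hsfo_displaced - (ulsfo_supply + mgo_for_bunkers)

# Low-end case: less of the new diesel capacity reaches marine bunkers.
low_end = compliant_fuel_shortfall(hsfo_displaced=1.8,
                                   ulsfo_supply=0.6,
                                   mgo_for_bunkers=0.6)

# High-end case: more of the new diesel capacity reaches marine bunkers.
high_end = compliant_fuel_shortfall(hsfo_displaced=1.9,
                                    ulsfo_supply=0.6,
                                    mgo_for_bunkers=1.1)

print(round(low_end, 2), round(high_end, 2))  # 0.6 and 0.2 MMbpd
```

With these inputs the two cases bracket the 0.2–0.6 MMbpd range cited in the text; any combination inside the quoted ranges produces a shortfall of similar magnitude.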
Meeting ULSFO demand will require upgrading refineries with cokers, hydrocrackers, hydrotreaters, and sulfur reduction units (Imsirovic and Prior 2018). Although the United States, followed by the Middle East, Russia, and China, is projected to provide most of the incremental diesel production in 2020, ULSFO production will be driven by complex refineries. Eight of the 12 most complex refineries globally, as measured by the Nelson Index, are in the United States (Bahndari et al. 2018). The U.S. refining industry is well positioned to benefit from increased global demand for both MGO and ULSFO in 2020. However, U.S. fuel consumers may pay higher prices in the medium term as a result.
Conclusion
America’s energy sector has bright prospects thanks to technological change and abundant resources that are already delivering record-breaking production. Improving technology has helped U.S. fossil fuel production, led by oil and natural gas, defy projections and reach an all-time high in 2018. Investments in technology have relied on an appetite for risk-taking on the part of extraction firms and mineral owners. Successful innovation has expanded the U.S. resource base and now offers the prospect of decades of continued production. Lower expectations of the regulatory burden on extraction activities have also helped stimulate production, though the empirical magnitude of this effect has not been estimated. Domestic production will help provide energy resources to the U.S. economy that should bolster growth. U.S. production has expanded so much that both domestic consumption and exports have increased. Natural gas consumption continues to hit all-time highs, and natural gas is increasingly penetrating electric power generation. This penetration has disrupted legacy baseload generation, including nuclear and coal.
As grid operators wrestle with how to increase resilience and ensure continued reliability, they will strike a balance between legacy baseload generation and newer sources such as natural gas and renewables, and this balance may differ regionally. Expanded production also yields a dividend in America’s foreign trade and its interactions with partners and allies. Growing exports of crude oil, refined petroleum products, natural gas, and coal are all evidence of greater linkages. For the first time since 1957, the United States is a net exporter of natural gas. The shrinking level of U.S. net imports of petroleum provides indirect benefits through macroeconomic channels by reducing sensitivity to oil price shocks. If the United States becomes an annual net exporter of petroleum, higher oil prices would, on average, help the U.S. economy. In this case, the net gains for producers, and for their private partners that own mineral deposits, would outweigh the higher costs for consumers. Such a change would have a number of important policy implications. Policies focused on reducing regulatory hurdles and eliminating distorting subsidies and preferences will provide the greatest gains in cost-effectiveness and efficiency. This is especially true in electricity markets, where a dramatic increase in renewable generation capacity has threatened traditional generation assets. The restructuring of electricity markets is a deregulatory action if carried out effectively; future restructuring will need to account for renewables and to be more responsive to consumer demand, given that dynamic pricing and other strategies offer substantial efficiency gains.
Chapter 6 Ensuring a Balanced Financial Regulatory Landscape
Although it has been more than a decade since the financial crisis of 2008, its consequences continue to be felt. It revealed the financial sector’s vulnerability to instability.
And it also exposed shortcomings in the government’s support for financial institutions that exacerbated the crisis. This experience vividly demonstrated the enormous consequences that can result from systemic financial crises if they are not properly addressed, and it revealed the need for measured reforms that could strengthen the financial system without imposing regulatory burdens that do little to enhance financial stability. Unfortunately, the reforms spelled out in the 2010 Dodd-Frank Wall Street Reform and Consumer Protection Act fell far short of these standards. In a rush to respond forcefully to the financial crisis, the Dodd-Frank Act became law in 2010 without sufficient study of the factors that led to the crisis or of the costs and benefits of its provisions. Too many of Dodd-Frank’s provisions were redundant, unnecessarily complex, and overreaching in their application. As we argue in this chapter, the results of this flawed approach to regulatory reform were an increased regulatory burden and heightened uncertainty. We believe this situation contributed to the slowest pace of economic growth in any U.S. expansion since 1950. From its start, the Trump Administration has maintained a focus on creating and implementing a more measured approach to financial regulation that can preserve stability while addressing the shortcomings of the Dodd-Frank Act. Two weeks after taking office, President Trump issued an Executive Order outlining seven “Core Principles for Regulating the United States Financial System.” This Executive Order also directed the U.S. Department of the Treasury to determine the extent to which current laws, regulations, and other policies promote, or inhibit conformance to, these Core Principles. Thus far, the Treasury has released four reports on the state of regulation that have resulted in more than 300 specific policy recommendations.
In addition, the Treasury has released reports dealing with the operation of the Financial Stability Oversight Council and the Orderly Liquidation Authority, the resolution facility created by the Dodd-Frank Act. Action has quickly followed. On May 24, 2018, the President signed into law one of the most important pieces of deregulation of his Administration to date: the Economic Growth, Regulatory Relief, and Consumer Protection Act, also known as S.2155. As this chapter explains, this law reduces the regulatory burden in a number of ways, but without affecting the safety and soundness of the financial system. This chapter begins with a summary of some of the events that led up to, and marked the culmination of, the financial crisis of 2008. These events epitomize some of the policies that needed to be addressed in the wake of the crisis. The second section describes the 2010 Dodd-Frank Act and how it fell short in a number of ways in restoring the full capacity of the U.S. financial industry. The third section outlines this Administration’s approach to financial reform, which directly addresses the problem of systemic risk without undermining the banking industry’s ability to support the economy and contribute to the prosperity of the American people.
The Causes and Consequences of the 2008 Systemic Crisis
The sequence of events that led up to the 2008 financial crisis and accounts of how the crisis unfolded have been explored in great detail elsewhere (e.g., Financial Crisis Inquiry Commission 2011; FDIC 2017). Although many policies and practices exacerbated the crisis (government policies that focused on increasing homeownership at any cost, credit-rating agencies falling asleep at the switch, weak underwriting standards, risky mortgage structures, and a misplaced faith that the housing market would always go up, to name a few), this chapter focuses on examples of crucial regulatory failures that led to the crisis.
Reinhart and Rogoff (2009) define a systemic banking crisis as the result of either (1) bank runs that lead to bank failures, or (2) the failure of one or more important institutions that results in a string of additional failures. Systemic financial crises have been shown to render a nation’s banking system unable to carry out its fundamental role in the economy (Reinhart and Rogoff 2009). Because banks are critically important agents of the monetary system, systemic crises can have very large, adverse effects on real economic activity, and on real people. In recent decades, banking activities have increasingly migrated to nonbank financial institutions, such as money market funds, hedge funds, and a variety of other investment vehicles. To the extent that these nonbank institutions fund themselves with short-term liabilities, they are also subject to runs that threaten the financial system’s stability.
The Boom/Bust Cycle in Residential Real Estate
As we discuss later in this chapter, the historic rise in U.S. home prices between the mid-1990s and the mid-2000s and the historic decline in home prices that ensued constituted a sequence of events that resulted in the financial crisis of 2008. But these were by no means exogenous events that arose outside the financial system. Instead, the rise in home prices was fueled by an ample supply of mortgage credit at favorable rates and, starting in 2004, an unprecedented relaxation of mortgage lending standards. While home prices were rising, virtually every group involved in the financial system was reaping short-term benefits. Mortgage lenders originated large volumes of loans. Homeowners saw increases in the value of their homes. Home builders saw record sales. Homebuyers were able to obtain credit on relaxed terms, with a minimum of due diligence. Housing investors were able to finance multiple homes at once, and mortgage investors earned high yields in what was otherwise a low-yield environment.
This self-reinforcing cycle of optimism only lasted as long as home prices continued to rise. At the national level, home prices more than doubled between 1996 and 2006. In a number of large coastal markets, home prices increased even faster during this period, growing by an average of 207 percent among the six coastal cities included in the S&P CoreLogic Case-Shiller 10-City Composite Home Price Index (figures 6-1 and 6-2). During 2003, U.S. first-lien mortgage originations totaled $3.7 trillion, of which the vast majority ($3.3 trillion) were prime mortgages, government mortgages, or jumbo mortgages.1 These totals remain all-time highs for mortgage originations in these categories. Mortgage refinancings hit $2.8 trillion in 2003, or 76 percent of total mortgage originations, both of which were also historic highs. But as mortgage interest rates rose in 2004, originations of prime mortgages fell by more than half. In that same year, the mortgage lending business abruptly shifted to riskier subprime and Alt-A mortgages.
1. Jumbo mortgages are those that are generally made to prime borrowers, but exceed the conforming size limit of the government-sponsored enterprises and must be privately financed.
Figure 6-1. S&P CoreLogic Case-Shiller Home Price Index, National, 1996–2011. Source: Standard & Poor’s. Note: Shading denotes a recession.
Figure 6-2. Increase in Home Prices by City, 1996–2006 (Los Angeles, San Francisco, San Diego, Miami, Washington, Phoenix, New York, Tampa, Las Vegas, Boston, United States). Sources: Standard & Poor’s; CEA calculations. Note: Data represent the increase in home prices between December 1996 and December 2006.
Between 2004 and 2006, more than $2.7 trillion in subprime and Alt-A mortgages were originated, three times the dollar amount originated in the previous three years. Many of these would eventually be backed by the U.S.
government and by taxpayers, who were often on the hook for losses in these portfolios (see box 6-1). When credit standards were lowered, the market became hotter, and home prices rose even faster. Home prices had been growing faster than disposable personal incomes since 1999, but their growth accelerated to double digits in 2004 and peaked at an annual rate of more than 14 percent in 2005. Despite the risks inherent in subprime, Alt-A, and nontraditional mortgage loans, these mortgages performed very well as long as home prices continued to rise. In 2006, subprime mortgages past due by 90 days or more made up just 3.1 percent of total balances. Average U.S. home prices peaked in February 2007. During the next five years, they would decline, on net, by 26 percent. Price declines were even more pronounced in cities where nontraditional “affordability” loans had increased the most (again, some of which were on the books of the government-sponsored enterprises, Fannie Mae and Freddie Mac) and where home prices had risen the fastest before 2007. To compete for its lost “market share,” the Federal Housing Administration of the U.S. Department of Housing and Urban Development lowered its down payment requirements and relaxed its underwriting standards. Just as all parties involved appeared to prosper in the self-reinforcing cycle of the housing boom, virtually all parties, including taxpayers, would be adversely affected by the self-reinforcing housing bust that started in 2007. By one measure, the total value of U.S. home equity declined by more than half between 2006 and 2009, trimming total household net worth by $6.3 trillion. Because subprime borrowers could not repay when their loans reset, and could not qualify to refinance when the value of their homes declined, subprime mortgage performance deteriorated sharply. By 2009, the share of subprime mortgages past due by 90 days or more had quadrupled, to 13.6 percent.
The annual number of mortgage foreclosures nearly tripled, on average, from 831,000 between 2004 and 2006 to 2.4 million between 2008 and 2011. Though not all these foreclosure proceedings would result in the repossession of a home, those that did introduced deadweight costs of up to 20 percent of the value of the property (Capone 1996). Forced sales of repossessed properties played a substantial role in the self-reinforcing cycle that was driving home prices downward (figures 6-3 and 6-4). The ultimate losses to the holders of mortgage credit have been somewhat difficult to estimate. These losses accrued to federally insured banks, to thrifts and credit unions, to the government-sponsored enterprises (GSEs), including Fannie Mae and Freddie Mac, and to holders of private mortgage-backed
Box 6-1. Defining Subprime, Alt-A, and Nontraditional Mortgages
Subprime, Alt-A, and nontraditional mortgages were categories of high-risk loans that included terms or underwriting standards that made them much riskier than prime loans. Subprime mortgages were made to households with limited or impaired credit histories. Most of them came with a relatively affordable “introductory rate” for the first two or three years and imposed heavy penalties on borrowers who chose to refinance during that introductory period. After this introductory period, the interest rate was reset to a much higher level, which the borrower could avoid only by refinancing and paying additional fees. “Alt-A” was the label given to a class of mortgage loans that were generally made to households with stronger credit histories. But they often eased underwriting standards, including the requirement for borrowers to document their incomes. Of the Alt-A mortgages originated in 2006, 83 percent required little or no documentation of borrower income.
Nontraditional mortgages were a large subset of Alt-A loans that allowed borrowers to defer repayment of principal through interest-only, payment-option, and negative-amortization structures. Both before and after the housing crisis, subprime, Alt-A, and nontraditional mortgage loans were considered too risky to make. This judgment was validated by the exceptionally high default rate they incurred during the housing crisis. Among subprime loans made in 2007, 36.6 percent defaulted within 24 months. For Alt-A loans, cumulative defaults for that vintage were 25.1 percent, while for prime loans the default rate was 6.7 percent. Figure 6-i shows the rise in these types of mortgages in the years leading up to the financial crisis.
Figure 6-i. The Rise in Subprime and Alt-A Lending into the Financial Crisis, 2000–2007 (share of total mortgage originations, rising from 9 percent in 2000 to 34 percent in 2007). Source: Inside Mortgage Finance.
securities (MBSs), which largely backed subprime and nontraditional mortgages. It was the private MBSs and the derivatives based on their value, which had been distributed to institutions and investors around the world, that made the toll of mortgage losses especially difficult to estimate. Nonetheless, the total losses on U.S. mortgages and mortgage-related instruments during the crisis have been projected to range into the hundreds of billions of dollars.2
Implicit Government Support That Undermined Market Discipline
Mortgage finance had evolved a great deal in the half century leading up to the crisis. As recently as 1975, depository institutions (banks and savings institutions) held 74 percent of total mortgage debt outstanding. It was in the 1970s that the GSEs began to build a substantial market share in financing mortgage credit. Their share of mortgage loans outstanding hit 10 percent in 1974, 30 percent in 1985, and more than 50 percent in every year between 1994 and 2003.
The growing presence of the GSEs in the mortgage market arose in part from the financial and technological innovations that favored their wholesale approach. A provision of the Tax Reform Act of 1986 defined the real estate mortgage investment conduit as a tax-preferred vehicle for funding mortgages in securitized pools, funded by a wide range of investors. The resulting division of mortgage origination, funding, and servicing has been called the “unbundling” of mortgage finance. But the GSEs’ ultimate competitive advantage came from their close relationship with the Federal government. The GSEs are exempt from State and local taxes and from Federal regulations on the issuance and holding of securities. Investors perceived an implicit Federal guarantee on the MBSs the GSEs issued, and GSE securities enjoy exemptions or other special status under a number of Federal regulations. These implicit guarantees and exemptions resulted in a subsidy that totaled about 40 basis points in the precrisis period and benefited both mortgage borrowers and GSE shareholders (Ligon and Beach 2013).3
2. For example, see reports by the Financial Crisis Inquiry Commission (FCIC 2011, xvi) and the International Monetary Fund (IMF 2008, 50).
3. A Heritage Foundation study cites research by Passmore, Sherlund, and Burgess (2005) and other sources.
Figure 6-3. Decrease in Home Prices by City, 2006–11 (Los Angeles, San Francisco, San Diego, Miami, Washington, Phoenix, New York, Tampa, Las Vegas, Boston, United States). Sources: Standard & Poor’s; CEA calculations. Note: Data represent the decrease in home prices between December 2006 and December 2011.
Figure 6-4. Percentage of U.S. Conventional Subprime Mortgages 90 Days or More Past Due, 2002–16. Source: Mortgage Bankers Association.
Because of the implicit guarantee, the GSEs were able to operate with higher leverage than other financial institutions while still maintaining confidence in the strength of their MBS guarantee. Studies find that in 2007, they operated with leverage significantly greater than that of their commercial bank competitors (Baily, Litan, and Johnson 2008). These factors provided an implicit subsidy to the GSEs that enabled them to grow and that may have encouraged them to take on more risk. Besides expanding their securitization businesses, the GSEs also took advantage of their implicit guarantee and low capital requirements to issue subsidized debt to fund investments in mortgage loans that they retained on their balance sheets. Their combined debt obligations totaled $2.9 trillion at the end of 2007. According to a 2010 report by the International Monetary Fund, “[GSEs] were pivotal in developing key markets for securitized credit and hedging instruments, but their implicit guarantee and social policy mandates [exacerbated] a softening in credit discipline and a buildup of systemic risk” (IMF 2010, 10). Wallison (2011) cites the expansion of the GSEs’ affordable housing goals in the late 1990s and early 2000s as one factor that led the GSEs to lower their lending standards. He also maintains that though it was difficult to estimate year-by-year GSE purchases of subprime and Alt-A loans, these loans made up 37 percent of loans held or securitized by the GSEs as of June 2008.4 However, other data and research suggest that private MBSs had a large role in financing the increase in subprime and Alt-A lending starting in 2004 (Belsky and Richardson 2010). Originations of subprime and Alt-A mortgages during their peak years of 2004–6 totaled $2.7 trillion.
In those same years, issuance of private MBSs backed by subprime and Alt-A loans totaled $2.1 trillion, accounting for 78 percent of originations in dollar terms (figure 6-5). The sources of risk introduced through private MBS mortgage conduits were similar to the sources of risk for the GSEs. They operated with high leverage, which in this case took the form of the small share of the mortgage pools that was backed by the subordinate tranches in a first-loss position. In addition, their portfolios were characterized by imperfect information that created moral hazard, or the incentive to take on risk at the expense of their investors. This imperfect information was in part the product of the overoptimistic credit ratings that were applied to private MBSs by credit-rating agencies. For example, of all the private MBSs rated by Moody’s in 2006 as investment grade (Baa or higher), 76 percent would ultimately be downgraded to junk status. The MBS downgrades by Standard and Poor’s (S&P) and Fitch were of similar magnitude (FCIC 2011). Another factor that amplified the risks was the large number of structured investment vehicles (SIVs) that held private MBSs and funded them with short-term, wholesale, market-based instruments. Gary Gorton (2007) is generally credited with identifying the role of the SIVs in financing subprime mortgages, their funding strategies, and how they exacerbated the financial crisis.
4. Wallison (2011) cites work by Pinto (2010) that estimates the GSEs’ total exposure to subprime and Alt-A loans.
Figure 6-5. Share of U.S. Home Mortgage Debt by Holder, 1955–2017. Sources: Federal Reserve Bank of St. Louis; CEA calculations. Note: GSE = government-sponsored enterprise; ABS = asset-backed security.
Brunnermeier and others (2009) also examined the relationship between asset funding and systemic risk, with a focus on how financial regulations have historically failed to distinguish between short-term and long-term funding sources. With this portfolio structure, the SIVs performed the functions of maturity transformation and credit enhancement that are traditionally carried out by banks. As with banks, this transformation created value and returns, but also proved to be subject to runs during a period of financial distress. During the precrisis years, the SIVs had substantial exposures (about 25 percent) to private MBSs, and an even larger exposure (more than 40 percent) to other financial institutions (FCIC 2011). They held stable valuations and were able to obtain funding through the financial markets as long as home prices continued to rise. These stable valuations were suddenly cast into doubt in the summer of 2007, when home prices began to fall and the subprime and nontraditional mortgages that backed the private MBSs began to default in large numbers. It was then that investors in the repurchase agreements (repos) and commercial paper that funded SIVs became much more reluctant to continue doing so. They required vastly higher “haircuts” on their collateral, or simply stopped investing in SIVs altogether. When investors’ confidence collapsed, the large banks and investment companies that had created the SIVs faced significant liquidity demands themselves, having provided credit and liquidity lines to the SIVs. Though they were not legally obligated to do so, these sponsors frequently stood behind the SIVs they had sponsored, because they were also heavily dependent on repo financing (figure 6-6).
When home prices were rising, and when mortgage defaults were low, this private nonbank financing arrangement was thought by many to distribute U.S. mortgage risk in an optimal way. However, when home prices began to decline, it quickly became clear that the private MBSs that were financing subprime and nontraditional mortgage loans were much riskier than anticipated. Moreover, because private MBSs had come to play a substantial role as collateral for short-term borrowing, their downgrades created a major disruption in the overnight lending market. The “run on repo” that resulted was reminiscent of the destructive bank runs that had been associated with previous systemic crises in the United States and around the world. An Ineffective and Uncoordinated Regulatory Response Regulatory arbitrage that moved risky mortgage lending away from regulated depository institutions and toward private and governments-sponsored conduits played a role in undermining market discipline. But financial regulators also failed to detect and respond to emerging risks in the mortgage markets. At the end of 2007, regulated depository institutions still held $3.1 trillion in mortgage loans. Regulatory authority over mortgage lending by banks and thrifts was divided between four Federal regulators and 50 State regulators. This divided authority tended to undermine these regulators’ ability to prevent or respond to the emerging crisis. For example, State measures intended to reduce predatory mortgage lending during the precrisis years were overridden by the Office of Thrift Supervision and the Office of the Comptroller of the Currency (OCC), which successfully claimed that their authority preempted that of the State regulators. Even before Dodd-Frank imposed a flurry of new postcrisis regulations, regulators already had a number of authorities that could have addressed emerging risks in mortgage lending before the crisis. 
The 1968 Truth in Lending Act gave the Federal Reserve the authority to establish rules governing mortgage lending that would apply to any type of lender. Although the Fed did implement this authority in its 1969 Regulation Z, the rule’s enforcement was left to a multiplicity of Federal and State regulators (FCIC 2011). The 1994 Home Ownership and Equity Protection Act gave the Federal Reserve additional powers to regulate abusive and predatory lending practices that especially affected low-income borrowers. This was perhaps the farthest-reaching Federal authority to address emerging risks in mortgage lending. However, this power was not exercised until 2008, when the housing crisis was already well under way (Lincoln 2008).

[Figure 6-6. Gross Repurchase Agreement Funding to Banks and Broker-Dealers, 1990–2010. Dollars (trillions). Source: Federal Reserve. Note: Data are as of the December 9, 2010, publication; shading denotes a recession.]

Regulatory capital standards that were in place before the crisis proved to be insufficient to preserve the financial viability of a number of large, complex banks during the crisis. Moreover, the risk-weighting approach of these capital standards actually created incentives to take on more risk. The Basel I standards put in place in 1992 turned out to promote bank holdings of MBSs as opposed to holding whole mortgage loans. Under these standards, pass-through MBSs issued by the GSEs were given a low 20 percent risk weight. A 2001 amendment tied these risk weightings in part to agency credit ratings, which also generally resulted in a low risk weight for GSE obligations. These low risk weights permitted the holders of GSE bonds to hold less capital than if they had actually held the underlying mortgage loans, which had risk weights of 50 to 100 percent.
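The incentive created by these risk weights follows from simple Basel arithmetic: required risk-based capital per dollar of assets equals the base capital ratio times the asset’s risk weight. A minimal sketch, in which the 8 percent base ratio is the standard Basel total risk-based capital requirement, supplied here as background rather than drawn from this Report:

```python
BASE_CAPITAL_RATIO = 0.08  # standard Basel total risk-based capital requirement

def capital_charge(risk_weight: float) -> float:
    """Required capital per dollar of assets at a given Basel risk weight."""
    return BASE_CAPITAL_RATIO * risk_weight

# Risk weights discussed in the text: GSE pass-through MBSs at 20 percent,
# whole mortgage loans at 50 to 100 percent.
print(f"GSE MBS (20% weight):          {capital_charge(0.20):.1%}")
print(f"Whole mortgage (50% weight):   {capital_charge(0.50):.1%}")
print(f"Whole mortgage (100% weight):  {capital_charge(1.00):.1%}")
```

Holding low-risk-weight securities rather than the underlying whole loans thus cut the capital requirement from 4 percent (or more) of assets to 1.6 percent, the disparity discussed in the text that follows.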
Moreover, the GSEs’ 20 percent MBS risk weight also applied to private MBSs after 2001, provided that they received high ratings from the credit-rating agencies. As discussed above, the structured approach to funding private MBSs generally enabled their senior tranches to receive an AAA rating, qualifying them for the 20 percent risk weight. Wallison (2011) estimates that this disparity in risk weighting resulted in a reduction in risk-based capital requirements from 4 percent for banks holding whole mortgages to just 1.6 percent for banks holding MBSs. Although holding securities as opposed to loans could enhance the liquidity of bank portfolios, their liquidity ultimately depends on the quality of these securities. As discussed above, it was the sudden illiquidity of private MBSs and the externalities this introduced in the financial markets that exacerbated the financial crisis. Like the vast majority of their private sector counterparts, most regulatory economists did not realize the risks that were building in housing markets and mortgage finance until it was too late. One factor may have been the sudden change in mortgage lending practices that occurred in 2004. The introduction of large volumes of high-risk mortgages accelerated the rate of increase in home prices, making the housing market an apparent source of strength in the economy. But to the extent that the price increases were the product of risky mortgage lending, they could not be sustained. When home prices leveled off, and then began to fall in 2006, defaults and foreclosures rose sharply. The resulting instability in the housing and mortgage markets would eventually snowball into what became a systemic financial crisis.

The Consequences of the Financial Crisis

The financial crisis of 2008 was an explosion of the risks that had been building in mortgage finance and the financial system as a whole.
Rescues of large banks and nonbank financial companies during previous crises had helped to create the perception that the largest banking organizations would be deemed “Too Big to Fail,” and that their investors would be protected from loss in a crisis. These expectations were shattered on September 15, 2008, when Lehman Brothers, a $639 billion investment bank, did not receive such assistance and was forced to declare Chapter 11 bankruptcy. Lehman Brothers’ bankruptcy meant that many of its counterparties around the world would not be made whole, and would find their claims tied up for years, leading to large losses. Fannie Mae and Freddie Mac were placed in a government conservatorship on September 6, 2008. In combination, Freddie and Fannie held or guaranteed $5.2 trillion in mortgage debt, or about 45 percent of U.S. households’ total mortgage obligations. The GSEs continue to operate in conservatorship more than 10 years after the crisis. At the height of the crisis, three extraordinary programs of government support were implemented to restore liquidity to financial markets, solidify the capital base and the banking system’s funding, and enable financial institutions and markets to make credit available to finance an economic recovery. First, the Federal Reserve expanded greatly on its traditional lender-of-last-resort function by introducing a series of special liquidity facilities that made loans available for longer terms, to a wider range of institutions, and on a wider range of collateral than it had ever done through the discount window. Second, Congress initially authorized the sum of $700 billion for the Troubled Assets Relief Program—known as TARP—to assist financial institutions in dealing with the large volumes of impaired assets on their balance sheets.
And third, in October 2008 the FDIC instituted its Temporary Liquidity Guarantee Program to help stabilize the banking industry’s funding base. These three assistance programs represented an unprecedented expansion of government support for the banking system. In total, the financial commitments behind these programs have been estimated at about $14 trillion, although the programs’ net cost was a small fraction of this amount. The programs can be described as successful in addressing the immediate dangers posed by the crisis. But over the longer term, they set new precedents for government support that undermine market discipline in banking. Moreover, they violated the principle that financial institutions themselves—and not taxpayers—should be responsible for their losses. The shockwaves of the 2008 financial crisis caused enormous harm to real economic activity. From peak to trough, real GDP fell by more than 4 percent, making this the deepest U.S. recession since the 1930s. In the six months after September 2008, the industrial production index for durable materials fell by 21 percent—its largest decline in more than 50 years. The monthly unemployment rate peaked near 10 percent in October 2009, the highest rate since June 1983. Around the time of the crisis, the United States experienced the longest stretch of unemployment above 8 percent, over three years, since the Great Depression. From peak to trough, nearly 8.7 million nonfarm workers lost their jobs (figure 6-7).

[Figure 6-7. National and Long-Term Unemployment Rates, 2000–2018. Percent; series: unemployment rate and long-term unemployment rate. Sources: Bureau of Labor Statistics; calculations. Note: Shading denotes a recession.]

The net economic effects of the crisis have generally been expressed as the shortfall between potential U.S. GDP and actual GDP in the wake of the crisis.
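This shortfall measure is simply the cumulative gap between the potential and actual GDP paths. A minimal sketch with purely hypothetical numbers, not estimates from the studies cited in this Report:

```python
# Illustrative only: hypothetical potential and actual GDP paths over five
# years (trillions of dollars); these are not data from this Report.
potential = [15.0, 15.3, 15.6, 15.9, 16.2]
actual    = [14.2, 14.4, 14.7, 15.1, 15.5]

# Forgone output is the sum of the year-by-year gaps between the two paths.
forgone = sum(p - a for p, a in zip(potential, actual))
print(f"Cumulative output gap: ${forgone:.1f} trillion")
```

Even a gap of a few percentage points of GDP, sustained over several years, compounds into trillions of dollars of forgone activity, which is why the studies cited next arrive at such large totals.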
Studies that have projected the long-term effect of the crisis on GDP generally arrive at estimates of forgone economic activity that exceed $10 trillion (GAO 2013; Luttrell, Atkinson, and Rosenblum 2013). The enormous scale of these effects has become an important consideration in evaluating the impact of the Dodd-Frank Act, which was passed as a response to the crisis.

The Consequences of the Dodd-Frank Act

After the 2008 financial crisis, there was a push to reform the regulation of the U.S. financial system. The large economic dislocations resulting from the crisis were still obvious, as were the potential benefits of policies that could reduce the likelihood and cost of a future systemic crisis. However, in the rush to implement reforms, the costs and benefits of various regulatory reforms were not properly analyzed and weighed. In 2009, the Financial Crisis Inquiry Commission (FCIC) was created to examine the causes of the crisis. Its final report was released in January 2011—six months after sweeping reforms were made under the Dodd-Frank Act. The failure to construct an appropriate framework for considering costs and benefits before passing legislation led to reforms that were often overreaching, misguided, and inefficient. This failure to analyze—fully and properly—the likely effects of new regulatory policies made the costs of the crisis greater than they needed to be. Researchers have found evidence of a number of regulatory problems that have emerged in the postcrisis period, including regulatory arbitrage, rising compliance costs, and financial market illiquidity.5

Addressing Systemic Risk

The Dodd-Frank Act aimed to address key factors that had undermined market discipline and helped trigger the systemic crisis. It created new processes in an attempt to identify and respond to emerging threats to financial stability.
Title I of the act created the Financial Stability Oversight Council, chaired by the Secretary of the Treasury and including as members the heads of eight financial regulatory agencies, an independent insurance expert, and five nonvoting members. The council was given detailed criteria for determining whether a company would be subject to Federal Reserve supervision and the application of enhanced prudential standards. Separately, Dodd-Frank also imposed enhanced prudential standards on all bank holding companies with assets of $50 billion or more. Title I of Dodd-Frank also required every banking company with at least $50 billion in assets and every designated nonbank financial company to hold more capital and liquidity to ensure their safety and soundness, and to file an annual resolution plan that could be used as a guide for their rapid and orderly resolution through bankruptcy (figures 6-8 and 6-9). Title II of Dodd-Frank established an orderly liquidation process to quickly and efficiently liquidate or otherwise resolve a large, complex financial institution that is close to failing. It established a two-part test, under which the Secretary of the Treasury establishes that the institution is in default or is in danger of default, and then evaluates the systemic risk that would be involved with such a default. Title II requires that bankruptcy first be considered as a means to resolve the failed institution. If bankruptcy is deemed unable to bring about an orderly resolution, Title II provides the FDIC with receivership powers that apply to bank holding companies or nonbank financial companies. It establishes a fixed order of claims that helps to ensure that the executives, directors, and shareholders of the institution stand last in line to receive payment.

5. See Choi, Holcomb, and Morgan (2018); Peirce, Robinson, and Stratmann (2014); and Roberts, Sarkar, and Shachar (2018).
The overarching goal of the Dodd-Frank reforms—which sought to end Too Big to Fail, strengthen capital and liquidity requirements, and restore market discipline—was to prevent a future bailout by U.S. taxpayers. However, the generally one-size-fits-all approach that Dodd-Frank took in pursuing these goals turned out to be unnecessarily costly and, in some cases, counterproductive. Moreover, an overreliance on regulatory discipline as opposed to market discipline has turned out to rely too much on the judgment of bank regulators, which are not infallible (Viscusi and Gayer 2016).

Dodd-Frank’s Ill-Considered Approach

The economist Paul Romer once said that a crisis is a terrible thing to waste, and the Obama Administration paraphrased him in the wake of the 2008 financial crisis, placing government deeply into the markets, especially the financial ones. The complex series of events that led to the crisis called for careful study before reforms were rushed out the door. Legislation passed in May 2009 created the FCIC to examine the causes of the crisis. The FCIC’s report and conclusions, released in January 2011, did not receive bipartisan support, but they did provide first-hand accounts from a wide range of bankers, regulators, and analysts that could have been considered as reforms were being planned. Unfortunately, Congress passed, and President Obama signed, the Dodd-Frank Act even before the FCIC released its report. Congress rushed out this 849-page piece of legislation, which mandated 390 new rules and regulations. Dodd-Frank would stand out from previous financial legislation in the degree to which it mandates how American businesses can and cannot conduct the financial transactions that are vital to both their well-being and that of the U.S. economy. Even in the dense world of Federal regulation, Dodd-Frank stands out in its size, complexity, and redundancy. It addressed regulatory policy at a number of agencies; created a new regulatory body, the Consumer Financial Protection Bureau (CFPB); and merged another agency, the Office of Thrift Supervision, out of existence. It required the Federal financial regulatory agencies to create 390 new rules, of which 280 have been finalized, and to complete more than 70 studies. A 2017 study, using data from RegData 3.0, showed that Dodd-Frank had placed 27,278 new restrictions on the U.S. financial industry and the economy as a whole (McLaughlin, Francis, and Sherouse 2017).6

[Figure 6-8. Tier 1 Capital Ratios of U.S. G-SIBs, 2001–18. Share of risk-weighted assets (percent); series: Tier 1 capital and Tier 1 common capital. Source: Federal Reserve Bank of New York. Note: U.S. G-SIBs = global systemically important banks; these are banks with assets of more than $500 billion.]

[Figure 6-9. Liquid Assets of U.S. G-SIBs, 2000–18. Cash and U.S. Treasuries, in dollars (left axis) and as a percentage of total assets (right axis). Sources: Federal Deposit Insurance Corporation; calculations. Note: U.S. G-SIBs = global systemically important banks; these banks have assets of more than $500 billion.]

The Trump Administration has made it a priority to address the regulatory overreach created by the Dodd-Frank Act, while also striving to ensure the safety and soundness of the Nation’s financial system. The discussion here addresses the consequences of some of Dodd-Frank’s most important regulatory reforms and how they have in many cases failed to resolve the issues that led to the financial crisis.

Dodd-Frank’s Consequences

Although it has been eight and a half years since Dodd-Frank was signed into law, it still has not been fully implemented, due in large part to its complexity. However, the initial results of this partisan legislation are not encouraging. Until a recent uptick in growth, the postcrisis economic recovery was atypically weak.
The economic expansion began in July 2009, and at the end of 2018 it had concluded its 114th month, making it the second-longest expansion in U.S. history. Until 2017, it was also among the most tepid expansions on record. Real economic growth averaged 2.2 percent between the middle of 2009 (the start of the expansion) and the end of 2016. This marked the slowest growth in any expansion since the National Income and Product Accounts were introduced in 1947. Before 2007, the most severe downturn during this era had been the double-dip recession of 1980–81 and 1981–82. A pro-growth agenda—including tax relief, deregulation, and price stability—led to the Reagan Recovery that ensued. This long period of growth began with two years in which growth averaged 6.7 percent, and annual growth averaged 4.3 percent over the entire expansion. The election of President Trump in November 2016 produced an immediate increase in small business confidence that has remained in place ever since.7 The 4-quarter moving average of real GDP growth has risen for 9 consecutive quarters, exceeding 3 percent for only the fifth time in the 37 quarters of the expansion. In the second quarter of 2018, after the enactment of the Tax Cuts and Jobs Act (TCJA), real economic growth rose to an annualized rate of more than 4 percent for the first time in four years. The recovery of business confidence, hiring, and investment spending since 2016 suggests that we will see higher potential growth in the years ahead.

6. RegData 3.0 measures the number of regulatory restrictions through a textual analysis that identifies words and phrases added to the Federal Register that are generally associated with a required or prohibited activity.

7. Further discussion of the effect of the 2016 election on business confidence, and the effect of the 2017 Tax Cuts and Jobs Act on economic activity, can be found in chapter 1 of this Report.
The slow pace of growth in the first eight years of the expansion was, at least in part, attributable to the persistent effects of the severe 2007–9 recession. But the regulatory requirements imposed during this period, including those mandated by the 2010 Dodd-Frank Act, were also responsible for holding back the pace of the economic recovery.8 A 2015 study projected that Dodd-Frank’s requirements and the compliance costs it continues to introduce will result in a reduction of about $895 billion in GDP between 2016 and 2025 (Holtz-Eakin 2015). Although the financial crisis had lingering effects throughout the economy long after the recession officially ended, additional public policy choices also played a role in slower growth than would typically have been expected after such a deep recession. Some of these policies reduced labor force participation, labor productivity, and capital investment, and thus were factors in the subpar macroeconomic performance through 2016 (figure 6-10). Dodd-Frank was a particularly important factor in discouraging small business lending and mortgage lending, and in promoting consolidation among small and midsized banks (Peirce, Robinson, and Stratmann 2014). The importance of small businesses to the U.S. economy goes well beyond the roughly two-thirds of new jobs they typically create. Small businesses have traditionally been a source of strength for their communities and a source of innovation where new and different ideas can be pursued. They rely heavily for funding on community banks, which have a local focus that helps them meet the credit needs of small businesses. Small businesses were hit especially hard by the recession, and they recovered slowly during the early years of the expansion. Since mid-2010, small loans to farms and businesses held by FDIC-insured institutions have declined by 2 percent, while total farm and business loans have increased by more than 50 percent.
The monthly Small Business Economic Trends report, which is published by the National Federation of Independent Business, recorded some of its lowest annual values on record for small-business optimism during the early stages of the recovery. The federation’s optimism index rose above its long-term average only once over the 116 months ending in November 2016. Small business optimism has remained above the historical average in every month since then. Mortgage lending has also been slow to recover since the crisis. The annual volume of purchase mortgage originations in 2017 remained below that of the peak years 2003–6. The level of mortgage debt outstanding in the third quarter of 2018 also remained below the peak level reached in 2008.

8. Chapter 2 of this Report includes an extensive discussion of the effects of regulation on economic activity.

[Figure 6-10. Average Economic Growth by Expansion Period, 1983–2018. Compound annual growth rate for the expansions of 1983–90, 1991–2001, 2002–7, 2009–16, and 2017–18 (the current expansion, through 2018:Q3). Sources: Bureau of Economic Analysis; calculations. Note: Change in real GDP is calculated from the peak to trough of expansion periods.]

Moreover, mortgage finance has become increasingly dominated by the GSEs—including Fannie Mae, Freddie Mac, and Ginnie Mae—in spite of their role in the crisis. These entities have accounted for funding 81 percent of the net increase in mortgage lending over the past two years. The increasing dominance of the GSEs can be attributed in no small part to the higher requirements placed on portfolio mortgage lending and private mortgage securitization. As mandated by Dodd-Frank, the 2014 interagency Risk Retention Rule requires private issuers of MBSs to retain at least 5 percent of the credit risk in the mortgage pool, unless a loan meets the definition of a Qualified Residential Mortgage that makes it a low-risk loan.
Fannie Mae and Freddie Mac are not subject to this rule while operating in conservatorship or receivership with capital support from the Federal government. Another area of concern about Dodd-Frank is the overall increase in compliance costs it imposes, particularly on small and midsized banks. The hundreds of regulations required under Dodd-Frank, and the thousands of pages of detailed requirements included with each regulation, have raised concerns about what has come to be called “regulatory burden” (Hoskins and Labonte 2015). This burden refers not only to the marginal cost imposed by new rules but also to the cumulative increase in the number and scope of regulations imposed on banks over time.9 These costs consist of both the overhead costs of complying with a regulation and the opportunity costs of restrictions on bank activities. They raise concerns about their effect on both bank performance and the cost and availability of credit (see box 6-2).

Box 6-2. Measuring the Regulatory Burden on the Financial Sector

Banking is one of the most regulated U.S. industries. McLaughlin and Sherouse (2016) ranked U.S. industries in terms of the regulatory restrictions they face. They found that though the median industry faced 1,130 regulatory restrictions, depository and nondepository credit intermediation each faced over 16,000 restrictions. The only industries facing more restrictions were petroleum and coal production, electric power generation, and motor vehicle manufacturing.

As with total noninterest expenses, there are economies of scale in regulatory compliance. For example, Dahl and others (2018) found that mean total compliance costs in 2017 were about 10 percent of total noninterest expenses for banks with less than $100 million in assets, compared with 5 percent for banks with assets between $1 billion and $10 billion.

Regulation may also impose a wide range of indirect costs on banks and their customers that exceed the paperwork costs associated with compliance. These include the opportunity costs of loans not made and products not offered, along with effects on the deposit rates offered and loan rates charged by regulated banks. Such costs not only hurt the bottom line of the bank but can also reduce the welfare of bank customers and economic activity generally. Through the fourth quarter of 2018, community banks held just 16 percent of the total loans of FDIC-insured institutions, but they held 42 percent of the industry’s small loans to farms and businesses. Recent research finds that by raising fixed regulatory compliance costs, the Dodd-Frank Act disproportionately raised the average cost of loan origination by small banks and reduced their share of small commercial and industrial loans (Bordo and Duca 2018). Bordo and Duca further observe a relative tightening of bank credit standards on commercial and industrial loans to small versus large firms in response to Dodd-Frank.

9. In the FDIC’s 2012 “Community Banking Study,” community bankers reported that “no one regulation or practice had a significant effect on their institution.” Instead, they cited “the cumulative effects of all the regulatory requirements that have built up over time.” They also explained that the increases in regulatory costs over the previous five years could be attributed to the time spent by both regulatory specialists and employees who typically carry out other responsibilities (FDIC 2012, appendix B).

A More Measured Approach to Financial Regulation

Since its first days in office, the Trump Administration has been working to correct the regulatory overreach introduced by the Dodd-Frank Act and to restore the ability of the financial system to support growth in the economy and our Nation’s standard of living.

Core Principles for Regulating the U.S.
Financial System

Seven Core Principles for financial regulation were outlined in Executive Order 13772, issued in February 2017. These principles reflect a commitment to taking measures that will:

1. Empower Americans to make independent financial decisions and informed choices in the marketplace, save for retirement, and build individual wealth.
2. Prevent taxpayer-funded bailouts.
3. Foster economic growth and vibrant financial markets through more rigorous regulatory impact analysis that addresses systemic risk and market failures, such as moral hazard and information asymmetry.
4. Enable American companies to be competitive with foreign firms in domestic and foreign markets.
5. Advance American interests in international financial regulatory negotiations and meetings.
6. Make regulation efficient, effective, and appropriately tailored.
7. Restore public accountability within Federal financial regulatory agencies and rationalize the Federal financial regulatory framework.

These Core Principles are designed to promote the ability of financial institutions to do their job of providing credit and other financial services to the U.S. economy. Under Dodd-Frank, banks have been regulated like public utilities, with government oversight boards dictating the manner in which business should be conducted. The Administration’s approach is consistent with a greater reliance on market discipline and somewhat less reliance on regulatory discipline. The leaders of financial regulatory agencies who have been appointed by the Administration understand and endorse this concept, and this will make it possible for them to pursue a more coordinated and measured approach to reform that will not undermine financial stability but will make regulation simpler and less costly to implement.

Recommendations for Meeting the Core Principles

During the past two years, the U.S.
Department of the Treasury has issued four reports that made detailed recommendations consistent with the Administration’s Core Principles. These four reports have focused, in order, on (1) banks and credit unions; (2) capital markets; (3) asset management and insurance; and (4) nonbank financials, financial technology (known as fintech), and innovation. With regard to depository institutions, the Treasury has recommended a series of changes designed to simplify regulations and reduce their implementation costs, while maintaining high standards of safety and soundness and ensuring the accountability of the financial system to the American public. These recommendations are summarized and discussed in the next paragraphs. A number of these recommendations were implemented in the Economic Growth, Regulatory Relief, and Consumer Protection Act of 2018, which is generally referred to as S.2155. These cases are noted in the next paragraphs, and the overall effects of S.2155 are summarized later in the chapter.10

Improving regulatory efficiency and effectiveness. Consideration should be given to changes in the U.S. regulatory structure that reduce fragmentation, overlap, and duplication among Federal financial regulators. This could include consolidating regulators with similar missions, as well as more clearly defining regulatory mandates. At a minimum, steps should be taken to increase the coordination of supervision and examination activities. The experience of the financial crisis points to the need for improved coordination among the financial regulators. While risks were rapidly building in subprime and Alt-A lending, the need to coordinate across several regulatory agencies made it more difficult to respond in a timely way.
Interagency guidance issued in 2006 and 2007 on commercial real estate lending and mortgage lending did little to discourage the riskiest nonbank lenders, but it did lead to industry concerns that regulators were placing strict caps on making loans in those categories. To improve the regulatory engagement model, sound governance of financial institutions, where policies are developed and their implementation is monitored, is essential. Hopt (2013) emphasizes the need to clearly separate the management and control functions, and to assign committees to carry out specific governance responsibilities. Failures of board governance and oversight of banking organizations were found to be a major impediment to risk management and a factor that exacerbated the financial crisis. A successful governance model requires both highly qualified board members and a commitment to procedures that promote discipline and accountability across the organization. The approach currently taken by regulators may not be promoting effective governance. Prescriptive regulations may tend to blur the division of responsibilities between the board and management, and to impose a one-size-fits-all approach that unnecessarily restricts banking activities and the services banks provide to their customers. This is particularly problematic for midsized and community financial institutions, which have less formal governance structures. It would be helpful to clearly define the board’s role and responsibilities for regulatory oversight and governance, and to do so more consistently across regulatory jurisdictions.

10. These recommendations are paraphrased in the next subsections from the U.S. Department of the Treasury’s report A Financial System That Creates Economic Opportunities: Banks and Credit Unions (U.S. Department of the Treasury 2017).
More transparency and consistency across the agencies could help to assure all regulated banks that they are being treated fairly. Encouraging more constructive engagement between bank regulators, board members, and managers would help to ensure that the bank itself can more effectively meet the needs of its customers while managing the risks it faces. This requires that the board be held to the highest standards when implementing regulatory compliance procedures, and that the board—not the regulator—hold management to the same standards. A step forward in improving the governance of large banks was the Federal Reserve’s August 2017 proposal, now out for comment, to create a governance-rating system for banks with assets greater than $50 billion. It is also important to enhance the use of regulatory cost-benefit analysis. As concerns about regulatory burden have increased in the postcrisis period, cost-benefit analysis has taken on a more prominent role in financial regulation. There are requirements for cost-benefit analysis in rulemaking that apply to most Federal agencies. These requirements have been outlined in Executive Order 12866 (1993), and in the subsequent Office of Management and Budget (OMB) “Circular A-4” (2003). These directives call for an analysis of proposed rules that addresses (1) the policy objectives of the proposed rule; (2) the rule’s expected effects, including costs and benefits for the parties directly involved as well as externalities that are created for other stakeholders; and (3) an analysis of regulatory alternatives. The independent financial regulatory agencies have long been exempt from oversight by OMB in most aspects of regulatory analysis. At the same time, these agencies have increasingly adopted a cost-benefit approach to rulemaking, and have devoted more resources to regulatory analysis in recent years. This analysis has been largely based on the main requirements outlined by OMB’s directives. 
Financial regulators are also subject to a number of legislative mandates that serve to make the regulatory process more transparent and better informed. For example, the Administrative Procedure Act established general requirements for a notice-and-comment process that keeps the industry and the public informed about proposed rules and solicits their comments, which often provide valuable information that can inform the rulemaking process. The Regulatory Flexibility Act—known as RegFlex—requires agencies to consider the impact of regulations on small entities. If a rulemaking is expected to have a "significant economic impact on a substantial number of small entities," the agency is required to assess that impact.11 There is an active debate as to whether cost-benefit analysis can be a reliable guide to regulatory policy in banking and finance. Some question whether it is possible to reliably project, much less quantify, the costs and the benefits of bank regulations (Coates 2015). Given the discussion above of imperfect information and market failures in banking, it is clear that important outcomes in banking and finance depend heavily on intangible factors such as public confidence and market liquidity. Requiring strict quantification of costs and benefits in financial regulation is viewed by some as being both unrealistic and an excessive restriction on the ability of independent regulators to apply their judgment in addressing emerging risks. Others, including Sunstein (2015), contend that a useful cost-benefit analysis can still be performed, even when there are serious gaps in the available information on costs and benefits. These two schools of thought might not be as far apart as they initially seem. The experience of the financial crisis, and the regulatory burdens that were imposed after the crisis, both point to rather obvious conclusions about the relative costs and benefits of regulations that apply to various types of institutions.
One conclusion is that regulation is relatively more burdensome for small and midsized banks than for large banks. Research has repeatedly shown that regulatory compliance costs are subject to economies of scale, as are other types of nonregulatory overhead expenses. Regulation also imposes external costs on the customers of small and midsized banks, which disproportionately include small businesses. The value of small businesses in creating new jobs and new businesses is widely recognized, and has been a motivating force behind calls for applying cost-benefit analysis to bank regulations. Another fairly obvious conclusion from the financial crisis is that the benefits of safety and soundness regulations are far higher when applied to systemically important institutions than when they are applied to small and midsized institutions. As the experience of the crisis clearly showed, the negative externalities associated with the failures of systemically important institutions included severe distress in global financial markets and enormous losses in U.S. economic activity. The magnitude and the incidence of these negative externalities largely determine the benefits of regulations that reduce the probability of failure (box 6-3). The framework for cost-benefit analysis by financial regulators could be improved. They should be encouraged to adopt uniform and consistent methods to analyze costs and benefits, and to ensure that their cost-benefit analyses exhibit as much analytical rigor as possible. The standards of transparency and public accountability will be served by conducting rigorous cost-benefit analyses and by making better use of notices of proposed rulemaking to solicit public comment that is helpful in evaluating a rule's possible effects. [Footnote 11: This requirement was established under the Regulatory Flexibility Act, Public Law 96-354, 94 Stat. 1164 (5 U.S.C. § 601).]
This type of public analysis will be particularly helpful for proposed regulations that are “economically significant,” as defined in Executive Order 12866. Aligning the financial system to support the U.S. economy. With the goal of ensuring access to credit, the 2017 Treasury report on banks and credit unions identified a series of regulatory factors that may be unnecessarily limiting access to credit for consumers and businesses (U.S. Department of the Treasury 2017). Addressing these constraints on credit availability will be necessary to enable the U.S. economy to operate at its full potential. Regulatory constraints also should not be allowed to unduly restrict banks’ ability to meet their customers’ needs in a rapidly changing financial marketplace. The U.S. has been—and should continue to be—a global leader in introducing innovative new financial products. The regulatory environment should support this innovation while ensuring that it does not compromise the financial system’s stability or fail to protect the interests of consumers. Among the most important elements in achieving this balance are the requirements for capital and liquidity. Adequate capitalization of bank balance sheets helps to ensure that banks face market discipline that reduces their incentives to take excessive risks. At the same time, higher capital standards can limit the ability of banks to add new loans to their balance sheets. Achieving this balance is important to promoting stability while ensuring that the availability of credit is not impaired. With regard to engaging and leading the global marketplace, the competitiveness of American financial institutions in global markets is another area that was addressed in the 2017 Treasury report. It recommended active participation by U.S. regulators in global forums, and emphasized the need for coordination among U.S. regulatory agencies. Banking is very much a global marketplace. Not only do the largest U.S. 
banks have interests abroad, but foreign banks have continued to play a larger role in U.S. financial markets. Coordination between regulatory jurisdictions around the world has improved since the financial crisis. On net, these trends should be seen as positive developments over the long term. The U.S. regulatory agencies should engage their counterparts overseas in ways that serve the interests of U.S. financial institutions, the U.S. economy, and the American people. The Treasury made several recommendations addressing bank capital standards. More study is needed of the somewhat complex capital and liquidity requirements that have been placed on U.S. global systemically important banks (G-SIBs). If not properly calibrated, these regulations could place U.S. banks at a competitive disadvantage without contributing to financial stability and safe and sound banking. Additional research should explore several aspects of G-SIB regulation, including "the U.S. G-SIB surcharge, the mandatory minimum debt ratio included in the Federal Reserve's total loss absorbing capacity . . . and minimum debt rule, and the calibration of the [enhanced supplementary leverage ratio]" applied to each banking company (U.S. Department of the Treasury 2017, 16). The Treasury report continued to be supportive of the ongoing Basel Committee process. The goals of establishing international bank capital standards are to strengthen the capital standards that apply to G-SIBs in general, and to level the competitive playing field by establishing a floor for global risk-based capital standards. The complexity of capital rules for G-SIBs remains a challenge in achieving these goals. U.S. bank regulators will need to carefully consider the implications of any changes in the Basel III standardized approach to account for credit risk. It is important to evaluate both the possible impact on systemic risk and the effect on credit availability. Making these evaluations public as capital rules are introduced will help to inform the debate about the proper balance inherent in capital regulation.

Box 6-3. Evaluating the Costs and Benefits of Bank Regulations

An example of the trade-off between benefits and costs as applied to large banks can be seen in the FDIC's 2016 final Rule on Recordkeeping for Timely Deposit Insurance Determination. This rule addresses a particular problem that the FDIC has faced in closing failed institutions in a timely and efficient manner: the difficulty of identifying related deposit accounts in large banks' systems. The problem arises in part from the complex coverage rules spelled out in statutes, along with the sometimes disconnected information systems that large banks have accumulated over the years through acquisitions. The rule requires banking organizations with more than 2 million deposit accounts to improve their data systems to facilitate the calculation of the deposit insurance coverage for each account. When the final rule was adopted, the FDIC estimated that it would apply to 38 institutions, each with 2 million or more deposit accounts. Taken together, these institutions hold more than $10 trillion in total assets and manage over 400 million deposit accounts. Some, but not all, of these institutions could be considered systemically important. But the FDIC's experience in resolving institutions with so many accounts shows that it is doubtful that they could be promptly resolved unless their data systems met the standards of the rule. The result could be a significant delay in the full availability of funds to bank depositors, which threatens to reduce the confidence of other large institutions that their funds would be promptly available in a time of distress. The benefit of the rule is measured in terms of the assurance that depositors would have prompt access to their funds, as well as the confidence of depositors in other large institutions. The accuracy of any estimate of the dollar value of these benefits is doubtful at best. However, as the experience of the recent financial crisis has shown, maintaining confidence in the financial system offers potentially large benefits to the public. The costs of complying with the rule are not negligible. Based on a consultant's estimate that is documented in the rule's preamble, the initial and ongoing costs of implementation will likely be about $386 million. This figure represents 0.25 percent of the pretax income of these banks in 2015. Another way to place these costs in context is that they represent 93 cents for every one of the 416 million accounts these institutions manage. Equally important are the potential opportunity costs that may be imposed on banks that are subject to the rule. For example, banks may stay below the 2 million account threshold to avoid incurring the cost of implementing this rule. Though these opportunity costs are more difficult to quantify than compliance costs, these negative external effects should be taken into account when considering the potential effects of such a rule. The decision as to whether the rule's benefits outweighed its costs was made on the basis of this information and the judgment of the FDIC's Board of Directors. Whether it is worth 25 basis points of pretax earnings for one year, plus the opportunity costs of forgone business opportunities, to enhance the stability of the financial system is ultimately a judgment call. What is important is that this judgment be informed with good information where available, and not clouded by estimates whose accuracy may be vastly overstated.
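The cost figures cited in Box 6-3 can be checked with a few lines of arithmetic. The sketch below uses only the numbers stated in the text; the implied pretax income figure is derived here for context and is not stated in the report.

```python
# Back-of-the-envelope check of the cost figures for the FDIC's 2016
# recordkeeping rule (Box 6-3). Inputs come from the text above.

initial_and_ongoing_cost = 386e6   # consultant's estimate, in dollars
deposit_accounts = 416e6           # accounts at the ~38 covered banks
share_of_pretax_income = 0.0025    # 0.25 percent of 2015 pretax income

cost_per_account = initial_and_ongoing_cost / deposit_accounts
# Derived, not stated in the report: the pretax income consistent with
# the 0.25 percent figure.
implied_pretax_income = initial_and_ongoing_cost / share_of_pretax_income

print(f"Cost per account: ${cost_per_account:.2f}")
print(f"Implied 2015 pretax income: ${implied_pretax_income / 1e9:.0f} billion")
```

The per-account figure comes out to about 93 cents, matching the text, and the 0.25 percent figure implies roughly $154 billion in combined 2015 pretax income for the covered institutions.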
Reducing the regulatory burden and unnecessary complexity through "tailoring." Allowing community banks and credit unions to thrive is a key aspect of the 2017 Treasury report's recommendations. The previous discussions of economies of scale in regulatory compliance, and of the widespread diseconomies associated with the potential failure of a systemically important institution, inescapably lead to the conclusion that most community banks and credit unions are overregulated. These institutions have a role in the U.S. economy that is more important, in relative terms, than the share of industry assets they hold. For example, in 2014 there were 646 U.S. counties in which the only banking office was one operated by a community bank (Breitenstein and McGee 2015). Yet these smaller institutions pose virtually no systemic risk that would justify burdensome regulation. Moreover, they are diverse in terms of their business models and customer bases, and can benefit from less rigid regulatory requirements. These considerations have led to calls for "tailoring" regulatory requirements in banking to better meet the needs and challenges that pertain to individual institutions. A 2017 report by the Congressional Research Service (Perkins 2017) defines tailoring as a departure from current threshold-based (typically asset-based) standards for regulation, to new standards that would (1) raise or lower the current threshold; (2) abandon numerical thresholds altogether; or (3) use alternative methods to tailor regulation based on bank activities, capital levels, or greater regulator discretion. Introducing regulatory thresholds of this type can potentially distort the decisions made by regulated banks as they seek to maneuver around regulatory requirements.
Accordingly, the more financial regulation can be tailored to match the business model and complexity of individual institutions, the more efficient the regulatory system will be in preserving safety and soundness, promoting innovation, and minimizing regulatory burden. Examples of tailoring regulation thus far have included the expansion of size-based exemptions from a number of regulatory requirements. For example, the Economic Growth, Regulatory Relief, and Consumer Protection Act of 2018, or S.2155, simplified the capital standards applied to banks with assets less than $10 billion and exempted them from the U.S. Basel III risk-based capital system. It also raised the Small Bank Holding Company Policy Statement asset threshold from $1 billion to $3 billion. Requirements for data reporting are being relaxed for banks with up to $5 billion in assets, and the frequency of on-site examinations is being reduced for banks with assets of less than $3 billion. And the threshold for exemption from the CFPB's ability-to-repay / qualified mortgage rule was raised from $2 billion to $10 billion. Based in part on a Treasury recommendation, the National Credit Union Administration has raised the threshold for stress-testing requirements for federally insured credit unions from $10 billion to $15 billion in assets. The National Credit Union Administration has also raised the asset size threshold for applying a risk-weighted capital framework from $100 million to $500 million. These steps promote greater parity with the capital requirements that apply to commercial banks of similar size and complexity. Refining capital, liquidity, and leverage standards. Improving, and appropriately tailoring, the regulatory standards for capital, leverage, and liquidity remains an essential element of postcrisis regulatory reforms.
The 2017 Treasury report made a number of recommendations aimed at both decreasing the burden of statutory stress-testing and improving its effectiveness by tailoring the stress-testing requirements to the size and complexity of banks. The May 2018 enactment of S.2155 implemented many of these recommendations. Section 165 of Dodd-Frank required the Federal Reserve to establish enhanced prudential standards for certain bank holding companies and foreign banking organizations and for nonbank financial companies that have been designated by the Financial Stability Oversight Council as systemically important financial institutions (SIFIs). These standards included enhanced requirements for:
1. Risk-based and leverage capital and liquidity.
2. The submission of periodic resolution plans.
3. Limits on single-counterparty credit exposures.
4. Periodic stress tests to evaluate capital adequacy.
5. A debt-to-equity limit to be applied to companies that the Financial Stability Oversight Council determined pose a grave threat to financial stability.
Section 165 also authorized the Federal Reserve to "establish additional prudential standards, including three enumerated standards—a contingent capital requirement, enhanced public disclosures, and short-term debt limits—and other prudential standards" that the Federal Reserve determined to be appropriate (Federal Reserve Board of Governors 2018, 595). The 2017 Treasury report contained a number of recommendations to better tailor the requirements placed on midsized and regional banks—those with total assets between $10 billion and $250 billion—to the actual risk that they pose to financial stability. For the company-led annual Dodd-Frank Act Stress Test (DFAST), the report recommended raising the dollar threshold above the $10 billion level to reduce the regulatory burden placed on banks that are, in fact, not systemically important.
This recommendation was largely implemented with the May 2018 passage of S.2155. Under S.2155, institutions with total assets below $100 billion are exempt from DFAST, while banks with assets between $100 billion and $250 billion are subject to DFAST only at the discretion of the Federal Reserve. This approach gives regulators the flexibility to tailor the stress-testing requirement to each bank's business model, balance sheet, and organizational complexity. It not only reduces the compliance burden of banks that are not systemically important but also relieves them from assessments related to enhanced regulation. The Treasury report also recommended adjusting the thresholds applied under the Comprehensive Capital Analysis and Review (CCAR) and moving the review to a two-year cycle. Given that stress-testing results are forecast over a nine-quarter cycle, extending the CCAR review cycle to two years should not compromise the review's quality. These changes, however, are covered by separate legal authorities and will need to be implemented over time on an interagency basis. Another important element of the 2017 Treasury report's recommendations was a proposed "off-ramp" exemption from compliance with DFAST, CCAR, and certain other prudential standards for any bank that elects to maintain a sufficiently high level of capital. Providing this choice of a simplified capital standard over a more complex standard will help to ensure that the institution is subject to capital requirements that are appropriate to its particular situation, thereby helping to minimize the regulatory cost of compliance. This, too, has largely been implemented through the Economic Growth, Regulatory Relief, and Consumer Protection Act. In addition, the Treasury recommended that the Federal Reserve subject its stress-testing and capital planning review frameworks to public notice and comment.
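The post-S.2155 stress-test tiering described above amounts to a simple three-way classification by asset size. The sketch below is illustrative only: the function name and tier labels are ours, not statutory language, and actual applicability turns on more than asset size.

```python
# Illustrative sketch of the post-S.2155 DFAST tiering described above:
# exempt below $100 billion in total assets, applied at the Federal
# Reserve's discretion between $100 billion and $250 billion, and
# mandatory at $250 billion and above. Labels are not statutory language.

def dfast_status(total_assets: float) -> str:
    """Classify a bank's stress-test status by total assets (in dollars)."""
    if total_assets < 100e9:
        return "exempt"
    if total_assets < 250e9:
        return "discretionary"  # Federal Reserve may apply DFAST
    return "mandatory"

# A $50 billion bank is exempt; a $150 billion bank is subject only at
# the Federal Reserve's discretion; a $300 billion bank remains covered.
for assets in (50e9, 150e9, 300e9):
    print(f"${assets / 1e9:.0f}B: {dfast_status(assets)}")
```

The value of writing the tiers this way is that the boundary cases are explicit: the discretionary band is exactly the $100 billion to $250 billion range discussed in the text.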
This type of transparency will help to inform market participants about the nature of this analysis and enable them to make more informed decisions about the institutions that are subject to these reviews. In February 2019, the Federal Reserve finalized its implementation of enhanced disclosure of the models used in its supervisory stress test (see box 6-4). The 2017 Treasury report also made specific recommendations related to a number of other important Dodd-Frank standards—including those related to liquidity and funding for SIFIs, the resolution plans filed by SIFIs under Title I of Dodd-Frank, the Supplementary Leverage Ratio, the enhanced Supplementary Leverage Ratio that forms part of bank capital requirements, and the Volcker Rule's limitations on proprietary trading. In each of these areas, what was originally a well-motivated attempt to address areas of risk-taking that preceded the banking crisis turned out to be an overprescribed fix that unnecessarily raised the costs of regulatory compliance. The recommendations of the 2017 Treasury report include narrowing the application of the liquidity coverage ratio and recalibrating how the Net Stable Funding Ratio and the Fundamental Review of the Trading Book interact with the liquidity coverage ratio and other relevant regulations. In addition, the report showed how the requirement for resolution plans to be filed by SIFIs under Title I of Dodd-Frank could be relaxed without abandoning an important element of lowering the potential for systemic risk. This measured tailoring of regulatory requirements to match the risks that are being addressed is a fundamental element of the regulatory reform efforts that have been, and continue to be, pursued by the Administration. Regulatory reforms enacted thus far. Having established this agenda for reform with the Economic Growth, Regulatory Relief, and Consumer Protection Act of 2018, the Administration is concentrating on implementing it.
The most prominent accomplishment to date in implementing the Administration's agenda was the May 2018 passage of the Economic Growth, Regulatory Relief, and Consumer Protection Act (hereafter, the act), also referred to as S.2155. Unlike the Dodd-Frank Act, S.2155 garnered significant bipartisan support, receiving a 67–31 vote in the U.S. Senate and a 258–159 vote in the U.S. House of Representatives before being signed into law by the President. The act exemplifies the shift away from the insufficiently tailored regulation found in portions of Dodd-Frank and toward a more right-sized approach. These changes directly address some of the shortcomings of the Dodd-Frank Act described earlier in this chapter, and do so in four main areas, Titles I through IV of the act. Title I provides relief to portfolio mortgage lenders who originate and hold residential mortgage loans on their balance sheets. Its expected effect will be to loosen unnecessary regulatory constraints on the availability of mortgage credit to U.S. households. Dodd-Frank had created a potential liability for banks that originated loans that later defaulted, unless those loans met the terms of the "qualified mortgage," a definition established in 2013 by the CFPB.

Box 6-4. Restoring Market Discipline in Banking

Market discipline can be promoted by equity capital requirements and by practices for failed-bank resolution that help to ensure that the owners of the bank are first in line to absorb losses if the bank should fail. It has proven to be a highly effective, and sometimes disruptive, means of limiting risk-taking among financial institutions. Market discipline is the antithesis of moral hazard, in which the costs of risk-taking are imposed on parties other than the owners of the bank. At the same time, a sudden collapse in the confidence of bank depositors and bondholders can exert enough market discipline to force the failure of the bank. There are two basic approaches to enhancing the market discipline that discourages banks from taking on excessive risk: a minimum capital requirement, and a framework to resolve failed banks in an orderly fashion. Minimum capital requirements represent a commitment, in advance, of private capital to absorb losses incurred by the bank. As such, this capital helps to limit moral hazard. The more capital the bank holds, the greater the share of failure costs that will be absorbed by the bank's owners before losses are imposed on other stakeholders. Other things being equal, this alignment of incentives will limit the subsidization of risk-taking by bank owners and will result in a level of risk-taking that is closer to the socially optimal level. Undercapitalized banks have been cited as factors exacerbating both the savings-and-loan crisis of the late 1980s and the financial crisis of 2008 (FDIC 1997, 2017). An orderly process to resolve failed banks is another essential element of market discipline. An orderly bank resolution will impose losses on equity claims first, and then on unsecured debt, before imposing losses on uninsured depositors and the FDIC's Deposit Insurance Fund. This process helps to ensure that the equity holders that control the bank are in a first-loss position, even if their equity cannot completely cover the losses generated by the failure. An orderly resolution process for failed banks is essential for long-term financial stability. Since 1989, more than 2,000 FDIC-insured institutions have failed and have been resolved, with no losses to insured depositors. During the recent crisis, we saw instances in which the FDIC chose not to impose losses on the equity and debt holders of very large and complex failing banks, and instead provided them with open bank assistance. These exceptions from normal procedures were based on concerns that imposing losses on uninsured creditors would transmit these losses to other banks and financial companies and worsen the systemic crisis. In the short run, this expansion of government support clearly helped to maintain the stability of the financial system. Over the long term, these actions could be expected to undermine market discipline, subsidize growth and risk-taking, and create competitive inequities between large and small banks. These considerations underlie the provisions of Section 1106 of the Dodd-Frank Act, which effectively ended the FDIC's authority to provide open bank assistance, even in a crisis situation. To summarize, a regulatory approach based on market discipline must (1) create strong capital standards that limit moral hazard, and (2) enhance the ability to properly impose market discipline in the event of a failure. Historical experience shows that the ability to maintain market discipline according to these principles has been inversely related to the size and complexity of the institutions to which they are applied.

The potential liability for defaulted loans under the "ability-to-repay" provision of Dodd-Frank was thought to impose market discipline on the mortgage lending process. But it also applied to mortgage loans held on the bank's balance sheet, which already faced market discipline to the extent that private capital stood first in line to cover any losses from the loan. Title I simply extends the presumption of ability-to-repay compliance to all mortgages originated and held by banks and credit unions with assets under $10 billion; these loans will be presumed to meet the definition of a qualified mortgage. Title I also provides an exemption from reporting requirements under the Home Mortgage Disclosure Act for depository institutions that make few mortgage loans.
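The resolution priority described in Box 6-4 — equity first, then unsecured debt, then uninsured depositors, and only then the FDIC's Deposit Insurance Fund — can be illustrated as a simple loss waterfall. All balance-sheet figures in the example below are hypothetical.

```python
# Minimal sketch of the loss-allocation order described in Box 6-4.
# The priority ordering comes from the text; the dollar amounts are
# hypothetical and chosen only to make the waterfall visible.

def allocate_losses(loss, equity, unsecured_debt, uninsured_deposits):
    """Allocate a failure loss across claimant classes in priority order."""
    allocation = {}
    for name, capacity in [("equity", equity),
                           ("unsecured debt", unsecured_debt),
                           ("uninsured deposits", uninsured_deposits)]:
        absorbed = min(loss, capacity)  # each class absorbs up to its capacity
        allocation[name] = absorbed
        loss -= absorbed
    allocation["deposit insurance fund"] = loss  # residual loss, if any
    return allocation

# A $15 loss against $8 of equity and $5 of unsecured debt reaches
# uninsured depositors, but the Deposit Insurance Fund is untouched:
print(allocate_losses(15, equity=8, unsecured_debt=5, uninsured_deposits=20))
```

The point of the ordering is the one the box makes: the larger the equity cushion, the smaller the loss that ever reaches depositors or the insurance fund.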
Title I of the act defers to the judgment of the managers of small and midsized banks about the quality of the mortgages they make and hold on their own balance sheets, and steps back from having regulators make this judgment for them. Title II's provisions are aimed at reducing the regulatory burden placed on community banks without undermining the market discipline they face. Title II exempts banks with under $10 billion in assets from the Volcker Rule, which prohibits proprietary trading by banks. This exemption reflects the fact that very few small and midsized banks engage in proprietary trading. Title II also established the new Community Bank Leverage Ratio. Banks with limited amounts of certain assets and off-balance-sheet exposures will be able to choose this relatively simple measure of capitalization and be exempted from the more complicated Basel risk-based capital standards. In its November 9, 2018, proposal to implement the Community Bank Leverage Ratio, the FDIC proposed setting the standard at 9 percent of tangible equity to total assets. An estimated 80 percent of community banks would be eligible to adopt this simplified capital standard. The provisions of Title II are designed to simplify and streamline the regulatory standards that apply to community banks. Their relatively simple business models do not require complex regulatory approaches. And the economies of scale they face in the cost of regulatory compliance make it imperative that the standards applied to them are simple and straightforward (box 6-5). Title III of the Economic Growth, Regulatory Relief, and Consumer Protection Act addresses a number of issues related to consumer protection. One of these is the need to give consumers more control over their own credit reports, which are a valuable reputational asset for all Americans.
Title III requires the credit reporting agencies to provide updated fraud alerts to consumers for at least a year following a security incident, and gives consumers a right to place security freezes on their credit reports for free to prevent them from being inappropriately downgraded. Credit-reporting agencies will be required to omit certain medical debts from the credit reports of U.S. veterans. These requirements recognize the importance of the consumer financial information on which we all rely to gain access to credit. Although these provisions do not directly affect the safety and soundness of regulated banks, they do recognize the priority of fairness in handling this valuable and sensitive information. Title IV of the act addresses what was probably the biggest cost-benefit miscalculation made in the Dodd-Frank Act. Dodd-Frank required that all banking organizations with assets of $50 billion or more be subject to enhanced prudential standards. This approach relies too heavily on asset size as a measure of systemic importance. A better measure of the systemic importance of a particular bank is the FDIC's ability to resolve the institution successfully without creating financial instability. In cases where an institution is deemed resolvable, subjecting it to heightened regulatory requirements imposes high regulatory costs but gives very little benefit in terms of preserving financial stability. The designation carries with it a number of regulatory requirements designed to introduce regulatory discipline as well as market discipline to designated institutions. To the extent that the institution is already resolvable, there would appear to be little benefit to the designation. In a case like this, the considerable regulatory costs imposed on SIFIs are for naught. Under Title IV, banks with $250 billion or more in assets continue to be subject to the heightened regulatory standards already imposed by Dodd-Frank.
Banks with between $100 billion and $250 billion in assets are statutorily required to be subject only to the Dodd-Frank Act’s supervisory stress tests, while the Federal Reserve has the ability to impose other regulatory requirements as appropriate. Banks with assets between $50 billion and $100 billion will no longer be subject to the heightened regulatory requirements under Dodd-Frank. The regulatory relief provided to midsized and regional banks will be an important step toward enhancing the banking system’s ability to meet the credit needs of the U.S. economy. At the end of 2018, there were 32 banks with between $50 billion and $250 billion in total assets. Together, they hold 22 percent of the banking industry’s assets. But few or none of them can truly be deemed to pose a systemic risk. As a result, the benefits of subjecting them to heightened prudential requirements, in which they are likely to fall far short of the costs they incur by being regulated in this manner, are questionable. 330 | Chapter 6 Box 6-5. Factors Driving the Long-Term Consolidation in Banking The number of federally insured banks and savings institutions declined from 18,033 at the end of 1985 to 5,406 at the end of 2018, a total decline of 70 percent. This consolidation has been characterized by two main features. First, there has been a dramatic decline in the number of very small institutions, those with assets less than $100 million. In 1985, there were 13,631 institutions with assets less than $100 million, making up 76 percent of federally insured banks and thrifts. By 2018, this number had declined to just 1,278, making up just 24 percent of all banks and thrifts. Some 98 percent of the net decline in federally insured institutions over this period took place among banks with less than $100 million in assets. This decline of the smallest banks can be attributed in large part to economies of scale in banking. 
A rough measure of economies of scale is the difference in total noninterest expenses as a percentage of average assets for banks in different size categories. FDIC-insured community banks reported a noninterest expense ratio of 2.75 percent in 2018, compared with 2.59 percent for the larger noncommunity banks. This 16-basis-point difference in overhead expenses translates into expenses that were $3.5 billion higher than they would have been at the ratio reported by noncommunity banks. This figure represents more than 13 percent of community bank net income in 2018. Hughes and others (2018) compare average operating costs and costs associated with overhead, reporting and compliance, and telecommunications across three asset size categories. They show that large community banks and midsized banks both have efficiency advantages over small community banks. Looking back to the mid-1980s, we see that total noninterest expenses as a percentage of average assets have diverged from what was rough parity in the late 1990s to an advantage of up to 100 basis points for the largest banks by 2017. Larger banks may benefit from economies of scale in absorbing the costs of technology and regulatory compliance, both of which have become more important over time.

[Figure 6-ii. Consolidation in U.S. Banking and Thrift Industries, 1934–2018. Number of institution charters at the end of the year: commercial banks and savings institutions; federally insured banks and thrifts. Sources: Federal Deposit Insurance Corporation; Federal Home Loan Bank Board.]

The second main feature of banking industry consolidation since the mid-1980s has been the emergence of a few very large institutions that have absorbed large shares of industry assets.
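The basis-point arithmetic above lends itself to a quick back-of-the-envelope check. The sketch below uses only the ratios and the $3.5 billion expense figure quoted in the text; the implied community bank asset base is derived from them and is an approximation, not an official FDIC figure.

```python
# Back-of-the-envelope check of the overhead-cost comparison above.
# Ratios are from the text; the asset base is backed out from them
# and is only an approximation, not an official FDIC figure.

community_ratio = 0.0275     # community bank noninterest expense / average assets, 2018
noncommunity_ratio = 0.0259  # noncommunity bank ratio, 2018
gap = community_ratio - noncommunity_ratio

print(f"overhead gap: {gap * 10_000:.0f} basis points")  # 16 basis points

# The text puts the extra expense implied by this gap at $3.5 billion,
# which pins down the approximate community bank asset base.
extra_expense = 3.5e9
implied_assets = extra_expense / gap
print(f"implied community bank assets: ${implied_assets / 1e12:.2f} trillion")
```

Dividing the $3.5 billion expense difference by the 16-basis-point gap implies a community bank asset base of roughly $2.2 trillion.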
The share of industry assets held by the four largest banking organizations rose from 11 percent in 1985 to 40 percent at the end of 2018. Here, too, economies of scale have played an important role. The ratio of noninterest expenses to total assets for banks larger than $250 billion in total assets fell by more than 100 basis points between 2000 and 2018. Consolidation since 1985 has come about through failures (2,619 between the end of 1985 and the end of 2018), intercompany mergers (8,722 since 1985), and intracompany consolidation of charters (5,123 since 1985). Most of the failures took place during two crisis periods, first in the 1980s and early 1990s, and then following the financial crisis of 2008.

[Figure 6-iii. Share of Industry Assets Held by U.S. Banking Organizations That Became the Four Largest by 2008, 1985–2018. Share of industry assets at the end of the quarter. Sources: Federal Deposit Insurance Corporation; calculations.]

Voluntary mergers and charter consolidations during this period were spurred not only by the prospect of economies of scale but also by the decline in regulatory restrictions on branching and interstate banking. Geographic deregulation facilitated the formation of larger, more efficient banks. Though this promoted greater efficiency and opportunities for diversification, it also helped create large, complex banks that have benefited from the perceived implicit government support of systemically important banks (figures 6-ii and 6-iii). There are a number of pending regulatory proposals to implement the provisions of S.2155. Taken together, Titles I through IV of the act represent a new approach to regulating financial institutions that reflects the Core Principles delineated at the outset of this Administration.
They will relieve community and regional banks from excessive and costly regulatory requirements that should apply only to SIFIs. Moreover, they preserve the elements of regulatory reform that were designed to contain the systemic risks that led to the financial crisis of 2008. But they take an approach that appropriately tailors regulatory requirements according to the activities and structure of the bank, and the level of risk that it poses to financial stability. Signing the bipartisan S.2155 bill into law is the most visible reform yet put into place by President Trump. This act is expected to have a range of long-term benefits for financial institutions, the economy, and the public. It levels the competitive playing field between the smaller community banks and credit unions and the larger, more complex financial institutions. It recognizes the vital importance of small and midsized banks, as well as the high costs and negligible benefits of subjecting them to regulatory requirements better suited for the largest financial institutions. S.2155 is expected to reduce regulatory burdens and help to expand the credit made available to small businesses that are the lifeblood of local communities across the nation. Additional steps taken to address regulatory concerns. As important as S.2155 is in scaling back costly and unnecessary regulations, a number of other, less well-known measures have been adopted that also represent progress in implementing the Administration's agenda. Addressing the CFPB's arbitration rule was one crucial step. In July 2017, the CFPB released a rule intended to ban certain financial companies from using mandatory arbitration clauses in consumer contracts, and to permit consumers to participate in class action lawsuits. But the new rule was later reconsidered after it was shown to increase the cost of credit for consumers.
In November 2017, the rule was nullified under the Congressional Review Act after a joint resolution to do so was signed into law by President Trump. Another priority is reform of the Community Reinvestment Act (CRA). In 1977, the CRA was enacted to encourage banks to meet the credit needs of all segments of their communities, including low- and moderate-income households. In response to growing feedback—including from the Department of the Treasury—that the CRA requires modernizing (especially with the rise in online banking), in August 2018 the OCC published its Advanced Notice of Proposed Rulemaking to seek input on the best ways to update the regulatory framework that supports the CRA. To improve credit availability in the areas most in need, the OCC is soliciting input on topics such as how to improve the current approach to performance evaluations, expand the activities that qualify for CRA credit, and better define communities. The data collection rule under the Home Mortgage Disclosure Act also needs improvement. In 2015, the CFPB issued an update to the 1975 Home Mortgage Disclosure Act, which expanded data disclosure requirements for lending institutions. The new rule, which was set to go into effect January 1, 2018, was delayed pending a review of and improvements to the CFPB's data security systems. The CFPB, with interagency assistance, concluded that its "security posture is well-organized and maintained," and it has recently sought comments from relevant parties as it considers whether changes to its data governance and data collection programs would be appropriate. It seeks to protect privacy without hindering its ability to accomplish its objectives and statutory mandate. Another needed initiative is updating the commercial real estate appraisal rule.
In a coordinated effort between the OCC, the Federal Reserve, and the FDIC, effective in April 2018, the threshold for commercial real estate appraisals was raised from $250,000 to $500,000. The rule amends Title XI of the Financial Institutions Reform, Recovery, and Enforcement Act of 1989, partly in response to concerns among relevant stakeholders that the prior threshold level did not reflect the appreciation of commercial real estate in the 24 years since the threshold was initially set. The three agencies determined that the increased appraisal threshold would materially reduce regulatory burden and the number of transactions requiring an appraisal, while not sacrificing the safety and soundness of financial institutions. The regulatory capital rule for small banks, with respect to transitions, is another improvement. The OCC, the Federal Reserve, and the FDIC adopted a final "Transitions Rule" that extends the 2017 regulatory capital treatment for certain items for smaller banks. The relief provided under this rule specifically applies to banking organizations that are not subject to the capital rules' advanced approaches, which tend to be smaller banks. This rule went into effect on January 1, 2018. And finally, the regulatory capital rule for small banks, with respect to simplifications, is being considered. The "Simplification Rule"—which was proposed by the OCC, the Federal Reserve, and the FDIC in October 2017—would aim to simplify compliance with certain aspects of the capital rule, particularly for smaller banks.

Conclusion

This chapter has chronicled the financial crisis of 10 years ago—its causes, its costs, and its consequences. The financial crisis of 2008 was the most severe systemic financial event in the U.S. since at least the 1930s. It was a self-reinforcing crisis that arose within the financial industry itself but soon spread to the wider economy.
The crisis exposed weaknesses in institutional and regulatory structures that were in dire need of reform. Before it was over, the Federal government was required to provide assistance to financial institutions that was unprecedented in its scale and its scope. Notwithstanding this extraordinary support, the crisis took a heavy toll on U.S. economic activity that affected the vast majority of Americans. The declines in manufacturing, construction, employment, and overall economic activity that came after the crisis were historically large and long lasting. These economic effects underscored the need for an appropriate regulatory response that would enhance the stability of financial markets and institutions, and would protect the American people from the consequences of enduring a future crisis. The Financial Crisis Inquiry Commission was organized in 2009 to examine the causes of the crisis and inform the regulatory reforms that were sure to follow. The FCIC released a 662-page report in January 2011 documenting the various factors that exacerbated the financial crisis. This report did not receive bipartisan support. But it did provide first-hand accounts from a wide range of bankers, regulators, and analysts that could have been considered as reforms were being planned. Yet even before this report was published, Congress passed the 2010 Dodd-Frank Act—a sweeping overhaul of financial regulation that did not result in a rapid economic recovery and that has had a number of unintended consequences. The Dodd-Frank Act has proven to be a misguided approach to regulatory reform. It called for almost 400 new rules, not all of which have been implemented. The act placed unnecessary burdens on banks and their customers through its frequent overreach and its prescriptive approach to regulation. There is a growing body of evidence that Dodd-Frank's one-size-fits-all approach has been very costly for community banks and for the small businesses that depend on them for credit.
Studies confirm the economies of scale that are associated with regulatory compliance, and support the notion that postcrisis regulatory changes have had a disproportionate effect on small and midsized banks. This regulatory approach has had substantial economic consequences. The average pace of economic growth in the first eight years of the expansion, through 2016, was the slowest of any U.S. expansion since 1950. Dodd-Frank was especially problematic in discouraging small business lending and in promoting consolidation among small and midsized banks. Although community banks had little to do with the onset of the financial crisis, their numbers have fallen by more than 2,400, or one-third, since 2008. FDIC data show that community banks are vitally important to communities that are not served by larger institutions. They also make small business loans in proportions that are almost three times higher than their share of total industry loans. Similarly, the importance of small businesses for the U.S. economy goes well beyond the roughly two-thirds of new jobs they typically create. Small businesses have traditionally been a source of strength for their communities and a source of innovation where new and different ideas can be pursued. And they rely heavily for funding on community banks, whose local focus helps them meet their credit needs. From its first days, the Trump Administration outlined a more informed approach to financial regulation that will make the Nation's financial system more efficient and more effective. Seven Core Principles for financial regulation were outlined at the outset of the Administration, calling for an end to taxpayer bailouts, a more accountable regulatory framework, more and better analysis before imposing new regulations, a leveling of the competitive playing field between U.S.
and foreign banks, and other steps to enable Americans to make their own informed financial decisions in a stable financial system. From the start, it was emphasized that well-reasoned financial reforms would be essential to bring the pace of the United States' economic growth and its standard of living up to their true potential. During the past two years, the Department of the Treasury has issued four reports that made detailed recommendations consistent with the Administration's Core Principles. With regard to depository institutions, the Treasury has recommended a series of changes designed to simplify regulations and reduce their implementation costs, while maintaining high standards of safety and soundness and ensuring the accountability of the financial system to the American public. These recommendations have been discussed at length in this chapter. A number of these recommendations were implemented in the Economic Growth, Regulatory Relief, and Consumer Protection Act of 2018, which was enacted in May 2018 and is generally referred to as S.2155. The act addresses some of the most important shortcomings associated with Dodd-Frank, and does so in a way that does not undermine the safety and soundness of the banking industry. It provides regulatory relief to small banks by recognizing their judgment in terms of the mortgage loans they hold, simplifying their capital requirements, and giving them a presumption of compliance with regard to proprietary trading, which few of them do in the first place. Most importantly, S.2155 scales back the heightened regulatory standards that were applied to midsized banks—those with assets between $50 billion and $250 billion—that were treated as systemically important banks under Dodd-Frank.
As a result, more than 30 midsized and regional banks that hold 22 percent of industry assets can get relief from unnecessary regulatory standards that would otherwise limit their ability to grow, prosper, and serve their customers. This change is an example of how a one-size-fits-all approach is giving way to "tailored" regulatory standards that are matched to the actual level of risk imposed by each institution. The Administration's agenda is ambitious, and its accomplishments thus far are many. This effort is part of the Administration's overall push, along with other forms of deregulation (see chapter 2) and tax reform (see chapter 1), to reverse the historically slow economic growth of the immediate postcrisis period, and to enhance the performance of the economy so it can reach its true potential. The election of President Trump in November 2016 produced an immediate increase in small business confidence that has remained in place ever since. The pace of economic activity has quickened during the Administration's first two years. After the enactment of the Tax Cuts and Jobs Act, real economic growth rose to an annualized rate of more than 4 percent for the first time in four years. The recovery in business confidence, hiring, and investment spending since 2016 suggests that we will see higher potential growth in the years ahead. Most important, these commonsense adjustments to the financial regulatory framework signal an end to the war on Wall Street that took place in the immediate aftermath of the 2008 financial crisis. The implicit support for the largest banks and the government-sponsored enterprises has been rolled back by the reforms enacted thus far. The diverse U.S. financial system will continue to include elements that meet the needs of corporations, of Main Street, and of everyone and everything in between. A smoothly functioning and prosperous financial industry has long been one of the pillars that has supported the development of the U.S.
economy into the largest and most stable in the world. The sensible financial reforms being pursued by the Trump Administration suggest that this institutional strength is back, and will endure in the decades to come.

Chapter 7 Adapting to Technological Change with Artificial Intelligence while Mitigating Cyber Threats

Although technological change has always had significant effects on economic activity, artificial intelligence (AI) and high-speed automation are among its most important recent manifestations. The expansion of computing power and the availability of big data have fueled remarkable advances in computer science, enabling technology to perform tasks that traditionally required humans and significant amounts of time. However, along with their promise of continued productivity growth, these advances also threaten to significantly disrupt the labor market, particularly among people whose work involves routine and manual tasks. Astute policymaking will play an integral role in leveraging technology as an asset for the country, while mitigating potential disruptions. The first section of this chapter briefly defines AI and corresponding advances in computer science. AI's most distinctive feature is that it can be used to manage a wide range of highly complex tasks with little required supervision, relative to conventional technology. This general applicability broadens the types of tasks where AI could plausibly be a substitute for human labor, underscoring both the economic promise of AI and its potential risks. The second section places AI within the broader historical context of technological change and highlights the CEA's predictions for its short-, medium-, and long-run effects on productivity and wages.
Although we may experience a span of years where AI substitutes for human labor in many tasks, AI, like much technological change, will ultimately benefit labor through greater productivity and real wage growth. The third section explores the heterogeneous effects of AI and automation across industries and the skill distribution. Using autonomous vehicles as a case study, we show that one of the key factors for understanding the impact of technological change on employment is the price elasticity of demand. AI is expected to have a positive net effect on industrial employment, though there could be subsector-specific price declines based on changing consumer demand. The fourth section pivots to the possible risks of technological advances. Building on findings in the 2018 Economic Report of the President on the cost of cybersecurity breaches, we analyze how measurement problems related to these breaches make it difficult to estimate their costs. We present new data from 2018 on the pervasiveness of cybersecurity vulnerabilities and the paucity of firms' responses to them across Fortune 500 companies. The fifth and final section highlights the role of policy and the considerable strides that have been taken by the Trump Administration during the past two years. The Administration will continue to embrace technological change, while maximizing its promise and minimizing its risk. Recent years have seen enormous advances in computer science, leading to skyrocketing hardware and software capabilities. The refinement of computers continues to advance at a rapid rate. The computational power that took up enormous refrigerated rooms a few decades ago has been miniaturized to a fraction of its former size. Moreover, computer scientists and engineers have made remarkable discoveries in artificial intelligence (AI) and automation.
These advances have complemented years of rapid growth in computer processing power, along with the explosive growth in the availability of digitized data. According to two prominent scholars, "the key building blocks are already in place for digital technologies to be as important and transformational to society and the economy as the steam engine" (Brynjolfsson and McAfee 2014, 9). In last year's Report, we highlighted one aspect of the rapid diffusion of computer technology: the increasing exposure of the economy to malicious cyber activity—for example, cybercrime. We found that cybercrime had expanded so much that in 2016 alone it caused up to $109 billion in harm to the economy. Yet computers have, of course, created many more benefits than costs, and their rapid evolution promises to fundamentally transform the economy in the decade ahead. In 2016, President Obama's Council of Economic Advisers published a sweeping report outlining the likely economic impact and policy challenges of accelerating technological change. One metric of how rapidly the sector is advancing is that already, in 2018 and 2019, enough change has occurred that an update of the previous reports is essential for meeting the challenges of the next decade and beyond. We look ahead in wonder at the possibilities of advanced thinking machines, but also worry that automation will proceed at such a rapid pace that many workers in today's economy will suddenly find themselves superfluous or disconnected from competitive job opportunities. We also consider the additional cybersecurity risks posed by the increased reliance on information technology. In this chapter, we dig deeper than we did a year ago into the promise and risks of the ongoing computer science revolution. We begin by reviewing the latest developments in AI and automation, discussing their likely economic effects.
The central theme of the first section of this chapter is that a narrow, static focus on possible job losses paints a misleading picture of AI's likely effects on the Nation's economic well-being. With technological advances, specific types of legacy positions are usually eliminated, though new jobs and evolving work roles are created—increasing real wages, national income, and prosperity over time. Automation can complement labor, adding to its value; and even when it substitutes for labor in certain areas, it can lead to higher employment in other types of work and raise overall economic welfare. This will likely be what happens as AI transforms more and more aspects of the economy, though new cybersecurity challenges will arise. In the years to come, AI appears poised to automate tasks that had long been assumed to be out of reach. Thus, we also analyze the important role of reskilling, apprenticeship initiatives, and future hiring processes to help mitigate the potentially disruptive employment effects of technological change and automation throughout the skill distribution. One key question for economists today is whether—in addition to improving traditional productive processes—AI will alter the processes whereby creative new ideas are generated and implemented. In other words, is AI simply the next phase in automation, or is it a real break from the past with unique implications? We explore both possibilities, but conclude that AI is likely to have major effects on the value of different skill sets and the rate at which they appreciate and depreciate. In particular, in the long run, aggregate wages will be higher because of these new advances. We then turn to an update of our previous research on the economic vulnerabilities associated with the diffusion of technology and mobile computing capabilities into virtually every corner of our lives.
Technology is leading to new and constantly evolving complex security challenges because individuals, firms, and governments are already reliant on interconnected and interdependent technology. Whereas past conflicts unfolded on land, sea, and air, future conflicts and criminal activity will increasingly take place in cyberspace. Drawing on new data, we document that cyber vulnerabilities are quite prevalent—even in Fortune 500 companies with significant resources at their disposal. Although these new data do not allow us to update our 2018 estimate of the economic costs of malicious cyber activity, the latest data suggest that our previous estimate might have been too low, given the underreporting of cybercrime. We conclude by discussing the initiatives that are being implemented by the Trump Administration and the policy challenges that lawmakers will likely face in the years ahead.

What Is Artificial Intelligence?

Although there is no universal definition of artificial intelligence (AI),1 the Future of Artificial Intelligence Act of 2017 (H.R. 4625), for example, defines AI as "any artificial system that performs tasks under varying and unpredictable circumstances, without significant human oversight, or that can learn from their experience and improve their performance. . . . They may solve tasks requiring human-like perception, cognition, planning, learning, communication, or physical action."2 These intelligent systems generally use machine learning to form predictions and adaptively make adjustments based on new information in their environment (Russell and Norvig 2010). Because AI has such a wide array of applications across sectors and disciplines, it is viewed as a general purpose technology and an important source of economic growth (Agrawal, Gans, and Goldfarb 2018).
Automation technologies usually focus on automating a specific process, or multiple commonly understood processes, to reduce labor intensity; this differs greatly from the highly complex, human-like decision logic that has already been observed in the emerging embodiments of AI. Although the general concepts and algorithms within AI are decades old, AI has emerged as an especially powerful and widely applied tool for performing not only existing tasks much more efficiently but also new tasks that were traditionally viewed as infeasible.

1 A recent study by Deloitte (2017) contains survey results that point out ambiguity in how many top executives and everyday citizens define AI.
2 Similarly, in the National Defense Authorization Act for Fiscal Year 2019, "the term 'artificial intelligence' includes the following: (1) Any artificial system that performs tasks under varying and unpredictable circumstances without significant human oversight, or that can learn from experience and improve performance when exposed to data sets. (2) An artificial system developed in computer software, physical hardware, or other context that solves tasks requiring human-like perception, cognition, planning, learning, communication, or physical action. (3) An artificial system designed to think or act like a human, including cognitive architectures and neural networks. (4) A set of techniques, including machine learning, that is designed to approximate a cognitive task. (5) An artificial system designed to act rationally, including an intelligent software agent or embodied robot that achieves goals using perception, planning, reasoning, learning, communicating, decision making, and acting."

[Figure 7-1. Error Rate of Image Classification by Artificial Intelligence and Humans, 2010–17. Y axis: error rate (percent); series: AI, Human. Sources: Russakovsky et al. (2015); CEA calculations.]
To give just one example, researchers have created AI algorithms capable of classifying images even more reliably than humans can under certain conditions, and at a much faster rate and scale than ever before (figure 7-1)—although these algorithms can still be tricked by savvy programmers (CSAIL 2017). More examples abound in other areas, ranging from natural language processing to theorem proving (Artificial Intelligence Index 2017). Other types of computer science and AI advances include solutions to automate high-skill human cognitive tasks, such as automated reasoning and intelligent decision support systems (Arai et al. 2014; Davenport and England 2015; Kerber, Lange, and Rowat 2016; Mulligan, Davenport, and England 2018). The convergence of two factors has made these remarkable advances possible. First, accumulated decades of sustained growth in technology have led to an explosion in computing power. As Gordon Moore (1965) first observed, computing power historically doubles every 18 months. These advances have led to an increase in transistor density, which, combined with the declining cost of manufacturing integrated circuits, has produced a staggering increase in computing power (Brynjolfsson and McAfee 2014).3 Moreover, lower manufacturing costs for hardware have been complemented by annual price declines in cloud computing of 17 percent between 2009 and 2016 (Byrne, Corrado, and Sichel 2018). Second, the colossal increase in data availability has complemented the surge in computing power, allowing researchers to develop and test AI algorithms on much larger data sets.4 The emergence of big data has been driven by "digitization," which means the ability to take different types of information and media, ranging from text to video, and convert them into streams of ones and zeros—"the native language of computers and their kin" (Brynjolfsson and McAfee 2014, 37).
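The two compounding trends just cited—computing power doubling roughly every 18 months, and cloud computing prices falling about 17 percent per year between 2009 and 2016—can be made concrete with a short calculation. This is purely illustrative arithmetic, not a forecast:

```python
# Illustrative compounding of the two trends cited in the text:
# Moore's law-style doubling and the 2009-16 cloud price decline.

def growth_factor(years, doubling_period_years=1.5):
    """Cumulative growth factor from doubling every 18 months."""
    return 2 ** (years / doubling_period_years)

# A decade of 18-month doublings yields roughly a 100-fold increase.
print(f"compute growth over 10 years: {growth_factor(10):,.0f}x")

# A 17 percent annual price decline compounded over 2009-16 leaves
# prices at roughly a quarter of their starting level.
annual_decline = 0.17
years = 2016 - 2009
remaining_price = (1 - annual_decline) ** years
print(f"cloud price level in 2016 relative to 2009: {remaining_price:.2f}")
```

Compounding is what makes both trends so powerful: modest per-period changes accumulate into roughly a hundredfold increase in computing power over a decade and a roughly 73 percent cumulative fall in cloud prices over seven years.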
Researchers have also found creative ways to convert different types of digital media into comprehensive sets of numeric quantities, which often involves “feature engineering,” or optimizing the permutations of data inputs, to produce reliable predictions (Arel, Rose, and Karnowski 2010).

Machine Learning

Machine learning (ML) is integral to the design and implementation of AI (Russell and Norvig 2010). Unlike conventional computer programs, which execute a set of prespecified rules, AI is defined by the ability to learn and adapt to its environment.5 There are three main types of ML algorithms—supervised, unsupervised, and reinforcement learning—which we summarize in the next paragraphs (Hastie, Tibshirani, and Friedman 2009).

First, supervised learning algorithms take a set of descriptive variables that are matched with a corresponding label (“outcome variable”) and “learn” the relationship between the two. For example, to predict college attainment, a researcher could use data on whether an individual has a college degree, together with a set of individual characteristics, such as parental education and gender, to estimate classification models. Supervised learning algorithms take a subset of the sample and search for the parameters that best fit the data based on a prespecified objective function.

Second, unsupervised learning algorithms, in contrast to supervised ones, take a set of feature variables as inputs and detect patterns in the data without reference to labels. Though these algorithms have not been applied as prolifically as supervised learning algorithms, they are often used to simplify otherwise computationally demanding problems by reducing the number of variables that need to be tracked, sometimes referred to as “dimensionality reduction” (Bonhomme, Lamadon, and Manresa 2017).

Third, reinforcement learning algorithms have been among the most influential classes of algorithms in the emerging set of AI and big data applications. Unlike supervised and unsupervised algorithms, reinforcement learning algorithms do not require a complete representation of input/output pairs; they require only an objective function, which specifies how the intelligent system responds to its environment under arbitrary degrees of stochasticity (i.e., the extent to which it involves a random variable). Consider the game of chess, which contains millions of potential moves. Though individuals face cognitive limitations that preclude internally simulating thousands, and potentially millions, of scenarios simultaneously, “deep” reinforcement learning algorithms have largely overcome these limitations. For example, Google’s AI algorithm AlphaZero defeated the world’s best chess engine, Stockfish. Unlike Deep Blue—the IBM supercomputer that defeated Garry Kasparov, then the world chess champion, in 1997—AlphaZero trained itself to play like a human, but at an unprecedented scale and aptitude (Gibbs 2017).

3. An integral part of the efficiency gains among producers of computer equipment is the rapid decline in effective prices of semiconductors due to advances in chip technology (Triplett 1996). These empirical patterns have also continued during the past decade. For example, Byrne, Oliner, and Sichel (2017) find that semiconductor prices fell by 42 percent between 2004 and 2009, relative to the meager 6 percent decline in the producer price index.
4. Computer scientists often refer to the process of developing and testing AI algorithms as “training”: estimating model parameters on a subsample and then using the estimated parameters to predict out of sample. The quality of the out-of-sample prediction is used, sometimes iteratively, to tune model parameters.
5. Russell and Norvig (2010, 43) remark that algorithms in deterministic settings are not a form of AI because they are executing a set of preprogrammed tasks.
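The supervised-learning workflow just described (estimate parameters on a training subsample, then predict out of sample) can be sketched in a few lines. The data and the nearest-centroid rule below are hypothetical stand-ins chosen for brevity, not the methods used in the studies cited above.

```python
# Minimal supervised-learning sketch: "learn" a relationship between a
# feature and a binary label on a training subsample, then predict out
# of sample. Data and model are illustrative stand-ins.

# Each row: (parental education in years, label: 1 = college degree)
data = [(8, 0), (10, 0), (11, 0), (12, 0), (14, 1), (16, 1), (17, 1), (18, 1)]

train, test = data[:6], data[2:]   # hold out part of the sample
test = data[6:]

# "Training": estimate one parameter per class (the mean feature value).
def fit(rows):
    means = {}
    for label in (0, 1):
        vals = [x for x, y in rows if y == label]
        means[label] = sum(vals) / len(vals)
    return means

# Prediction: assign the class whose mean is closest (nearest centroid).
def predict(means, x):
    return min(means, key=lambda label: abs(x - means[label]))

params = fit(train)
accuracy = sum(predict(params, x) == y for x, y in test) / len(test)
print(f"Out-of-sample accuracy: {accuracy:.2f}")
```

The quality of the out-of-sample prediction, as footnote 4 notes, is what practitioners use to tune the model.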
One way a reader can picture this evolution of computing power is by considering the computer modeling of sports outcomes. It is now common for commentators at sporting events to announce midgame the probability, given the current score, that the team currently ahead will go on to win the game. At one point during the 2017 American football championship game, Super Bowl LI, the New England Patriots had a mere 0.3 percent chance of victory (ESPN Analytics 2018). This probability was calculated from data on previous games: an analysis of the percentage of times that a team went on to win after trailing by a certain margin deep into the third quarter. Algorithms used by various networks and media platforms allow these odds to be constructed from the historical performance of past teams that were in similar situations. Moreover, as with other games, like chess, estimating probabilities of winning can grow in complexity because of real-time interactions between the players, as well as the astronomical number of possible outcomes that can be reached, even without repetitive actions between the start and end of a game. In a game with finite outcomes, given an enormously powerful computer and a set of initial conditions describing the configuration of pieces on the board, a program could explore all possible moves and responses from that state and “solve” the game. Such a computer would then, for a given player, recommend the move from that initial state associated with the highest probability of victory for that player. However, because the number of possible future states associated with almost every position in a chess match is astronomically large, software must instead discover the types of moves that tend to lead to victory; exhaustively exploring all future paths to derive an exact solution is computationally infeasible.
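At its core, the midgame win probability described above is a conditional frequency: among historical games that reached a similar situation, the share in which the trailing team went on to win. The records below are fabricated for illustration; real systems condition on a far richer game state.

```python
# Toy win-probability model: estimate P(comeback win | trailing by at
# least `margin` deep in the game) as an empirical frequency over
# historical records. The records below are fabricated for illustration.

# Each record: (deficit at end of the third quarter, trailing team won?)
history = [
    (3, True), (3, False), (7, False), (10, False), (14, False),
    (7, True), (21, False), (17, False), (10, False), (4, True),
]

def comeback_probability(records, margin):
    """Share of games won by teams trailing by at least `margin`."""
    similar = [won for deficit, won in records if deficit >= margin]
    if not similar:
        return None   # no comparable historical games to condition on
    return sum(similar) / len(similar)

print(comeback_probability(history, 10))  # large deficits: prints 0.0
```

Estimating such conditional probabilities from data remains feasible even when, as discussed above, exhaustively solving the game is not.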
A computer equipped with AI, however, can combine human-style judgment with computed probabilities of victory. The improved predictions that result lead the AI algorithm to “play” the game rather than attempt to solve it.

Applications of AI Technology

Today, facial recognition is possible because data (e.g., images) can be not only digitized but also collected and analyzed at scale. Suppose our AI machine, in addition to assessing the remaining possible outcomes, could also discern the identities of the players themselves and use this information to further revise its predictions based on knowledge about the two players. For instance, the probabilities of victory associated with an advantageous position would need to be updated if player 1 were an amateur and player 2 a professional. However, if player 1’s position were so advantageous that the odds of victory were 99.7 percent, even someone as talented as the professional could lose when forced to start from a severely disadvantaged position. In addition to assessing situations from a static perspective, an AI algorithm that can discern the identity of the player through facial recognition can choose strategies that are tailored to that player’s weaknesses. Another example of how AI can complement society and human tasks is through its effects on the delivery and production of educational services. One of the primary types of AI educational applications is personalized learning algorithms that allow instructors to tailor information to the unique ways that individuals learn. For example, Georgia State University sends customized text messages to students during the college enrollment process, which Page and Gehlbach (2017) find is associated with a 3.3-percentage-point increase in the probability that individuals will enroll on time.
Similarly, Arizona State University uses adaptive and hybrid learning platforms that enable teachers to offer more targeted learning experiences (Bailey et al. 2018). These platforms provide instructors with real-time intelligence about how well their students understand each concept, allowing instructors to pivot, when needed, to improve the learning experience. In sum, economists find significant gains in student outcomes from these “edtech” programs (Escueta et al. 2017). Given that at least 54 percent of all employees will require significant reskilling and/or upskilling by 2022 (World Economic Forum 2018), educational institutions will need to become increasingly adaptive, finding ways to integrate technology to simultaneously reduce costs, improve quality, and increase agility.

AI systems have mastered tasks that have traditionally been performed by humans. One way of measuring the breadth of these AI-based applications is to examine the clusters of emerging research content. Using the universe of Scopus and Elsevier articles, Elsevier (2018, 34) identified seven clusters of AI capabilities: “machine learning and probabilistic reasoning, neural networks, computer vision, natural language processing and knowledge representation, search and optimization, fuzzy systems, and planning and decisionmaking.” Moreover, using the subset of papers that have been uploaded to the research platform arXiv, Elsevier (2018) finds that articles about core AI categories posted on arXiv have increased by 37.4 percent in the past five years. These sustained research efforts will continue to expand AI’s capabilities.
Indeed, Brynjolfsson and McAfee (2014, 52) remark that “we’re going to see artificial intelligence do more and more, and as this happens costs will go down, outcomes will improve, and our lives will get better.” Already, AI is being applied in four main areas of the marketplace, according to Lee and Triolo (2017): (1) the Internet (e.g., online marketplaces); (2) business (e.g., data-driven decisionmaking); (3) perception (e.g., facial and voice recognition); and (4) autonomous systems (e.g., vehicles and drones). Take, for instance, the domain of perception AI. One discovery helps visually impaired individuals use a device with digital sensors that survey the physical environment and create sound waves through the bones of the head. The technology clips onto eyeglasses; after being oriented toward text within the user’s vision and signaled by the wearer to read, the device reads and verbalizes the text (Brynjolfsson and McAfee 2014). Similarly, Brynjolfsson and McElheran (2016) illustrate how manufacturing establishments that use data to inform their decisionmaking exhibit greater productivity than their counterparts. Companies in the digital economy will increasingly compete based on their ability to use data efficiently and strategically.

Technological Progress and the Demand for Labor

This section explores the interaction between technological progress and the demand for labor. First, it gives a brief history of technological change and work. Then it describes the effects of technological progress on investment and wages. Finally, it considers how specialization and comparative advantage affect trade between people and machines.

A Brief History of Technological Change and Work

Do technological advances reduce employment? That is not a new question—concern about job losses caused by automation dates back at least two centuries.
During the early 19th century, English artisans (Luddites) in the rapidly changing textile industry famously attempted to destroy the mills and automated machine looms that they believed threatened their livelihoods. Despite the Luddites’ opposition to automation, the next two centuries witnessed the mechanization of much of the physical labor performed by workers (Galor and Weil 2000). The agriculture sector provides a notable example. Tractors replaced horsepower and manual labor in plowing work, and labor-intensive manual tasks were mechanized (Rasmussen 1982). Similar examples abound among many types of skilled artisanal work after the introduction of machine tools, as well as in the transformation of manufacturing after advances such as steam power and electricity. Automation’s effects on labor are no longer confined to manufacturing and agriculture (Brynjolfsson and McAfee 2014; Autor 2015; Polson and Scott 2018). Computers and constantly evolving software have eliminated the need for many of the administrative and clerical tasks that had long been performed by white-collar workers in commercial business. Indeed, before the word “computer” referred to a microprocessor on a desk, it was a job title for a person who laboriously performed simple arithmetic or more complex mathematical calculations. Today, an accountant or financial specialist can do in seconds what would once have taken hours or days of painstaking computation by a team of educated people. An online tax preparation system can do much of what a professional certified public accountant might have done, while being faster and more accurate. White-collar work environments are likely to undergo further disruptive changes as AI technologies continue expanding into logistics and inventory management, financial services, complex language translation, the writing of business reports, and even legal services.
Even medical diagnoses are likely to involve AI technologies in the foreseeable future. Economists and policymakers have long studied the question of job displacement caused by technological advancement. In just one example, in 1964 Congress authorized the National Commission on Technology, Automation, and Economic Progress to study the effects of technological advancement, particularly in relation to unemployment. The commission’s 1966 report included the finding that “technology eliminates jobs, not work” (Bowen 1966, 9). In a more contemporary discussion, David Autor (2015, 5) noted that “journalists and even expert commentators tend to overstate the extent of machine substitution for human labor and ignore the strong complementarities between automation and labor that increase productivity, raise earnings, and augment demand for labor.” Though the introduction of new technologies can create job displacement, examining technological change from a historical perspective shows that these transformations do not lead to permanently lower employment, but rather to increased demand for new tasks (Mokyr, Vickers, and Ziebarth 2015).

Effects of Technological Progress on Investment and Wages

Capital investments, such as those in machines and software, embody AI, which Brynjolfsson, Rock, and Syverson (2017) call a general purpose technology. New investments that embody AI are expected to be more like (“closer substitutes for”) labor than traditional capital investments were. Here, we begin by relating capital to labor and productivity and explain why labor is expected to receive most of the net benefits from AI in the long run. In particular, we argue that, though AI is expected to increase real wages on average, the economy passes through three phases of adjustment in which the wage effects differ.
In the anticipation phase, real wages are somewhat elevated as businesses begin to switch to activities that are intensive in cognitive tasks but do not yet have machines that can adequately perform those tasks. Then AI arrives and can fill many of these positions, temporarily depressing real wages during the implementation phase as workers compete with the new machines. In the long run, business formation catches up with the new technology and real wages are higher. Growth in labor productivity can come from changes in three distinct factors: a rise in the quality of labor, which can occur with greater education, training, or skill attainment; a rise in capital, which occurs when firms invest in productive inputs, such as machines, factories, or computers; or a rise in what economists call total factor productivity (TFP), which pertains to other determinants of productivity, ranging from regulatory frictions to unmeasured quality improvements (Solow 1957). TFP growth often increases real wages and the return to capital in the short run because it makes both factors more productive.6 A greater return to capital also stimulates additional investment, leading to business creation and growth. As a result of the additional capital, real wages rise and, because new capital competes with old capital, the return to capital declines. Indeed, a century or more of economic growth has increased real wages by more than a factor of five (Fisk 2001; Zwart, van Leeuwen, and van Leeuwen-Li 2014), while the return to capital has remained almost constant over time (Caselli and Feyrer 2007; Mulligan and Threinen 2011). Nearly all the long-run benefits of TFP go to labor, by reducing the effective prices of goods and services or by raising total compensation (Caselli and Feyrer 2007; CEA 2018c).
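The three-way decomposition just described can be made concrete with standard growth accounting: under a Cobb-Douglas production function, TFP growth is measured as the residual after subtracting the contributions of capital deepening and labor quality from labor productivity growth (Solow 1957). The growth rates and the capital share below are hypothetical.

```python
# Growth-accounting sketch (Solow residual) under a Cobb-Douglas
# production function:
#   g_productivity = share * g_capital_deepening
#                    + (1 - share) * g_labor_quality + g_TFP.
# All growth rates and the capital share here are hypothetical.

def tfp_residual(g_labor_productivity, g_capital_deepening,
                 g_labor_quality, capital_share=0.35):
    """TFP growth as the unexplained residual of labor productivity growth."""
    explained = (capital_share * g_capital_deepening
                 + (1 - capital_share) * g_labor_quality)
    return g_labor_productivity - explained

# Example: 2.0% productivity growth, 2.0% capital deepening, 0.4% quality.
g_tfp = tfp_residual(0.020, 0.020, 0.004)
print(f"Implied TFP growth: {g_tfp:.4f}")
```

With these illustrative numbers, capital deepening and labor quality explain just under half of measured productivity growth, leaving the rest as the TFP residual.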
Although real wages trend up and the return to capital does not, as discussed above, labor’s share of gross domestic product (GDP) can be constant, rising, or falling, depending on the type of technological change and the degree to which the new investment substitutes for labor in the production process. In other words, some types of TFP growth may reduce labor’s share of GDP in the long run even while the entire benefit from TFP growth goes to workers in the form of higher real wages. For example, Karabarbounis and Neiman (2014) show that the decline in the relative price of investment goods (e.g., due to the expansion of information technology and computers) helps to account for the decline in the labor share. Although the TFP growth occurring during most of the 20th century did not reduce labor’s share of national income (Kaldor 1961), AI might reduce it in the long run to the degree that it is more substitutable for labor than 20th-century capital investments were. The transition to labor-substitutable AI is illustrated in figure 7-2 from the perspective of the capital market. Because a downward-sloping capital demand curve shows the relationship between the amount of capital and its marginal contribution to output, the area under the curve up to the equilibrium amount of capital is equal to total output. This output is divided between capital and labor, with capital’s income equal to the rectangular area, which has dimensions equal to the amount of capital and the rental rate per unit of capital. In the figure, the triangular area above the rectangle is the output not paid to capital, which is labor income. The arrival of AI makes new capital investments more productive, which is why the capital demand curve is shifted up by the discovery.

6. Our discussion of wages in the text that follows treats them as representing all compensation from work, including fringe benefits.
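The geometry just described can be computed directly for a stylized linear capital demand curve, r = a - b*Q. All parameter values below are invented purely to reproduce the qualitative story of figure 7-2: after AI, labor income rises even as labor's share of output falls.

```python
# Stylized linear capital demand curve, r = a - b*Q, as in figure 7-2.
# Output equals the area under the demand curve up to the equilibrium
# capital stock; capital income is the rectangle r*Q; labor income is
# the triangle above it. All parameter values are invented.

def factor_incomes(a, b, r):
    """Return (output, capital income, labor income) at rental rate r."""
    q = (a - r) / b               # equilibrium quantity of capital
    output = (a + r) / 2 * q      # trapezoid under the demand curve
    capital_income = r * q        # rectangle with sides r and q
    return output, capital_income, output - capital_income

R_LONG_RUN = 1.0                  # normal long-run return to capital

# Pre-AI demand is steep; post-AI demand is flatter (new capital
# competes less directly with old capital) and higher over the
# relevant range of capital stocks.
before = factor_incomes(a=5.0, b=1.0, r=R_LONG_RUN)
after = factor_incomes(a=3.0, b=0.125, r=R_LONG_RUN)

print("labor income:", before[2], "->", after[2])
print("labor share:", before[2] / before[0], "->", after[2] / after[0])
```

With these invented numbers, labor income doubles from 8 to 16 while labor's share falls from two-thirds to one-half, matching the figure's qualitative message that the rising capital share is a symptom of investment that also raises labor income.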
Initially, AI investments earn returns greater than the normal capital return, as at point b in figure 7-2, which stimulates more investment. The additional investment begins to drive down the return to capital, but more slowly than investment did in earlier eras, because the new investment does not compete as directly with existing capital; this is why the new demand curve is flatter than the old one. In the long run, the return to capital falls back to normal, the economy is at point c in figure 7-2, and labor income has increased by the amount of the shaded area L.7 Labor’s share is lower in the long run than it was before AI arrived, as shown in the diagram by the fact that the rectangular increment to capital income is disproportionate to L. Ironically, the addition to capital income is a symptom of more investment and real wage growth, due to the assumption that AI investments are more substitutable for labor than older types of capital. In the short run, after the arrival of AI, new investment that is a good substitute for labor reduces real wages to the extent that human workers compete with AI for jobs and the additional business formation is not yet complete. This phase resembles the commonly expressed concern that workers will be harmed by AI. In terms of figure 7-2, the capital rental rate r at point b is temporarily elevated, at the expense of labor income. However, it is important to also consider the phase before AI arrives. Here, real wages are elevated by the anticipation of AI because businesses are formed with the expectation that they will eventually have both human and machine labor, but in the meantime must perform their operations entirely with human labor.

7. In the limit in which AI is a perfect substitute for human workers, the area L is zero. The subsection of this chapter titled “Trade between People and Machines” explains why the perfect-substitution case is ruled out by market forces.

Figure 7-2. The Effect of AI on the Amount of Capital and the Distribution of Factor Incomes. (The figure plots the capital rental rate, r, against the aggregate quantity of capital, Q, with short-run and long-run supply curves and capital demand curves before and after AI. Points a, b, and c mark, respectively, the pre-AI equilibrium, the short-run post-AI equilibrium, and the long-run post-AI equilibrium; the area L is the increment to labor income. Source: Adapted from Jaffe et al. 2019.)

This stylized discussion highlights that, though AI can depress real wages for a period if it is a good substitute for labor, ultimately AI will raise real wages above what they were before AI because of the investment and increased productivity that it stimulates. These conclusions are consistent not only with theoretical models of economics featuring AI in general equilibrium (Aghion, Jones, and Jones 2017) but also with evidence on how the introduction of robots raised labor productivity across 17 countries between 1993 and 2007 (Graetz and Michaels 2018). Moreover, taking the information technology (IT) revolution as an analogue, Autor, Katz, and Krueger (1998) show that the introduction of computers led to strong and persistent growth in the demand for skilled workers, which accounts for the increased demand for (and subsequent expansion in the supply of) workers who have gone to college. In summary, even though AI is expected to temporarily decrease real wages, in the long run it will increase real wages, on average, because of the investment it stimulates. The next section highlights the role of comparative advantage in the reallocation of tasks across and within sectors of the labor market (Acemoglu and Autor 2011), explaining how firms will apply AI in ways that are complementary to labor and therefore have a more positive effect on real wages, and a less negative effect on labor’s share of GDP, than shown in figure 7-2.

Trade between People and Machines

When and how much is AI likely to substitute for human tasks?
The principle of comparative advantage tells us that human workers can benefit from being in the same market with machines, even if those machines excel at many traditionally human tasks. The benefit comes from workers’ specialization in the tasks that humans can do better than machines, or at least the tasks where humans are at the smallest disadvantage (Autor 2015). Specialization allows the machines to be used on their best tasks without wasting resources on tasks that people can do at a lower opportunity cost. To put it another way, even if it were technologically possible to let machines do all tasks, and do them better than humans do, an owner of the machines would sacrifice profits by deploying them without regard for specialization. Consider the operation of a store that requires cashier tasks, communication with suppliers, the delivery of products, and arranging displays. Suppose the AI machines perform the arrangement tasks 10 times better (in terms of speed and accuracy) than humans, and perform the other tasks 20 times better. Given comparative advantage, and assuming that the machines are cheap enough to justify using them for any task, profit-maximizing deployment will have workers performing the arrangement tasks, thereby freeing up machines to do the other tasks, where they are especially productive. The theory of comparative advantage means that humans inevitably have a role to play, even if they do not have an absolute advantage in every task. Moreover, the choice of which machines to deploy is not merely determined by what is technically possible with engineering and computer science.8 RoboCop, Star Wars’ C-3PO, and other near-human machines are great entertainment, but in many situations they would be poor investments precisely because of their close similarities to humans.9 Because machines and AI are ultimately another form of capital, designing machines to complement, rather than substitute for, humans will be more profitable.
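The store example can be checked numerically. The productivity ratios (10 times better at arranging displays, 20 times better elsewhere) come from the example above; the one-unit human output per task is an assumption made for illustration.

```python
# Comparative-advantage check for the store example: machines are 20x
# as productive as humans at most tasks but only 10x at arranging
# displays. Humans' relative disadvantage is smallest at arranging,
# so profit-maximizing deployment assigns that task to workers.
# Human output of 1.0 unit per task is an illustrative assumption.

human_output = {"cashier": 1.0, "suppliers": 1.0,
                "delivery": 1.0, "arranging": 1.0}
machine_output = {"cashier": 20.0, "suppliers": 20.0,
                  "delivery": 20.0, "arranging": 10.0}

def human_task(human, machine):
    """Task where the machine/human productivity ratio is smallest."""
    return min(human, key=lambda task: machine[task] / human[task])

print(human_task(human_output, machine_output))  # prints: arranging
```

Even though the machines hold an absolute advantage in every task, assigning arranging to humans frees machine capacity for the tasks where the machines' edge is largest, which is exactly the comparative-advantage logic in the text.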
In other words, the potential for specialization implies that producers will look for ways to magnify the differences between machines and people. For example, Abel and others (2017) explain how providing algorithms with expert (human) advice—part of a broader class of “Human-in-the-Loop Reinforcement Learning”—can improve various aspects of learning and prediction.

8. Consider the analogous case of agricultural tobacco production. Though some countries, like Brazil, have very labor-intensive tobacco production (Varga and Bonato 2007), U.S. production of tobacco is highly mechanized (Sykes 2008). For a similar illustration from cotton production, see FAO (2015). In this sense, the mere presence of capital does not guarantee its use; the opportunity cost of labor in an economy will drive the division of labor and the degree of specialization. Lagakos and Waugh (2013) formalize these insights within a general equilibrium Roy model with agriculture and nonagriculture sectors.
9. Research in human–machine interaction finds situations in which people can more easily and intuitively work with robotic partners when the robots look and behave in ways similar to humans. In these cases, people can project human expectations of how robots should act, and thus do not need to be trained (or to study user manuals) in order to figure out how to work with the robot.

The purposeful acquisition of comparative advantage has long been observed in human labor markets (Becker and Murphy 1992). Consider an electrician and a carpenter who work together to build a high-quality house. Their comparative advantage is obvious at the time they are building the house, but neither of them was born with his or her specialized skills.
They both chose to specialize knowing directly—or perhaps indirectly, through market prices—that they would be a more valued member of a construction team if they could excel at carpentry, or excel at electrical work, rather than having mediocre skills at both types of tasks. Robotics research already suggests that productivity is enhanced when machines specialize (Nitschke, Schut, and Eiben 2012). See also, for example, box 7-1, which describes the Defense Advanced Research Projects Agency’s (DARPA’s) initiatives regarding “partnering with machines.” In light of these examples of complementarity between AI and humans, the entertainment industry’s anthropomorphic portrayal of robotics and artificial intelligence is somewhat misleading about how much these types of investments will substitute for human workers. The concern, of course, is that the price associated with human tasks will decline to a point where humans are driven out of the workforce and no longer have an incentive to work. For example, some manufacturers might find that production is cheaper with complete automation than with a mix of human employees and AI. However, specialization and trade also occur at the market level. A robot-intensive business may engage in one phase of production, selling its output to a person-intensive business at a later phase of production. In this sense, even if certain tasks traditionally performed by humans are now done by machines, humans will nonetheless hold a comparative advantage in other tasks and thus will continue to play a role in production processes. Although there are some concerns about the complete automation of human activities (Frey and Osborne 2017), the emerging empirical evidence suggests that the main effects of AI and automation are on the composition of tasks within a job, rather than on occupations in general.
For example, Brynjolfsson, Rock, and Mitchell (2018) introduce an index of suitability for machine learning (SML), and they find that, though most occupations have at least some tasks that are SML, few (if any) have tasks that are all SML. Similarly, Nedelkoska and Quintini (2018) use data on skills across occupations in 32 countries, and they find that, though 14 percent of jobs face a probability of automation above 70 percent, 26 percent of jobs face a probability of automation of 30 percent or less. The key observation is that, as automation progresses, workers will increasingly be drawn to the jobs and tasks that are more difficult to automate. Astute policymaking will nonetheless play a role in promoting workforce development, particularly for less educated workers—through, for example, the Pledge to America’s Workers, which we discuss later in the chapter.

Box 7-1. DARPA: Strategic Investments in Artificial Intelligence and Cybersecurity

The Defense Advanced Research Projects Agency (DARPA) is focused on a future where AI is a complement to humans in the production of goods, services, and ideas—that is, where humans can safely “partner with machines” more as colleagues than as tools (DARPA 2018a). To facilitate this vision, DARPA is actively funding the development and application of a so-called third wave of AI technologies that would result in intelligent machines capable of reasoning in context. In particular, DARPA announced a $2 billion, multiyear investment in new and existing programs in September 2018.
These investment areas include “security clearance vetting or accrediting software systems for operational deployment; improving the robustness and reliability of AI systems; enhancing the security and resiliency of machine learning and AI technologies; reducing power, data, and performance inefficiencies; and pioneering the next generation of AI algorithms and applications, such as ‘explainability’ and commonsense reasoning” (DARPA 2018a). DARPA has already piloted a number of successful programs, including the Cyber Grand Challenge in 2016—a competition that showcased the state of the art in Cyber Reasoning Systems (DARPA 2018b). Competing systems played an “attack-defend” style of “Capture the Flag,” in which contestants were tasked with developing AI algorithms to “autonomously identify and patch vulnerabilities in their own software while simultaneously attacking the other teams’ weaknesses” (Hoadley and Lucas 2018). Although conventional cybersecurity programs may take up to several months to find and patch problems, the competing, largely rules-based algorithms found the bugs in seconds. According to DARPA (2016), “the need for automated, scalable, machine-speed vulnerability detection and patching is large and growing fast as more and more systems . . . get connected to and become dependent upon the Internet.” The major innovation of the Cyber Grand Challenge was the demonstration that AI can play both an offensive and a defensive role. DARPA continues to build out these human-machine cyber detection capabilities for pinpointing and addressing vulnerabilities through its Computers and Humans Exploring Software Security program, known as CHESS. The activities funded by CHESS involve helping computers and humans work collaboratively through tasks such as finding zero-day vulnerabilities at scale and speed.

The Uneven Effects of Technological Change

This section delineates the uneven effects of technological change.
It first considers these changes’ differential effects by occupation and skill. Then it explores the scale and factor-substitution effects of an industry’s technological progress and how they moderate the effect on labor. Finally, the section asks when we will see the effects of AI on the economy.

Differential Effects by Occupation and Skill

Many types of technological change affect workers and industries in heterogeneous ways. For example, the widespread adoption of computers and information technology during the past several decades has enormously increased productivity for certain types of workers but has brought comparatively little or no productivity enhancement for others (Acemoglu et al. 2014). Because earnings are determined by workers’ productivity, such changes in technology are expected to have varying effects on workers with different sets of skills, such as workers with or without a college or graduate education (Katz and Murphy 1992). Economists have concluded that “skill-biased technical change” can account for most of the observed rise in earnings disparities between some higher-skilled workers (whose productivity was greatly enhanced by technology, like computers) and some lower-skilled workers (who were less affected), a disparity that was amplified during the IT revolution (Autor, Katz, and Krueger 1998; Autor, Levy, and Murnane 2003). This disparity is in part explained by the complementarity between capital and certain types of skills (Krusell et al. 2000). In the context of AI and automation, this complementarity means that gains in processing power mainly benefit workers who can use computer technology. In this sense, the more rapid increase in earnings among college-educated workers, despite the corresponding rise in the supply of these workers, represents a skills premium for individuals who can leverage technology to augment their productivity (Juhn, Murphy, and Pierce 1993).
The Scale and Factor-Substitution Effects of an Industry’s Technological Progress

Technological progress allows an industry to produce the same output with fewer inputs (e.g., workers). At first glance, we might therefore expect workers to leave the industry and find work elsewhere. One could point to the example of changes in agriculture in the 20th century, when the agricultural employment share dropped from 41 to 2 percent between 1900 and 2000 (Autor 2015), at the same time that agricultural TFP rapidly increased (Herrendorf, Rogerson, and Valentinyi 2014). See box 7-2 for an example of technological change in the agricultural sector that has fueled productivity.

Box 7-2. Technological Change in Agriculture and Rural America

Agriculture has been one of the sectors experiencing rapid technological change, including the computer science revolution. For example, output per hour in the agricultural sector grew annually by 4.3 percent between 1948 and 2011, whereas it grew annually by 2.4 percent in manufacturing (Wang et al. 2015). In particular, precision agriculture—which refers to a broad class of AI applications allowing for precise control over agricultural inputs based on detailed, site-specific data—has allowed farmers to improve the productivity of soil by better understanding the characteristics that are most conducive to growth within a specific geographic area; see figure 7-i for evidence on its incidence across peanut and soybean farming. Moreover, these systems contain sensors that allow farmers to monitor crop yields, as well as self-guided tractors and variable rate planters that adjust their seeding and fertilizer rates based on fertility and past yield data. In brief, these technologies have allowed corn and soybean farmers, among others, to produce more at lower cost (Schimmelpfennig 2016).

Figure 7-i. Precision Agriculture Use in Peanuts (2013) and Soybeans (2012). (The figure shows the share of planted acres, in percent, using some form of precision agriculture, yield monitors, GPS-based soil properties maps, variable rate technology, and guidance or auto-steering systems. Source: United States Department of Agriculture. Note: GPS = Global Positioning System.)

AI is also used in animal agriculture. For example, over 35,000 robotic milking systems are in operation globally on dairy farms. According to Salfer and others (2017), farms using robotic milking systems are much more productive, selling 43 percent more milk per hired worker and 9 percent more milk per cow. Moreover, rather than displacing humans, the introduction of automation on dairy farms has allowed labor and management to reallocate their time toward maintaining animal health, analyzing records, and managing reproduction and nutrition on the farms. For example, John Deere runs a two-year associate degree program to help its employees not only stay current on the latest farming machine tools but also acquire new skills in data science (Burkner et al. 2017). However, rural Americans have not always seen the gains of technological progress (Forman, Goldfarb, and Greenstein 2012). Motivated by these disparities, President Trump signed Executive Order 13821 in January 2018 (White House 2018c), expanding and streamlining access to broadband in rural America. Given the importance of high-speed Internet access for data science capabilities, connectivity in rural America is essential for its economic competitiveness. Moreover, the Trump Administration is committed to investing in and promoting workforce development through, for example, the Pledge to America’s Workers, which we discuss below in this chapter’s main text.

However, as an industry’s productivity advances, it produces each unit of output at a lower cost and can thereby sell at lower prices.
Consumers of this output respond by purchasing more, which is a force toward more industry employment known as the “scale effect” on labor demand. The productivity revolution in agriculture did result in more production and higher sales of food. However, because consumers’ demand for agricultural output is price inelastic—consumers spend less of their budget on agriculture when it becomes cheaper—the “factor-substitution effect” dominated the scale effect on the demand for agricultural labor.10 If demand for a good is price elastic—meaning that consumers spend more of their budget on the good when prices fall—then cost-reducing technology might raise that sector’s shares of employment and GDP. Consider the recent history of taxi dispatchers, who take calls from individuals desiring a ride and direct a driver to the pickup point. About a decade ago, companies discovered how to use a smartphone to perform the tasks of the dispatcher, and these companies famously distributed such an app to millions of smartphone users. The result was a dramatic increase in the number of people working in the transportation industry, broadly understood to include drivers for Uber, Lyft, and other ride-sharing platforms. By observing what happened to overall employment in the industry (which provides rides for passengers, and which now includes ride sharing in addition to traditional taxis), we can see that it had price-elastic demand. The cost reductions associated with the new technology increased the number of rides even more than they increased the number of humans giving rides. Although there is some difficulty in measuring participants in the sharing economy in ways that are directly comparable with traditional taxi employment, there is emerging evidence of its expansion.

10 The decomposition of labor demand into scale and factor-substitution effects is usually attributed to Alfred Marshall (1890) and John Hicks (1932).
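The interplay of the scale and factor-substitution effects described above can be sketched numerically. The following is an illustrative calculation, not the Report’s own model: it assumes a Hicks-neutral productivity gain that lowers price one-for-one with unit cost, and a constant-elasticity demand curve; all numbers are hypothetical.

```python
def employment_change(productivity_gain, demand_elasticity):
    """Relative change in industry employment after a productivity gain.

    Labor required per unit of output falls by the factor (1 + g), while
    the quantity demanded rises by the factor (1 + g)**elasticity because
    price falls one-for-one with unit cost. Employment scales by the ratio
    of the two forces.
    """
    g = productivity_gain
    scale = (1 + g) ** demand_elasticity  # scale effect: more output is sold
    substitution = 1 / (1 + g)            # substitution effect: fewer workers per unit
    return scale * substitution - 1

# Inelastic demand (as in agriculture): industry employment falls.
print(employment_change(0.50, 0.4))   # negative

# Elastic demand (as with rides after app-based dispatch): employment rises.
print(employment_change(0.50, 1.8))   # positive
```

With unit elasticity the two effects exactly offset, which is why the sign of the employment response hinges on whether demand is price elastic or inelastic.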
For example, JPMorgan Chase (2018) found that the share of families generating earnings on transportation platforms over the course of a year increased to 2.4 percent of the labor force in March 2018, after the inception of ride sharing in about 2010 (figure 7-3).11 A large part of the increase came from the introduction of 460,000 driver-partners in just three years under the Uber platform alone (Hall and Krueger 2018). Increasing empirical evidence suggests that these ride-sharing applications not only have provided significant flexibility for drivers (Chen et al., forthcoming; Koustas 2019) but also have generated social welfare benefits for those who are not platform participants (Cohen et al. 2016; Makridis and Paik 2018). These ride-sharing applications are an early, pre–autonomous vehicle (AV) manifestation of transportation as a service. Whereas transportation has traditionally been about assets (i.e., vehicle ownership), it may increasingly move toward services as more AVs enter the market. For example, even though PricewaterhouseCoopers (PwC) estimates that the transportation sector may require 138 million fewer cars in Europe and the United States by 2030 (PwC 2018a), it also estimates that the market for shared, on-demand vehicles may grow to $1.4 trillion by 2030, compared with $87 billion in 2017 (PwC 2018b). Though predicting the growth in the AV market is outside the scope of this Report, the emerging patterns in ride sharing and AVs are illustrative examples of the impact of technological change.

When Will We See the Effects of AI on the Economy?

Some economists have noted a puzzling productivity paradox in the historical and ongoing patterns described above.
Although most researchers agree that the recent advances in AI and automation promise production possibilities that are even greater than the initial emergence of the digital economy (Brynjolfsson and McAfee 2014), the growth of labor productivity, at least in the way it has traditionally been measured, has been surprisingly sluggish.12 For example, in contrast to the 2.8 percent annual growth in aggregate labor productivity seen in the United States between 1995 and 2004, its annual growth between 2005 and 2015 was only 1.3 percent (Syverson 2017). This pattern is consistent with growth across other economies; Syverson (2017) found that the annual growth rate in labor productivity was 2.3 percent between 1995 and 2004 in 29 sampled countries, but fell to 1.1 percent between 2005 and 2015. If technological change and the adoption of AI have been especially rapid during the past decade, what can account for the slower growth of labor productivity?

11 The National Academies (2017) also cite estimates pointing toward growth from 10 to 16 percent in alternative work arrangements between 2005 and 2015. According to Katz and Krueger (2018), who did a survey in November 2015, 0.5 percent of workers report working through an online intermediary. Though there is debate about the measurement of alternative work arrangements, a recent assessment by Katz and Krueger (2019) concludes that, despite the only modest increase in these arrangements obtained from the 2005 and 2017 Contingent Work Surveys in the Current Population Survey, this survey’s data are likely underestimates.

[Figure 7-3. Share of Respondents Reporting Income from Ride-Sharing Platforms in the Past Year, 2013–18 (percent); the share rises from near zero in 2013 to about 2.4 percent in March 2018. Source: JPMorgan Chase (2018).]
One possibility is that the productivity effects of technology may have been oversold (Gordon 2000) and that the period of rapid growth during the Information Age was a temporary aberration in a long-run trend toward slower technology-related productivity growth (Gordon 2018). However, Oliner and Sichel (2000) show, using a multisector neoclassical growth model with both IT and non-IT capital, that the increase in IT and corresponding efficiency gains account for two-thirds of the increase in labor productivity for the nonfarm business sector over the 1990s.13 Moreover, Byrne, Oliner, and Sichel (2013) apply the same framework to more recent data, between 2004 and 2012, suggesting that there is no inconsistency with theory. Jorgenson and Stiroh (2000) also obtain slightly lower contributions to growth from computer hardware because they use a broader definition of output. Yet another related explanation is that the expansion of credit in the early 2000s led to a misallocation of investment into less productive sectors, creating a drag on growth (Borio et al. 2016). However, productivity has recently ticked up (e.g., see chapter 10 of this Report). Therefore, secular stagnation and the misallocation of investment do not appear to be viable explanations. Another possibility is that our official estimates of growth and productivity fail to capture many of the recent gains from technological advancement. Many of today’s new technologies involve little or no direct cost to consumers, but give them great utility. These developments include, for example, Internet social networks, information search capabilities, and downloadable media.

12 As the Nobel laureate Robert Solow famously said, “You can see the computer age everywhere but in the productivity statistics.”
A quick Internet search today can yield information that, a few generations ago, would have required a team of individuals searching a university library—yet such benefits are not captured in our measurement of GDP. Though these benefits are clearly important factors behind consumer welfare (Brynjolfsson, Eggers, and Gannamaneni 2018), mismeasurement between 2005 and 2015 would need to be unrealistically high to account for the sluggish GDP growth, relative to the overall trend (Syverson 2017). Perhaps the strongest argument for why productivity statistics in recent history have not shown the expected benefits from the new technologies is that, for practical reasons, there have so far simply been lags between the widespread implementation of AI and ML and its measured productivity effects. The theoretical genesis of this argument is an insight from Paul David (1990). Much as the dynamo and the computer were fundamental components of a broader technological infrastructure, AI is a similar general purpose technology. Although these discoveries often have immediate effects on productivity, their full impact is not realized until all the complementary investments are made, creating a lag between adoption and measured gains. Brynjolfsson, Rock, and Syverson (2017) apply this logic to AI, reconciling the productivity paradox. Under their preferred interpretation of the data, we are simply awaiting the results of a necessary trial-and-error process, and the productivity benefits will eventually be realized.

13 An integral part of the efficiency gains among producers of computer equipment is the rapid decline in effective prices of semiconductors due to advances in chip technology (Triplett 1996).
Byrne, Oliner, and Sichel (2017) find that semiconductor prices measured with a hedonic index fell at an estimated annual rate of 42 percent between 2009 and 2013, much faster than the 6 percent decline experienced by the microprocessor producer price index series, a broader measure that subsumes semiconductors.

Cybersecurity Risks of Increased Reliance on Computer Technology

Although technological advances and the emergence of AI have the potential to raise productivity and economic growth, the widespread reliance on technology also exposes the economy to new threats of malicious cyber activity. Cyber threat actors may be nation-states, cyber terrorists, organized criminal groups, “hacktivists” (individuals or collectives that aim to advance their social agenda through cyber interference), or simply disgruntled individuals. These threats transcend the typical boundaries of conflict, which have been analyzed through the lens of land, sea, and air. However, the emergence of the “Internet of Things” implies that anything connected to the Internet is vulnerable to malicious cyber intrusions, introducing threat vectors throughout the Internet ecosystem (Hoffman 2009). Malicious cyber activity imposes costs on the U.S. economy through the theft of intellectual property and personally identifiable information, denial-of-service attacks, data and equipment destruction, and ransomware attacks. The CEA estimated this cost to be as high as $109 billion in 2016 (CEA 2018b). Most innovations, however, lead to little-understood risks, whether for new drugs or computer technologies. This section describes our current assessment of the scope of cyber vulnerabilities, how they vary by industry, and the factors that may exacerbate failures to adopt cybersecurity best practices.
Assessing the Scope of the Cyber Threat

The 2018 Economic Report of the President (CEA 2018b) estimated the 2016 costs of malicious cyber activity by adding up the costs experienced by the private sector, the public sector, and private individuals. It estimated the costs to the private sector using event-study methodology, whereby it quantified the loss of firm value as a result of an adverse cyber event. The estimate further took into account the spillover effect of these costs to economically linked firms. On the basis of a sample of cyber incidents occurring between January 2000 and January 2017, the Report estimated that the total economic cost for 2016 ranged between $57 billion and $109 billion. Although these event studies provide an important starting point for evaluating the costs of cybersecurity incidents, they presuppose that the timing of the event was reliably recorded and that investors knew the distribution of new risks induced by the event. However, to give just one example, when the largest recorded data breach, according to the Privacy Rights Clearinghouse, occurred in late 2013, it was not reported until September 2016 (Lee 2016). Delays between the time when an incident takes place and the time it is reported are a function not only of a firm’s ability to identify the incident but also of varying State laws that mandate disclosure (Bisogni 2016).14 The affected firm’s own estimate of the damage caused by the 2013 breach has been updated and increased on several occasions, illustrating how difficult it can be to accurately calculate the cost. Moreover, data on the number of records or systems that have been breached often contain significant measurement error and sampling variability.
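The event-study methodology referenced above can be sketched in a few lines: fit a market model of the firm’s return on the market return over a pre-event estimation window, then cumulate the abnormal (actual minus predicted) returns over the event window. This is a minimal illustration, not the CEA’s implementation, and the daily return series below are made-up numbers.

```python
def ols(x, y):
    """Intercept (alpha) and slope (beta) of y on x by ordinary least squares."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))
    return my - beta * mx, beta

def cumulative_abnormal_return(firm_est, mkt_est, firm_evt, mkt_evt):
    """Cumulative abnormal return (CAR) over the event window."""
    alpha, beta = ols(mkt_est, firm_est)  # market-model fit on the estimation window
    # abnormal return = actual return minus the model's predicted return
    ars = [r - (alpha + beta * m) for r, m in zip(firm_evt, mkt_evt)]
    return sum(ars)

# Hypothetical daily returns: an 8-day estimation window, then a 3-day event window
# surrounding a breach disclosure.
firm_est = [0.010, -0.005, 0.012, 0.002, -0.008, 0.011, 0.000, 0.004]
mkt_est  = [0.008, -0.004, 0.010, 0.001, -0.006, 0.009, 0.001, 0.003]
car = cumulative_abnormal_return(firm_est, mkt_est,
                                 [-0.030, -0.010, 0.005],   # firm, event window
                                 [0.002, 0.001, 0.003])     # market, event window
print(round(car, 4))  # a negative CAR indicates lost firm value around the event
```

As the text notes, the approach assumes the event date is reliably recorded; misdated or long-delayed disclosures shift the event window and can understate the measured loss.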
In addition to reporting discrepancies across States, there are also discrepancies across sectors. Makridis and Dean (2018) study these sector discrepancies using data from the Privacy Rights Clearinghouse and the Department of Health and Human Services to investigate the relationship between recorded breaches and firm outcomes. Though they find some evidence of a negative association between productivity and record breaches in the Health and Human Services data, where healthcare companies face greater disclosure requirements, they do not find such evidence in the data from the Privacy Rights Clearinghouse covering all sectors. Publicly traded companies, under requirements from the Securities and Exchange Commission (SEC 2011), must provide timely and ongoing information in their periodic reports about material cybersecurity risks and incidents that trigger disclosure obligations. Beyond the Federal securities laws, other reporting standards in specific sectors, like the Health Insurance Portability and Accountability Act, may result in disclosures of other data breaches that are not material. Since 2009, the National Cybersecurity and Communications Integration Center (NCCIC) of the Department of Homeland Security (DHS) has served as the Nation’s flagship cyber defense, incident response, and operational integration center. The NCCIC serves as the national hub for cyber and communications information, technical expertise, and operational integration, operating a 24/7 watch floor tasked with providing situational awareness, analysis, and incident response capabilities to the Federal government; private sector stakeholders; and State, local, tribal, and territorial partners. Through this process, DHS has been collecting robust data on the types of incidents that are having an impact on the Nation.
Furthermore, the Federal Bureau of Investigation (FBI) also maintains CyWatch, a 24/7 command center for cyber intrusion prevention and response operations based on consensual monitoring and third parties that report to the FBI. CyWatch monitors must notify companies whose network security has been breached (34 U.S.C. § 20141 creates an obligation for Federal law enforcement agencies to notify victims of a crime). After notification, CyWatch shares information with its partner law enforcement agencies—including the Department of Defense, DHS, and the National Security Agency—to improve preparedness, support the attribution of attacks, and guide appropriate responses.15 Despite the serious limitations associated with data from the Privacy Rights Clearinghouse, they nonetheless provide a time series proxy for the increased frequency of data breaches since 2005; see figure 7-4. Although there is an upward trend in cyber breaches between 2005 and 2018, these data largely understate the number of data breaches (Bisogni, Asghari, and Van Eeten 2017; ITRC 2019). The Internet Crime Complaint Center, a partnership between the FBI and the National White Collar Crime Center, gives victims of cybercrime an accessible reporting mechanism for alerting the authorities about suspected criminal or civil violations. Although not directly comparable, the 2017 “Internet Crime Report” announced a total of 301,580 complaints of cyber breaches in 2017.

14 Using data from the Privacy Rights Clearinghouse, Bisogni, Asghari, and Van Eeten (2017) estimate that adoption of the “inform credit agency” and the “notification publication by informed attorneys general” State provisions would increase the number of publicly reported cybersecurity breaches by at least 46 percent.

[Figure 7-4. Cybersecurity Breaches That Were Made Public, 2005–18 (number of breaches per year, 0–1,000). Sources: Privacy Rights Clearinghouse; CEA calculations.]
Even though these complaints represent a broader range of potential Internet crimes, the number far exceeds the 863 publicly reported incidents. Recommending possible solutions for these cyber vulnerabilities requires an accurate understanding of their sources. We suggest that there are at least two underlying drivers behind the above-mentioned empirical regularities. First, organizations could lack informational awareness. Much like in the quantitative management science literature on the adoption of best practices in business (Bloom et al. 2013), many organizations might simply not be aware of basic cyber hygiene practices. Second, the executives of organizations could face incomplete incentives to promote cybersecurity practices. If, for example, financial metrics are easier to measure than cybersecurity, then managers might allocate too little effort to cybersecurity because of a “multitasking problem” (Holmstrom and Milgrom 1991). Particularly because cybersecurity breaches generate network externalities, the private sector could underinvest in cybersecurity (Gordon et al. 2015). Our preceding evidence on the lack of many basic cybersecurity practices among the most profitable companies in the U.S. economy suggests that a lack of information awareness and a lack of resources are unlikely to be the primary culprits behind existing vulnerabilities. Moreover, the “Cybersecurity Framework” of the National Institute of Standards and Technology (NIST 2014), which details best practices, is publicly available and has been disseminated through many channels.

15 Though exact attribution in cyberspace is possible, it requires not only technical expertise but also leadership and information sharing and coordination across the layers of an organization (Rid and Buchanan 2015).
These facts suggest that the alternative culprit could be incomplete incentives arising from agency problems within organizations that lead managers to overlook cyber hygiene. Information sharing and dissemination of best practices must remain a priority, particularly for small businesses that are more likely to lack the resources or infrastructure to search out and implement best practices. In particular, information needs to be publicly available, transparent, and shared to disseminate best practices and call attention to dangerous practices. For example, Gal-Or and Ghose (2005) show that industry-based information sharing and analysis centers can lead to improvements in social welfare, but the degree of competition in the marketplace is an important moderating factor that determines whether a firm participates. In particular, unless firms in an industry understand the downside associated with their vulnerability to cyberattacks, they may not realize the gains that can come from collaboration through information sharing. Many security operations companies also provide a source of market discipline by promoting transparency and information vis-à-vis cyber vulnerabilities (such organizations that raise firms’ awareness of cybersecurity flaws are often referred to as “white hat hackers”). Conversely, a survey by Malwarebytes (2018) suggests that roughly 1 in 10 U.S. security professionals admit to considering participating in “black hat hacker” activity, which involves exploiting discovered cybersecurity vulnerabilities for financial gain. Roughly 50 percent of security professionals say they have known or know someone involved in black hat hacking activities.

Potential Vulnerabilities by Industry

The prevalence of cyber threats suggests that firms are relatively unprepared to protect themselves.
Indeed, according to Hiscox (2018a), in 2017 nearly three-quarters of organizations based in the United Kingdom, the United States, Germany, Spain, and the Netherlands failed basic cyber readiness tests. Even though the United States ranks higher than most countries in cyber readiness (Makridis and Smeets 2018), its preparedness is still poor enough to concern policymakers studying the impact of cyber insecurity on the U.S. economy. To better understand these cybersecurity risks at a more granular level, Rapid7, an Internet security firm whose business model involves collecting publicly observable data on the cybersecurity practices of any firm with an Internet presence, shared its 2018 data for Fortune 500 companies with the CEA. Using public data and a proprietary methodology, Rapid7 matches uniquely identified Internet protocol addresses of Internet-connected devices to a specific firm. Though firms may decline the security scan, only 4 percent of Fortune 500 firms opt out. These data show that the majority of Fortune 500 companies fail to take even the most basic security measures and are thus vulnerable to cyberattacks. And though there are many metrics for gauging vulnerabilities, we focus here on an important and transparent one: whether email has been configured for protection against spam. Given the frequency of phishing email attacks, which are the most common method used by malicious cyber actors to penetrate network security, configuring a secure email network is one of the first lines of defense. One metric for email security is whether the organization has adopted the Domain-Based Message Authentication, Reporting & Conformance (DMARC) protocol. Although it is not a panacea for all types of phishing attacks, DMARC allows senders and receivers to authenticate whether a message is legitimately from a sender.
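Concretely, a domain publishes its DMARC policy as a DNS TXT record of semicolon-separated tag=value pairs (the tags `v`, `p`, `rua`, and `pct` are defined in the DMARC specification, RFC 7489). The sketch below parses such a record; the record string itself is a made-up example, not any specific firm’s configuration.

```python
def parse_dmarc(txt_record):
    """Split a DMARC DNS TXT record into its tag=value pairs."""
    tags = {}
    for part in txt_record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")  # split on the first "=" only
            tags[key.strip()] = value.strip()
    return tags

# Hypothetical record, as it might appear at _dmarc.example.com:
record = "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com; pct=100"
policy = parse_dmarc(record)
print(policy["p"])  # "reject": ask receivers to refuse mail that fails checks
```

The `p` (policy) tag is what the text refers to later as “fully rejects” versus “no rejections”: `p=reject` instructs receiving servers to refuse failing messages, `p=quarantine` routes them to spam, and `p=none` merely monitors.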
Adopting DMARC for email makes it easier for organizations not only to identify spam and phishing messages but also to keep them out of employees’ inboxes, thereby reducing the probability that an employee accidentally clicks on a malicious link. Moreover, properly configured DMARC records are able to actively quarantine or reject threatening emails by allowing the message’s sender to signal to the recipient that the message is protected by the Sender Policy Framework and/or DomainKeys Identified Mail. We note, however, that DMARC is only one metric out of many and that having it does not guarantee cyber safety. Figure 7-5 reports the percentage of all Fortune 500 firms without a DMARC email configuration, together with value added, across industrial sectors. This figure illustrates significant exposure across industries, ranging from 40 percent of firms in business services to 93 percent of those in chemicals that are not implementing the DMARC protocol. Moreover, although we do not interpret the relationship between value added and a lack of DMARC as causal, the data suggest that a 10-percentage-point increase in the share of firms without DMARC in a sector is associated with $345 billion less in value added in that sector (in 2017 dollars). This suggests that greater adoption of DMARC could avoid breaches and phishing scams. Given that the combined market value of the Fortune 500 firms is over $21 trillion, these results suggest that much of this value may be exposed to cyber thefts of intellectual property, various destructive and ransomware attacks, and the destruction of reputational capital. Moreover, as outlined in the 2018 Economic Report of the President, an attack on entities—especially large, publicly traded Fortune 500 firms that are part of the Nation’s critical infrastructure—could have effects throughout the U.S. economy, affecting other firms in the supply chain and individual customers.
Given the limited preparedness among Fortune 500 companies—manifested by not only the failure to adopt DMARC but also a range of other cyber vulnerabilities detailed by Rapid7 (2018)—an additional concern is that smaller firms may have even less robust cybersecurity measures in place (Hiscox 2018b). The Federal government continues to modernize its cyber practices. OMB and DHS worked together to transform the Trusted Internet Connection (TIC) policies and processes so that Federal departments and agencies can take advantage of common and advanced cloud computing capabilities to meet their requirements. AI is not specifically identified in the policy updates, but departments and agencies are now able to use outside expertise in the cloud, which can include using AI and other methods, while continuously meeting appropriate cybersecurity and privacy controls. In alignment with the action steps identified in the Report to the President on Federal IT Modernization (American Technology Council 2017), those cooperating in the interagency effort continue to identify any real or perceived policy limitations by working through real-world use cases that support their current and future needs. This continuous approach is instrumental for realizing the value of AI and other methods that best meet national needs. The Federal government is better prepared than the private sector to protect against phishing attacks, which are a primary method for hackers to gain access to enterprises, because of the 2017 Binding Operational Directive 18-01, which introduced requirements for agencies to enhance email and web security. Using data from the 2018 Federal “Cyber Exposure Scorecard,” figure 7-6 plots the number of government agencies with various email configurations. In the figure, “fully rejects” means that an organization has properly configured its email, whereas “no rejections” means that it is vulnerable to an attack.
Government agencies’ use of the DMARC email configuration is 47.9 percent, which is better than the average of 26 percent in the private sector. Moreover, of the 1,018 Federal second-level “dot-gov” domains, 86 percent have a valid DMARC record with a policy of “reject.” Though adoption of DMARC is only one of many indicators of cyber hygiene, and was linked to the implementation of Binding Operational Directive 18-01 across Federal agencies, these results nonetheless suggest that Federal cyber best practices could set an example for the private sector.16

[Figure 7-5. Industries That Are Most Lacking the DMARC Protocol Among Fortune 500 Companies by Value Added, 2017. A scatter of value added by industry (billions, 2017 dollars) against the share of the industry lacking the DMARC protocol (ranging from about 30 to 90 percent), with financials showing the highest value added. Points are scaled by industry employment in 2017, and only the top 10 sectors (ranked by employment) are plotted. Sources: Rapid7; Bureau of Labor Statistics; Bureau of Economic Analysis; CEA calculations. Note: DMARC = Domain-Based Message Authentication, Reporting & Conformance, an email validation system designed to detect and prevent the use of forged sender addresses for phishing and email-based malware.]

[Figure 7-6. DMARC Protocol Use Across Government Agencies, 2018 (number of agencies): fully reject, 47; some rejection, 30; no rejections, 21. Source: Office of Management and Budget.]

The Role of Policy

This section discusses the longer-run policy implications of both AI advancement and cybersecurity issues, and details the Trump Administration’s current policies in these areas. The discussion highlights the Administration’s priorities for AI readiness and implementation, reskilling, and cybersecurity initiatives to contend with the changing nature of work and emerging technological threats.
Policy Considerations as AI Advances: Preparing for a Reskilling Challenge

As discussed earlier in this chapter, economists agree that technological change resulting from AI will affect the structure of the demand for labor in the years to come (Brynjolfsson and McAfee 2014; Agrawal, Gans, and Goldfarb 2018). One potential challenge that policymakers could face as AI advances is an increase in the number of workers who need new skills to find work in a changed labor market. Reskilling efforts, both for workers whose jobs have been displaced by technology and for those who need new skills to operate new technologies, could become more urgent as the demand for labor enters a new phase of its decades-long evolution. For example, the World Economic Forum (2018) found in a sample of firms that at least 54 percent of all employees will require significant reskilling and/or upskilling by 2022. In 2016, the Obama Administration’s Council of Economic Advisers examined the economics of AI, including its possible effects on jobs in the future, predicting that “2.2 to 3.1 million existing part- and full-time U.S. jobs may be threatened or substantially altered” by AI. In addition, it predicted that roughly 364,000 self-employed “drivers” (ride-sharing workers) would be at risk from a shift toward the use of autonomous vehicles, as of May 2015 estimates (CEA 2016, 15). However, it also concluded that other workers could see a rise in productivity and increasing demand for certain skills. It identified four areas that could see a rise in labor demand: (1) engaging with AI to complete tasks, (2) developing new AI tools, (3) supervising and maintaining AI tools to ensure they are achieving the desired aims, and (4) responding to paradigm shifts where entirely new approaches are needed (CEA 2016).
Because the jobs most vulnerable to automation are concentrated among lower-paid, less-educated workers, reskilling programs could play an important role in helping avert further wage polarization and reallocating skills to where they are most needed. The CEA (2016) made three primary recommendations: (1) investing in and developing AI for its many benefits in both the public and private sectors, (2) educating and training workers so they are prepared for the jobs of the future, and (3) helping workers transition across jobs to ensure shared gains from technological change. More recently, in discussing how automation may interact with the economy and workforce, the CEA (2018a) has referred to an observation made in a report by the National Academies (2017, 140) that the continued advance of information technology implies that “workers will require skills that increasingly emphasize creativity, adaptability, and interpersonal skills over routine information processing and manual tasks.” This report also reiterates findings by the Organization for Economic Cooperation and Development (OECD 2018), among others, that workers who have not obtained a college degree are most at risk for displacement by automation. Similarly, despite the declining college and cognitive skills premium—as documented by Beaudry, Green, and Sand (2016); Valletta (2016); and Gallipoli and Makridis (2018)—individuals in occupations that involve greater IT-based tasks have continued experiencing rising wage premiums. All these pieces of empirical evidence point to the need for digital skills in the emerging labor market.

16 Although it is also possible that the Federal government does not perform as well in other dimensions, the data from Rapid7 (2018) suggest that the sample of Fortune 500 companies also is exposed in other important dimensions of basic cybersecurity practices.
Policymakers may also need to address the concern that job losses from automation could disproportionately affect those who are least able to afford the tuition costs of reskilling programs up front, and those who are least likely to be able to sustain a forfeiture of labor income for the duration of the reskilling period. Gallipoli and Makridis (2018) find that individuals in jobs that tend to require more routine and manual skills are especially exposed to the growing demand for IT-based tasks. Another factor to consider in future policymaking is the unpredictable nature of disruption on the workforce. In determining federally funded programs to address displaced workers, the CEA (2018a, 21) cautions against programs targeting specific industries, instead suggesting that “keeping programs as flexible as possible reduces the need for continual re-optimization and increases the return on Federal dollars spent.” In addition to studying reskilling challenges, the Trump Administration has also established the President’s National Council for the American Worker to develop and implement a strategy aimed at expanding educational attainment, training, and nontraditional degree programs that will prepare workers for the emergence of automation and AI (White House 2018a). Chapter 3 of this Report discusses the reskilling challenge in detail, including job openings rates by industry.

The opportunity for reskilling is perhaps greatest in the field of cybersecurity, where there is a shortage of skilled workers (Burning Glass 2018). Figure 7-7, for example, uses 2018 data from CyberSeek (2018)—a partnership between Burning Glass Technologies, the Computing Technology Industry Association, and the National Initiative for Cybersecurity Education—to
[Figure 7-7. Supply-and-Demand Ratio for Cybersecurity Jobs, 2018. Legend ranges: 1.5–2.0; 2.1–2.5; 2.6–3.0; 3.1–3.5; 3.6–4.0. Source: CyberSeek (2018). Figure not reproduced here.]

characterize the ratio of supply and demand for cybersecurity workers across locations (e.g., States). Although no State has a ratio less than 1, the vast cross-sectional heterogeneity highlights how different State labor markets face very different intensities of shortage (e.g., the District of Columbia has a value of 1.4, vs. Kentucky, which has a value of 3.2). To put these numbers in perspective, a value of 2 means that half of a State’s existing cybersecurity workforce would need to change jobs every year to meet new postings, underscoring the amount of turnover that would be required to close the skills gap.

The Administration’s Policies to Promote Cybersecurity

It is essential that the Federal government and the private sector promote cyber best practices and cyber hygiene. For example, as discussed above, many Federal agencies have properly configured their email systems with DMARC. DHS’s National Cybersecurity Assessments and Technical Services team determined that 71 of the 96 Federal agencies surveyed have cybersecurity programs that are either at risk or at high risk, for at least four reasons, according to OMB (2018a); in the next paragraph, we summarize these factors from the “Federal Cybersecurity Risk Determination Report and Action Plan” (White House 2018a).

Government agencies, along with the private sector, are not always aware of the situational context and/or the resources that exist to tackle the current threat environment. For example, 38 percent of the Federal cyber incidents that were reported in 2018 did not specifically identify an attack vector. Organizations continue to adopt best practices, but there can be challenges with implementation.
For example, only 49 percent of agencies have the ability to detect whitelisted software running on their systems.17 Moreover, the lack of network visibility means that agencies may be unable to detect data exfiltration. For example, only 27 percent of agencies report that they have the ability to detect and investigate attempts to access large volumes of data. Finally, the lack of organizational and managerial policies surrounding the ownership of cybersecurity risk results in chief information officers or chief information security officers who lack the authority to make the relevant organization-wide decisions but are nonetheless charged with the responsibility of maintaining network security. For example, only 16 percent of agencies achieved the government-wide target for encrypting inactive data.

These challenges are only going to grow, given the proliferation of data and the increasing use of machine learning. Countries and malicious actors may turn toward counter-AI operations that attempt to alter and/or manipulate data (Weinbaum and Shanahan 2018). Individuals throughout the Federal civilian government, the Department of Defense, the intelligence community, and the private sector will need to evolve to meet expectations for identifying, protecting against, detecting, responding to, and recovering from threats in a timely manner. The Trump Administration—particularly through OMB, in partnership with the Department of Homeland Security, NIST, and the General Services Administration—is working to actively address these shortcomings. For example, the update to the TIC initiative is only one component of a broader effort by the Federal Chief Information Security Officer Council to obtain and test use cases, particularly from the private sector (OMB 2018c). Moreover, as discussed in box 7-1, DARPA is developing new AI capabilities that help national security personnel more rapidly and reliably identify and address cybersecurity threats.
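The application white-listing practice discussed above—detecting software that is not on an approved list—can be illustrated with a minimal sketch. The process names and allow-list below are purely hypothetical; real implementations follow benchmarks such as those described by Sedgewick, Souppaya, and Scarfone (2015).

```python
# Minimal sketch of an application allow-list check: flag any running
# process that is not on the approved list. All names are hypothetical.

def find_unauthorized(running_processes, allowlist):
    """Return the sorted list of running processes absent from the allow-list."""
    return sorted(set(running_processes) - set(allowlist))

# Hypothetical example
allowlist = {"sshd", "postgres", "audit-agent"}
running = ["sshd", "postgres", "cryptominer", "audit-agent"]

print(find_unauthorized(running, allowlist))  # ['cryptominer']
```

In practice the allow-list is defined by a vetted baseline, and the comparison runs continuously against live process inventories rather than a static list.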
The Administration’s Policies to Maintain American Leadership in Artificial Intelligence

The Trump Administration’s AI agenda prioritizes advancing U.S. leadership in AI as well as helping the Nation’s workforce adapt to the changes that are coming. As evidenced in the Administration’s 2017 and 2018 budget priorities memoranda and highlighted at the White House AI summit in May 2018, the Administration continues to prioritize research-and-development funding for AI research and computing infrastructure, machine learning, and autonomous systems (OSTP 2018). To complement these active financial investments, the Administration also chartered the Select Committee on Artificial Intelligence under the National Science and Technology Council. This committee advises the White House on interagency research-and-development priorities and works to foster collaboration between the private sector and academia, to identify opportunities to leverage Federal data and computational resources, and to improve the efficiency of government planning and coordination. The recent Executive Order on “Maintaining American Leadership in Artificial Intelligence” has formalized these commitments by calling for increased prioritization of investments, engagement in the development of standards, and training and workforce development initiatives (White House 2019).

Second, the Administration has implemented policies that are conducive to more rapid economic growth and innovation by removing regulatory barriers, including those on the deployment of AI-powered technologies.

17 An application white list refers to a set of applications that are authorized to be present according to a well-defined benchmark (Sedgewick, Souppaya, and Scarfone 2015).
In September 2017, the Department of Transportation released an update of the 2016 Federal Automated Vehicles Policy, providing nonregulatory guidance for AV developers; this guidance was further updated in October 2018 to provide a framework and multimodal approach to the safe integration of AVs into the surface transportation system. Similarly, the Administration is developing new rules in compliance with Space Policy Directive–2 to streamline the licensing process for commercial space enterprises (White House 2018d). The Administration is also taking steps internationally to ensure that there is a level playing field for AI technologies. For example, at the World Trade Organization, and in trade agreements like the United States–Mexico–Canada Agreement, the Administration is protecting U.S. intellectual property and limiting the ability of foreign governments to require disclosure of proprietary computer source code and algorithms. These actions will better protect the competitiveness of our digital suppliers and promote access to government-generated public data, enhancing its innovative use in commercial applications and services (USTR 2018).

Third, the Administration has begun integrating advances in AI and related technologies to improve the delivery of government services to the American people. The President’s Management Agenda calls for the use of automation software to improve the efficiency of government services and maximize the applications of Federal data to help evaluate and modify Federal programs (OMB 2018b).
In addition, in April 2017, the Department of Energy (DOE) and the Department of Veterans Affairs launched the Million Veteran Program Computational Health Analytics for Medical Precision to Improve Outcomes Now—known as CHAMPION—which uses high-performance computing infrastructure in the DOE National Laboratories to analyze large quantities of data and make recommendations that focus on suicide prevention and enhanced predictions and diagnoses of diseases (DOE 2017).

Recognizing that AI holds promise not only for greater economic opportunity but also for national security aims, the Trump Administration has directed considerable resources and leadership into targeted strategic investments, particularly at the nexus of AI and cybersecurity. One example, as discussed in box 7-1, is the Defense Advanced Research Projects Agency (DARPA 2018c), which is actively investing in a “third wave” of AI technologies to make AI more transparent and accessible for deployment across both the public and private sectors. In particular, these initiatives focus on identifying ways for humans to use AI as a tool for more effectively completing their tasks and maintaining network security. To complement these broad-based research-and-development funding priorities, the Administration signed a memorandum directing “Secretary of Education DeVos to place high quality STEM [science, technology, engineering, and mathematics] education, particularly Computer Science, at the forefront of the Department of Education’s priorities” (White House 2017b). The Department of Education is working to devote over $200 million a year in grant funds toward these STEM and computer science activities, in addition to exploring other administrative actions that will advance computer science in K–12 and postsecondary institutions.
Moreover, box 7-3 describes the emerging National Cyber Education Program, a prime example of an initiative focused on increasing the supply of STEM talent, specifically for the cybersecurity field.

The Administration’s Implementation of the National Cyber Strategy

In addition to the National Security Strategy (White House 2017a), the Administration has also developed the comprehensive 2018 National Cyber Strategy, the first of its kind in over 15 years, to address the cybersecurity challenges of the coming decades (White House 2018b). This strategy’s four overarching goals mirror the pillars of the 2017 National Security Strategy; we paraphrase and synthesize these four objectives here, together with their priority areas.

The first objective is protecting the American people, the Homeland, and the American way of life. To do this, the Administration is securing Federal networks and information, securing critical infrastructure, and combating cybercrime and improving incident reporting. Three priorities associated with this objective involve improving risk management and incident reporting practices, modernizing Federal technology and security systems, and streamlining processes, roles, and responsibilities.

The second objective is promoting American prosperity. To accomplish this, the Administration is fostering a vibrant and resilient digital economy, encouraging and protecting U.S. ingenuity, and developing a superior cybersecurity workforce. The priorities associated with this objective include promoting an agile and next-generation digital infrastructure, protecting intellectual property, and creating a pipeline and incentive structure that cultivate highly skilled cybersecurity and technology workers.

The third objective is to preserve peace through strength. To do this, the Administration is enhancing cyber stability through norms of responsible

Box 7-3.
Educating the Cyber Workforce of Tomorrow

One of the most commonly cited workforce challenges within both the public and private sectors is the shortage of skilled workers. According to recent estimates from the International Information System Security Certification Consortium—known as ISC²—there is a shortage of 2.9 million cybersecurity employees globally (ISC2 2018). Moreover, numerous survey results suggest that organizations are increasingly likely to report a shortage of cybersecurity skills (Oltsik 2018; Burning Glass 2018). Although there is debate about its magnitude, there is a general recognition that more workers are needed to fill the increasing demand for cybersecurity skills, particularly as the paths by which hackers can gain access to computers and network servers expand in the growing digital economy. A national program that could help cultivate a new generation of cyber professionals prepared to meet the needs of the government, the defense community, and the private sector constitutes an Administration priority for both national security and the economy.

One example of a long-run and scalable solution is the National Cyber Education Program, a joint public–private initiative supported by the Trump Administration that seeks to inspire and educate children in elementary through high school about potential career paths and tools for careers in cybersecurity. This multipart education initiative operates within the NIST Framework, with themes from the National Integrated Cyber Education Research Center at its core and strong support and leadership from a large educational services firm that serves 30 million K–12 students and 3 million teachers through its online education platform. This initiative includes these features:
1. Core curricular cyber content for grades K–12.
2.
Virtual professional development for improving skills among STEM and cybersecurity educators to deliver content effectively and across disciplines.
3. Transformative learning tools and curricula for students to promote both technical content and real-world applications.
4. A career portal for connecting students with cybersecurity opportunities in government and the private sector, as well as regional conferences that provide access to counselors, educators, and industry professionals.
5. Tools for cybersecurity industry partners to engage their local communities, particularly schools, through volunteerism and mentorship.
The National Cyber Education Program has an estimated total budget of $20 to $25 million, which will be provided by a combination of committed private sponsors.

behavior and attributing and deterring unacceptable behavior in cyberspace. A priority related to this objective is countering malign cyber influence with information operations and better intelligence.

The fourth objective is to advance American influence. To accomplish this, the Administration is promoting an open, interoperable, reliable, and secure Internet and building international cyber capacity. Two priorities related to this objective include developing partnerships across the public and private sectors to promote innovation and cutting-edge technologies and promoting free and secure markets worldwide. As discussed in box 7-3, the National Cyber Education Program is an example of a public–private initiative that empowers teachers with the resources to improve learning outcomes and career pathways for students, particularly for the emerging cyber workforce.

The Trump Administration is advancing these four objectives through a combination of short- and long-run efforts. In the long run, U.S. policymakers seek to prioritize an active and prepared pipeline of technology workers with mastery of information security practices.
In the short run, the United States will continue strengthening network security, especially in critical infrastructure sectors. OMB issued a memorandum in May 2018 detailing the risk assessment process, which builds upon the Federal Information Security Modernization Act of 2014 Chief Information Officer metrics from 2017 and the Inspectors General metrics from 2016 (OMB 2018a). These metrics are based on the NIST Framework for Improving Critical Infrastructure Cybersecurity (NIST 2014), which provides best practices to which both public and private organizations can adhere, and aims to create predictability and encourage the adoption of best practices throughout government. Although no system in today’s geopolitical environment is completely secure, these actions are setting the groundwork for a safe and secure digital infrastructure; see box 7-4 for a discussion of how Estonia became one of the world’s leading countries in digital infrastructure.

Further Artificial Intelligence and Future of Work Policy Considerations

The increasingly rapid pace of technological change and its implications for individuals have motivated several lines of inquiry about the role of government.18 First, some have suggested the provision of a universal basic income as part of the social safety net.

18 We do not, however, discuss in depth the concerns about AI reaching a point of singularity, or general intelligence, whereby algorithms can create new ideas on their own without human assistance. Though the concept of singularity and the prospect of accelerated knowledge creation could lead to a large gain in productivity (Nordhaus 2015), an alternative scenario is one where algorithms would begin to dictate decisionmaking over human judgment. These discussions are beyond the scope of this chapter and the bulk of ongoing policy deliberations.

Box 7-4.
Estonia: A Case Study of Modern Cybersecurity Practices

Although residents of Estonia rarely had access to electronic devices or the Internet a few decades ago, the country has become an economic success story and a digital leader in its region. Between 1995 and 2017, its real GDP grew by 141.5 percent (vs. 69.8 percent in the United States). According to the Estonian government, 99 percent of public services were available online as of 2017. Estonia does not use a centralized or master database but rather X-Road—a software platform that links its public and private e-service databases. According to the Estonian government, X-Road saves over 800 years of working time every year, reducing bureaucracy and raising efficiency (Vainsalu 2017). Though Estonia “was, effectively, a disconnected society” in the early 2000s, moving toward a digital economy through the introduction of its X-Road infrastructure has allowed the country to raise productivity and become more secure (Vassil 2015). Consider, for instance, queries involving vehicle registration data. Typically, such a search would require three police officers working for about 20 minutes; but the X-Road software platform eases the retrieval of information, so a single officer can complete the search within a few seconds (Vassil 2015). All of Estonia’s government services, ranging from collecting taxes to maintaining health records for personalized medical services, are made secure and readily accessible with the proper authentication credentials. These technological strides are arguably a major factor behind Estonia’s emergence as one of the top countries for doing business, ranking as having the most competitive tax system in the OECD, according to the Tax Foundation (2014), and as the seventh-most-free economy in the world, according to the Heritage Foundation (2018).
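As a quick check on the growth figures above, cumulative real GDP growth of 141.5 percent over the 22 years from 1995 to 2017 implies an average annual growth rate of roughly 4.1 percent for Estonia, versus about 2.4 percent for the United States:

```python
# Convert cumulative real GDP growth (1995-2017, i.e., 22 years) into
# average annual growth rates, using the percentages cited in the text.

def annualized_rate(cumulative_pct, years):
    """Average annual growth rate (percent) implied by cumulative percent growth."""
    return ((1 + cumulative_pct / 100) ** (1 / years) - 1) * 100

estonia = annualized_rate(141.5, 22)        # Estonia, 1995-2017
united_states = annualized_rate(69.8, 22)   # United States, 1995-2017
print(round(estonia, 1), round(united_states, 1))  # 4.1 2.4
```

The gap of roughly 1.7 percentage points per year, compounded over two decades, accounts for the large difference in cumulative growth.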
Interestingly, the number of queries through X-Road has grown exponentially, which is remarkable because similar digital services, such as data repositories, tend to grow linearly (Vassil 2015). An integral part of Estonia’s success with X-Road has been its data security and privacy features. For example, citizens may use digital signatures, secured with 2,048-bit encryption, to perform daily tasks such as banking and notarizing documents. Public safety has improved because the presence of digital identification cards has shortened response times to 10 seconds or less for 93 percent of emergency calls (Estonia 2018). In fact, as of 2018, the only legal transactions that one could not complete online were marriages, divorces, and real estate transfers. The core of these online activities is a digital signature law, passed in 2000, that created a framework for digital contracting.

Of course, the transition to a digital economy has come with increased targeting by state and nonstate actors. Healthcare, energy, and the public sector face continuous cyberattacks, primarily from malware infections or outdated software. Perhaps Estonia’s largest attack was in 2007; it involved distributed denial-of-service attacks that disabled computer networks, halting communication between the country’s two largest banks and causing reverberations for political parties. After the attack, Estonia established the NATO Cooperative Cyber Defense Center of Excellence in its capital, Tallinn, in addition to founding the Cyber Defense League, which works to counter cyberattacks (Czosseck, Ottis, and Talihärm 2011). These increased security precautions and this institutional infrastructure have helped thwart attacks, including a large attempted attack on the country’s digital identification cards, raising public confidence.
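The digital signatures described above rest on public-key cryptography. A toy RSA sign-and-verify round trip illustrates the underlying idea; real deployments such as Estonia’s use 2,048-bit keys and vetted cryptographic libraries, so the textbook-sized primes below are purely illustrative and insecure:

```python
# Toy RSA signature: sign a message digest with the private key and
# verify with the public key. Textbook-sized primes; insecure by design.
import hashlib

p, q = 61, 53                # toy primes (real keys use 2,048-bit moduli)
n = p * q                    # public modulus
phi = (p - 1) * (q - 1)
e = 17                       # public exponent
d = pow(e, -1, phi)          # private exponent (modular inverse; Python 3.8+)

def sign(message: bytes) -> int:
    """Sign the SHA-256 digest of a message with the private key."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Recover the digest with the public key and compare."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest

msg = b"notarize this document"
sig = sign(msg)
print(verify(msg, sig))                   # True
print(verify(b"tampered document", sig))  # almost surely False
```

Because only the key holder can produce a signature that the public key validates, the scheme supports the legally binding digital contracting that Estonia’s 2000 law enabled.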
The system is highly secure because access to databases via X-Road is gated by a secure identification card using two-factor authentication and end-to-end encryption (Estonia 2018). Estonia has continued to prioritize improving its digital economy, in addition to developing a broader global network in partnership with other countries; see, for example, Estonia’s “Digital Agenda 2020,” which details plans to improve the well-being of its people and public administration through digitization (Estonia 2018).

A universal basic income, proponents argue, would help individuals suffering from job displacement; they contend that the scale of technological change is unlike anything developed countries have experienced in the past and that, therefore, social safety nets must evolve to adapt to the new risks. However, a universal basic income would not only discourage work, especially in light of the existing social safety net (e.g., unemployment insurance and food stamps), but would also undermine the intrinsic value that work plays in creating meaning and purpose in people’s lives (Opportunity America 2018).

Second, given the wide array of applications of AI for national security and warfare, there is an ongoing debate about whether AI should be regulated to prevent an “AI arms race” among countries (Taddeo 2018; Horowitz 2018). Particularly because AI is a general-purpose technology (Agrawal, Gans, and Goldfarb 2018), the dual uses of AI developments mean that they will diffuse rapidly upon entering the private sector. One primary fear, for example, is that AI algorithms could make decisions about troop and/or drone deployments, which would put human lives at risk without the traditional human decisionmaking process. Much like the concerns about autonomous vehicles and passenger safety, some policymakers and researchers are calling for greater guidance on regulating AI when lives are at stake.
Third, although machine learning algorithms have been remarkably successful at predicting individual outcomes using increasingly accessible and granular data, many researchers and policymakers have voiced concern about the potential for these algorithms to propagate bias and discrimination (Kleinberg, Mullainathan, and Raghavan 2018). If the data on which algorithms are trained exhibit certain biases, then AI could propagate these biases on a wider and more subtle scale. Though these concerns are valid, the implications for regulation are ambiguous. In particular, Kleinberg, Mullainathan, and Raghavan (2018) outline three fairness conditions at the heart of debates about algorithmic classification—showing that, except in special cases, no method can satisfy all three conditions simultaneously. In this sense, though concerns about algorithmic fairness ought to continue being voiced, policymakers should proceed with caution when formulating policy, to avoid simply reacting to the latest fad or worry.

Fourth, some are concerned that the emergence of big data and AI will pose a threat to competition because larger companies will be better equipped to train models on larger datasets (Seamans 2017; Bessen 2018). For example, companies with access to more data might be able to reduce business uncertainty by incorporating more information into their forecasts, thereby obtaining lower costs of capital (Begenau, Farboodi, and Veldkamp 2018). However, a countervailing force is the impact of AI on the cost of entry and creative destruction. For instance, the discovery and application of cloud computing allow firms to rent computing power and/or data storage. Aside from the 25 to 50 percent direct cost savings observed in government (West 2010), the indirect effects on entry costs and competition, particularly in concentrated markets, may be larger (Colciago and Etro 2013).
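A stylized numeric example conveys the tension identified above by Kleinberg, Mullainathan, and Raghavan (2018): when two groups have different base rates, a risk score that is calibrated within each group will generally produce unequal error rates across groups. All populations and rates below are hypothetical:

```python
# Stylized demonstration: a score calibrated within each group can still
# yield unequal false-positive rates when the groups' base rates differ.
# All group sizes, base rates, and the threshold are hypothetical.

def false_positive_rate(outcomes, predictions):
    """Share of true negatives (outcome 0) that were classified positive."""
    negative_preds = [pred for y, pred in zip(outcomes, predictions) if y == 0]
    return sum(negative_preds) / len(negative_preds)

# Group A: base rate 0.5 -> a calibrated score assigns everyone 0.5
# Group B: base rate 0.2 -> a calibrated score assigns everyone 0.2
group_a_outcomes = [1] * 50 + [0] * 50   # 50 percent positives
group_b_outcomes = [1] * 20 + [0] * 80   # 20 percent positives

threshold = 0.3                           # classify "positive" above this score
group_a_pred = [1 if 0.5 >= threshold else 0] * 100  # all classified positive
group_b_pred = [1 if 0.2 >= threshold else 0] * 100  # all classified negative

print(false_positive_rate(group_a_outcomes, group_a_pred))  # 1.0
print(false_positive_rate(group_b_outcomes, group_b_pred))  # 0.0
```

The score is calibrated (within each group, the assigned score equals the share of positives), yet the groups’ false-positive rates are as far apart as possible, illustrating why no method can satisfy all the fairness conditions at once except in special cases.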
Nonetheless, regulation and competition policy around big data and AI will remain the subject of active, ongoing debate. Despite these general categories of concerns, caution is especially important when considering prospective regulation. For example, according to Stanford University’s One Hundred Year Study of Artificial Intelligence, “The Study Panel’s consensus is that attempts to regulate ‘AI’ in general would be misguided, because there is no clear definition of AI (it isn’t any one thing), and the risks and considerations are very different in different domains” (Stanford University 2016). Moreover, because AI is an inherently global technology, regulation in one country could, given cross-country linkages, put companies competing in an international marketplace at a significant disadvantage.

Conclusion

Recent advances in computer science and artificial intelligence technology are revolutionizing the U.S. economy. In many fields, tasks that traditionally required humans can now easily be performed by AI algorithms. Although these discoveries have the potential to “be as important and transformational to society and the economy as the steam engine,” according to Brynjolfsson and McAfee (2014, 9), they are also creating known and unknown dependencies and challenges, such as accelerated polarization in the labor market and increased exposure to cybersecurity threats.

This chapter has defined and reviewed recent developments in AI and automation. Unlike traditional forms of information technology (e.g., computers) that require humans to provide instructions and programmatic commands, intelligent systems are defined by their applicability to a wide range of tasks that need little supervision. For example, Google’s new AI algorithm, AlphaZero, successfully taught itself to play chess and subsequently defeated the world’s best chess engine, Stockfish.
Similarly, DARPA has also created tools capable of reliably and rapidly identifying cybersecurity vulnerabilities. Apart from these gaming and national security applications, AI is also frequently applied in the private sector—through, for example, data-driven business analytics and precision agriculture.

Drawing on historical examples, we have demonstrated the potential effects of AI technology on the U.S. labor market. Although advances in AI, and the introduction of technology more broadly, will inevitably change the composition of tasks and jobs by making some tasks typically performed by humans obsolete, we have shown above that humans will continue to have an important economic function because of their comparative advantage over AI in other tasks, even if they do not hold an absolute advantage. This means that companies and entrepreneurs will find it more profitable to design technology capital that complements human capabilities. However, to alleviate the potentially adverse effects of AI on individuals and jobs that are more exposed to disruption, the Trump Administration has responded proactively by supporting and funding reskilling and apprenticeship initiatives in areas where humans retain a comparative advantage. For example, the Pledge to America’s Workers, an initiative from the National Council for the American Worker, already has over 6.5 million pledges toward reskilling workers.

In addition, we have applied economic theory to analyze the wage patterns among industries that are adopting AI technology. In the initial anticipation phase, firms know that they will be more productive but do not yet have the AI capital in place, so real wages rise. However, in the arrival phase, which is typically the primary focus of the popular press, the introduction of AI substitutes for labor as workers compete with machines, thereby depressing real wages.
But as business formation catches up with the new technology, real wages ultimately rise to levels above what they were before AI.

We have also explored ongoing cybersecurity vulnerabilities, along with future threats, as dependence on technology increases. The CEA (2018b) estimated the cost of attacks on these vulnerabilities to be $109 billion in 2016. Drawing on new data from Rapid7 across industries, we find that cybersecurity vulnerabilities are more pronounced than previously thought, even among well-established Fortune 500 firms. The prevalence of these vulnerabilities, coupled with the underreporting of public cybersecurity breaches, suggests that the true cost of malicious cyberattacks may be greater than traditional measures indicate. We have discussed potential causes behind the failure to adopt cybersecurity best practices in the private sector, along with the policy implications, including tools already being used by the Federal government to prevent malicious cyberattacks and phishing attempts.

We conclude by highlighting the Trump Administration’s current policy initiatives to tackle the risks posed by continued technological change in the labor market and new cybersecurity threats. The 2018 National Council for the American Worker, for example, has introduced initiatives to promote reskilling and apprenticeships to help workers transition into new and emerging jobs. For example, the Pledge to America’s Workers already has over 6.5 million commitments to these aims by companies. In a similar vein, the 2018 National Cyber Strategy lays out a comprehensive framework for engaging and dealing with cybersecurity threats.
For example, the “Federal Cybersecurity Risk Determination Report and Action Plan” (White House 2018a) establishes a detailed risk assessment process based on best practices from the NIST Framework to create predictability and encourage the adoption of best practices throughout the Federal government. Moreover, by modernizing educational curricula and equipping teachers with new multimedia content and tools, the emerging National Cyber Education Program will help address the cybersecurity skills gap that currently threatens U.S. economic and national security.

The expansion of artificial intelligence and automation is already having profound effects on the U.S. economy and geopolitical landscape. Although we are only beginning to see their manifestations, and thus the full scale of potential threats and benefits cannot be entirely quantified, these changes pose both new challenges and opportunities. The Trump Administration is committed to policymaking that leverages technological change as an asset rather than a liability, to advancing economic gains for American workers, and to promoting best practices for our digital infrastructure so that America can remain the most prosperous and competitive country during the emerging technological transformation.

Chapter 8

Markets versus Socialism

When the Council of Economic Advisers was founded in 1946, our Nation was at a crucial crossroads. There was bipartisan concern that the transition away from a war economy would lead to another depression, and there was much public debate over the best policies to ensure prosperity. As detailed in the first CEA Annual Report to the President, there were two distinct schools of thought that Congress implicitly charged the CEA’s members to evaluate.
One held “that ‘individual free enterprise’ could, through automatic processes of the market, effect the transition to full-scale peacetime business and (even with recurrent depressions) the highest practicable level of prosperity thereafter.” The other school held “that the economic activities of individuals and groups need, under modern industrial conditions, more rather than less supplementation and systemizing (though perhaps less direct regulation) by central government.” The three members of the first CEA contrasted the “Roman” view that economic prosperity can be handed down by a powerful central government with the “Spartan” view that much of American history at times “carried a cult of individual self-reliance to the point of brutality.” The report warned against “100 percenters” of both views, as each misunderstood the role of government in fostering prosperity, and it advised that “the great body of American thinking on economic matters runs toward a more balanced middle view.” The focus of that first report reminds us that there was a time in American history when grand debates over the merits of competing economic systems were front and center, and the terms of the debates and characteristics of the competing views were widely known. It is clear that such a time may be returning. Detailed policy proposals from self-declared “socialists” are gaining support in Congress and are receiving significant public attention. Yet it is much less clear today than it was in 1946 exactly what a typical voter has in mind when he or she thinks of “socialism,” or whether those who today describe themselves as socialists would be considered “100 percenters” by the first CEA. There is undoubtedly ample confusion concerning the meaning of the word “socialist,” but economists generally agree about how to define socialism, and they have devoted enormous time and resources to studying its costs and benefits.
With an eye on this broad body of literature, this chapter discusses socialism’s historic visions and intents, its economic features, its impact on economic performance, and its relationship with recent policy proposals in the United States. Inevitably, this chapter uses evidence to weigh in on the relative empirical merits of capitalism and socialism, a topic that can be quite divisive. In his landmark book Capitalism, Socialism and Democracy, Joseph Schumpeter (1942, 145) predicted that socialism would become the only respectable ideology of the two, in part because the scholarship regarding both would be dominated by university professors. At the American university, he warned, capitalism “stands its trial before judges who have the sentence of death in their pockets. . . . Well, here we have numbers; a well-defined group situation of proletarian hue; and a group interest in shaping a group attitude that will much more realistically account for hostility to the capitalist order than could the theory.” As documented in this chapter, the scholarship has not become as one-sided as Schumpeter envisioned. The chapter first briefly reviews the historical and modern socialist interpretations of market economies and the challenges socialist policy proposals face in terms of distorting incentives. Thereafter, we review the evidence from the highly socialist countries showing that they experienced sharp declines in output, especially in the industries that were taken over by the state. We review the experiences of economies with less extreme socialism and show that they also generate less output, although the shortfall is not as drastic as with the highly socialist countries. Finally, we assess the economic impact of the current American proposal for socialized medicine, “Medicare for All,” and we find that the taxes needed to finance it would reduce the size of the U.S. economy. To economists, socialism is not a zero-one designation.
Whether a country or industry is socialist is a question of the degree to which (1) the means of production, distribution, and exchange are owned or regulated by the state; and (2) the state uses its control to distribute the country’s economic output without regard for final consumers’ willingness to pay or exchange (i.e., giving resources away “for free”).1 As explained below, this definition conforms with both statements and policy proposals from leading socialists, ranging from Karl Marx to Vladimir Lenin to Mao Zedong to modern self-described socialists.2 In modern models of capitalist economies, there is, of course, an ample role for government. In particular, there are public goods and goods with externalities that will be inefficiently supplied by the free market. Public goods are undersupplied in a completely free market because there is a free-rider problem. For example, if national defense, a public good enjoyed by the whole country, were sold at local supermarkets, few would contribute because they would feel their individual purchase would not matter and they would prefer others to contribute while still being defended. Consequently, the market would not provide sufficient defense. However, socialist regimes go well beyond government intervention into markets with public goods or externalities. This chapter is an empirical analysis of socialism that takes as its benchmark current U.S. public policies. This benchmark has the advantage of being measurable, but it necessarily differs from theoretical concepts of “capitalism” or “free markets” because the U.S. government may not limit its activity to theoretically defined public goods. Relative to the U.S.
benchmark, we find that socialist public policies, though ostensibly well-intentioned, have clear opportunity costs that are directly related to the degree to which they tax and regulate. We begin our investigation by looking closely at the most extreme socialist cases, which are Maoist China, the USSR under Lenin and Stalin, Castro’s Cuba, and other primarily agricultural countries (Pipes 2003).

1 Criterion 1 is from the Oxford English Dictionary, which defines socialism as public policy based on “a political and economic theory of social organization which advocates that the means of production, distribution, and exchange should be owned or regulated by the community as a whole.” Criterion 2 further focuses the discussion to rule out state ownership or regulation for other purposes, such as fighting a war. See Sunstein (2019); and see Samuelson and Nordhaus (1989, 833), who describe “democratic socialist governments [that] expanded the welfare state, nationalized industries, and planned the economy.” 2 For classical socialists, “communism” is a purely theoretical concept that has never yet been put into practice, which is why the second “S” in USSR stands for “Socialist.” Communism is, in their view, a social arrangement where there is neither a state nor private property; the abolition of property is not sufficient for communism. As Lenin explained, “The goal of socialism is communism.” The supposed purpose of the “Great Leap Forward” was for China to transition from socialism to communism before the USSR did (Dikötter 2010). The classical definition therefore stands in contrast to vernacular usage of communism to refer to historical instances of socialism where the degree of state control was the highest, such as the USSR, Cuba, North Korea, or Maoist China. This chapter therefore avoids the term “communism.”
Referring to these same countries, Janos Kornai (1992, xxi) explained that the “development and the break-up and decline of the socialist system amount to the most important political and economic phenomena of the twentieth century. At the height of this system’s power and extent, a third of humanity lived under it.” Not long ago, distinguished economists in the U.S. and Europe offered favorable assessments of highly socialist economies, and many contemporary commentators appear to have forgotten or overlooked this record. Moreover, as one analyzes the impact of moving away from a purely socialist model, as many modern proposals envision, it may be helpful to understand the history of extreme examples. Socialists in the highly socialist countries accused the agriculture sector of being unfair and unproductive (equivalently, food was too expensive in terms of the labor required to produce it) because farmers, who had been working on their land for generations, were too unsophisticated and because the market failed to achieve economies of scale. Government takeovers of agriculture, which forcibly converted private farms into state-owned farms directed by government employees and party apparatchiks, were advertised as the way for socialist countries to produce more food with fewer workers so resources could be shifted into other industries. In practice, however, socialist takeovers of agriculture delivered the opposite of what was promised.3 Food production plummeted, and tens of millions of people died from starvation in the USSR, China, and other agricultural economies where the state took command. Planning the nonagricultural parts of those economies also proved impossible. Present-day socialists do not want the dictatorship or state brutality that often coincided with the most extreme cases of socialism. 
However, peaceful democratic implementation of socialist policies does not eliminate the fundamental incentive and information problems created by high tax rates, large state organizations, and the centralized control of resources. Venezuela is a modern industrialized country that elected Hugo Chávez as its leader to implement socialist policies, and the result was less output in oil and other industries that were nationalized. In other words, the lessons from socialized agriculture carry over to government takeovers of oil, health insurance, and other modern industries: They produce less rather than more, even in today’s information age, where central planning is possibly easier. 3 Many socialist scholars concur on this point (Nolan 1988, 6; Roemer 1995, 23–24; Nove 2010). Proponents of socialism acknowledge that the experiences of the USSR and other highly socialist countries are not worth repeating, but they continue to advocate increased taxation and state control. Such policies would also have negative output effects, albeit of a lesser magnitude, as are seen in cross-country studies of the effect of greater economic freedom on real gross domestic product (GDP). A broad body of academic literature quantifies the extent of economic freedom in several dimensions, including taxation and spending, the extent of state-owned enterprises, economic regulation, and other factors. This literature finds a strong association between greater economic freedom and better economic performance, suggesting that replacing U.S. policies with highly socialist policies, such as Venezuela’s, would reduce real GDP more than 40 percent in the long run, or about $24,000 a year for the average person. Participants in the American policy discourse sometimes cite the Nordic countries as socialist success stories. However, in many respects, the Nordic countries’ policies now differ significantly from policies that economists view as characteristic of socialism.
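The per-person dollar figure quoted above follows from simple arithmetic. As an illustrative back-of-envelope check (the $60,000 per capita GDP baseline is our assumption here, roughly the 2018 U.S. level and consistent with the quoted figure; the underlying study's baseline may differ):

```python
# Back-of-envelope check of the per-person loss implied by a 40 percent
# long-run reduction in real GDP. The $60,000 per capita GDP baseline is
# an assumption for illustration, roughly the 2018 U.S. level.
per_capita_gdp = 60_000      # dollars per person per year (assumed)
long_run_reduction = 0.40    # "more than 40 percent" long-run GDP loss

annual_loss_per_person = per_capita_gdp * long_run_reduction
print(f"Implied annual loss per person: ${annual_loss_per_person:,.0f}")
# Implied annual loss per person: $24,000
```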
Indeed, Nordic representatives have vehemently objected to the characterization that they are socialist (Rasmussen 2015). Nordic healthcare is not free, but rather requires substantial cost sharing. Once implicit taxes are included, marginal labor income tax rates in the Nordic countries today are only somewhat greater than current U.S. rates. Nordic taxation overall is greater and is surprisingly less progressive than U.S. taxes. The Nordic countries also tax capital income less and regulate product markets less than the United States does, but they regulate labor markets more. Living standards in the Nordic countries, as measured by per capita GDP and consumption, are at least 15 percent lower than those in the United States. With an eye toward the inaccurate description of Nordic practices, some in the U.S. have proposed nationalizing payments for healthcare—which makes up more than a sixth of the U.S. economy—through the recent “Medicare for All” proposal. This proposal would create a monopoly government health insurer to provide healthcare for “free” (i.e., without cost sharing) and to centrally set all prices paid to suppliers, such as doctors and hospitals. We find that if this policy were financed through higher taxes, GDP would fall by 9 percent, or about $7,000 per person in 2022. As shown in chapter 4 of this Report, evidence on the productivity and effectiveness of single-payer systems suggests that “Medicare for All” would reduce longevity and health, particularly among seniors, even though it would only slightly increase the fraction of the population with health insurance.4 To the extent that policy proposals mimic the 100 percent experience, the burden is on advocates to explain how their latest policy agenda would overcome the undeniable problems observed when socialist policies were tried in the past. As the sociology professor Paul Starr (2016) put it, “Much of [modern American socialists’] platform ignores the economic realities that European socialists long ago accepted.”5 Marx’s 200th birthday is a good time to gather and review the overwhelming evidence.6 The “Economics of Socialism” section of this chapter begins by briefly reviewing the historical and modern socialist interpretations of market economies and some of the challenges with socialist policy proposals. The subsequent section reviews the evidence from the highly socialist countries, by which we mean countries that were implementing the most state control of production and incomes. Highly socialist countries experienced sharp declines in output, especially in the industries that were taken over by the state. Economies with less extreme forms of socialism also generate less output, although the shortfall is not as drastic as with the highly socialist countries, as shown in the section titled “Socialism and Living Standards in a Broad Cross Section of Countries.” A section on the Nordic countries provides a more detailed examination of them. The final section assesses the economic impact of the headline American proposal, “Medicare for All.”7

4 This Report refers to the specific “Medicare for All” bills in Congress (S. 1804; H.R. 676). The economic effects of other healthcare reform proposals, or aspirations, are not necessarily the same even if they share the same name.

The Economics of Socialism

Historically, philosophers and even some well-regarded economists have offered socialist theories of the causes of income and wealth inequality, and they have advocated for state solutions that are commonly echoed by modern socialists. Both argue that there is “exploitation” in the market sector and that there are virtually unlimited economies of scale in the public sector. Profits are undeserved and unnecessarily add to the costs of goods and services.
The solutions include single-payer systems, prohibitions of for-profit business, state-determined prices to replace the “anarchy of the market,” high tax rates (“from each according to his ability”), and public policies that hand out much of the Nation’s goods and services free of charge (“to each according to his needs”) (Gregory 2004; Marx 1875).

The Socialist Economic Narrative: Exploitation Corrected by Central Planning

When Marx was writing over 150 years ago, obviously exploitive practices were still familiar. The modern socialist view is that exploitation remains real but is somewhat hidden in the market for labor (Gurley 1976a). Much inequality arises, it is said, because market activity is a zero-sum game, with owners and workers paid according to the power they possess (or lack), rather than their marginal products. From the workers’ perspective, profits are an unwarranted cost in the production process and are reflected in an unnecessarily low level of wages. The contest over the fraction of output paid in wages, known among socialists as the “class struggle,” can take place in the political arena, in the private sector with union activity and the like, or violently with riots or revolution (Przeworski and Sprague 1986). As Karl Marx put it, “Modern bourgeois private property is the final and most complete expression of the system of producing and appropriating products, that is based on class antagonisms, on the exploitation of the many by the few” (Marx and Engels 1848, 24).

5 See also Boettke (1990). 6 See also Acemoglu and Robinson (2015), who review Marx’s key predictions about trends for wages and profits and find them to be falsified by the evidence. 7 The CEA previously released research on topics covered in this chapter. The text that follows builds on The Opportunity Costs of Socialism (CEA 2018a), a research paper produced by the CEA.
The Chinese leader Mao Zedong, who cited Marxism as the model for his country, described “the ruthless economic exploitation and political oppression of the peasants by the landlord class” (Cotterell 2011, chap. 6). The Democratic Socialists of America, and elected officials who are affiliated with and endorsed by them, today express similar concerns that workers are harmed when the profit motive is allowed to be an important part of the economic system.8 The French economist Thomas Piketty, whose 2014 book Capital in the 21st Century recalls Marx’s Das Kapital, asserts that inequality today is “terrifying” and that public policy can and must reduce it; wealth holders must be heavily taxed.9 Piketty (2014) concludes that the Soviet approach and other attempts to “abolish private ownership” should at least be admired for being “more logically consistent.” Historical and contemporary socialists argue that heavy taxation need not reduce national output because a public enterprise uses its efficiency and bargaining power to achieve better outcomes. Mao touted the “superiority of large cooperatives.” He decreed that the Chinese government would be the single payer for grain, prohibiting farmers from selling their grain to any other person or business (Dikötter 2010).10 In describing China, the British economists Joan Robinson and Solomon Adler (1958, 3) celebrated that “the agricultural producers’ cooperatives have finally put an end to the minute fragmentation of the land.” Lenin stressed transforming “agriculture from small, backward, individual farming to large-scale, advanced, collective agriculture, to joint cultivation of the land.” Proponents of socialism in America today argue that the Federal government can run healthcare more efficiently than many competing private enterprises.11 State ownership of the means of production is an often-repeated Marxist proposal for ending worker exploitation by leveraging scale economies. This aspect of socialism is less visible in modern American socialism, because in most instances, socialists would allow individuals to be the legal owners of capital and their own labor.12 However, the economic significance of ownership is control over the use of an asset and of the income it generates, rather than the legal title by itself. In other words, the economic value of ownership is sharply diminished if the legal owner has little control and little of the income.13 Full ownership in the economic sense is rejected by socialists; they maintain that private owners left to themselves would not achieve full economies of scale and would continue exploiting workers. Public monopolies, “public options,” profit prohibitions, and the regulatory apparatus allow the socialist state to control asset use, and high tax rates allow the state to determine how much income everyone receives, without necessarily abolishing ownership in the narrow legal sense. Historical socialists—such as Lenin, Mao, and Castro—ran their countries without democracy and civil liberties. Modern democratic socialists are different in these important ways. Nevertheless, even when socialist policies are peacefully implemented under the auspices of democracy, economics has much to say about their effects.

8 See Stone and Gong (2018) and Day (2018a). See also Bernhardt et al. (2008), Sanders (2018), and Section 103 of the House “Medicare for All” bill (H.R. 676), which prohibits health providers from participating unless they are a public or not-for-profit institution. 9 Piketty (2014, 572) writes that “the right solution is a progressive annual tax on capital,” and that “the primary purpose of the capital tax is not to finance the social state but to regulate capitalism” (p. 518). 10 Lenin (1918) also enforced a grain monopoly in the USSR.
The Role of Incentives in Raising and Spending Money

Any productive economic system needs incentives: means of motivating effort, useful application of knowledge, and the creation and maintenance of productive assets. The higher an economy’s tax rates, the more its industries are monopolized by a public enterprise, and the more its goods and services are distributed free of charge, the more disincentives reduce the value created in the economy. Mancur Olson’s famous 1965 book The Logic of Collective Action showed how large groups have trouble achieving common goals without individual incentives. As an important example, Olson disputed Marx’s claim that business owners were working together to reduce wages, even though Olson acknowledged that business owners would have greater profits if wages were lower.

11 The CEA notes that it is directed by the 1946 Employment Act to “formulate and recommend national economic policy to promote employment, production, and purchasing power under free competitive enterprise” (sec. 4a). 12 Even the USSR and other highly socialist countries had elements of private property (Dolot 2011, 134; see also Pryor 1992, chap. 4). The CEA also notes that American socialists may not only intend to prohibit private health insurance but also, for example, intend to nationalize energy companies (Day 2018b). 13 Epstein (1985) and Fischel (1995). See also Samuelson and Nordhaus (1989, 837), who define a socialist economy as one “in which the major economic decisions are made administratively, without profits as a central motive force for production,” and Roemer (1994), who defines socialism independent of legal property rights.

[Figure 8-1. Four Ways to Spend Money: who economizes and who seeks high value when money is spent. When the purchaser spends his own money on himself, he economizes and seeks the highest value; spending his own money on someone else, he economizes but doesn’t seek the highest value; spending someone else’s money on himself, he seeks the highest value but doesn’t economize; and spending someone else’s money on someone else, he neither economizes nor seeks the highest value.]
The paradox, Olson said, is that the market wage is the result of a great many employers’ individual actions. Any specific employer decides the wage and working conditions to offer based on its own profits, without valuing the effects of its decision on the profits of competing employers. The result of competition among employers is that wages are in line with worker productivity, even though wages below that would enhance the profits of employers as a group. The kinds of free-rider problems analyzed by Olson are also a challenge for socialist planning, because the persons deciding on resource allocations—that is, how much to spend on a product and how that product should be manufactured and delivered to the final consumer—are different from those providing the resources and different from the final consumer who is ultimately using them. As the Nobel Prize–winning economist Milton Friedman demonstrated with his illustration of “four ways to spend money” (see figure 8-1), consumers in the market system spend their own money, and are therefore more careful about how much is spent and on what it is spent (Friedman and Friedman 1980). To the extent that they also use what they purchased—the upper-left corner in figure 8-1—they are also more discerning, so that the items purchased are of good value. They will gather and consider information that helps compare the values of different options. The upper-right corner of figure 8-1 gives the case of spending one’s own money on someone else, which introduces inefficiencies because the recipient may place a lower value on the spending. The inefficiency of the lower-left corner is exemplified by the larger spending that takes place when spending on oneself using other people’s money, as with fully reimbursed corporate travel or entertainment.
The lower-right category is the one applicable to government employees who spend tax revenue on government program beneficiaries; not only is there a tendency to overspend using other people’s money, but that spending may have little value from the perspective of program beneficiaries.14 Many presentations of socialist policy options, even those by expert economists, ignore the distinction between individual and group action stressed by Olson. The “Medicare for All” bills currently in Congress, for example, supposedly just swap household expenditures on health insurance that occur under a private system for household expenditures on taxes earmarked for the public program.15 But this swap fundamentally changes the types of healthcare that are ultimately received by consumers, the size of the healthcare budget, and the size of the overall economy. In a private system, a consumer has some control over his or her spending on health insurance—by, for example, selecting a plan with different benefits, or switching to a more efficient provider. Insurers in a private system must be responsive to consumer demands if they want to attract and retain customers and thus stay in business.16 Individuals also have little reason to economize on anything that they can obtain without payment (Arrow 1963; Pauly 1968). In a socialist system, the state decides the amount to be spent, how it is spent, and when and where the services are received by the consumer. A consumer who is unhappy with the state’s choices has little recourse, especially if private businesses are prohibited from competing with the state (as they are under “Medicare for All”). It may be argued that “giant” private corporations also limit consumer choice, but this comparison ignores how corporations are subject to competition. For example, a consumer can purchase goods from Walmart rather than Amazon, not to mention a whole host of other retailers.
Amazon is legally permitted to entice Walmart customers, and vice versa, with low prices, better products, free shipping, and so on. Whereas retail customers are not forced to open their wallets, giant state enterprises are guaranteed revenue through taxation and are often legally protected from competition.17 Those who maintain that Amazon and Walmart are too large might note that the single-payer revenues proposed in “Medicare for All” will be about eight times the revenue for either of these corporations.18 Another problem with the socialist system is that “other people’s money” starts to disappear when the “other people” realize that they have little incentive to earn and innovate because what they receive has little to do with how much they make.19 An important reason that people work and put forth effort is to obtain goods and services that they want. Under socialism, the things they want may be unavailable because the market no longer exists, or are made available without the need for working.

14 The gap between program spending and value to beneficiaries has been measured by Gallen (2015), Finkelstein and McKnight (2008), and Olsen (2008), among others. 15 Cooper (2018) refers to it as the “taxes-for-premiums swap.” Krugman (2017) writes that “most people would gain more from the elimination of insurance premiums than they would lose from the tax hike” without mentioning any of the economic problems with spending someone else’s money on someone else. As Von Mises (1990, chap. 1) observed long ago, advocates of socialist policies “invariably explain how . . . roast pigeons will in some way fly into the mouths of the comrades, but they omit to show how this miracle is to take place.” 16 See also Shleifer (1998). 17 Interestingly, socialist policies could simultaneously reduce the size of private enterprises with antitrust and other policies and enlarge government enterprises with legal protections from competition.
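The “about eight times” comparison above can be verified by simple division, using the figures reported in this chapter: roughly $2.4 trillion in proposed single-payer outlays for 2022 versus about $0.3 trillion in Walmart’s 2017 U.S. revenue. A minimal sketch of the arithmetic:

```python
# Check of the "about eight times" revenue comparison. Both figures come
# from this chapter: roughly $2.4 trillion in proposed 2022 single-payer
# outlays and about $0.3 trillion in Walmart's 2017 U.S. revenue.
single_payer_outlays_tn = 2.4   # trillions of dollars (2022, proposed)
walmart_us_revenue_tn = 0.3     # trillions of dollars (2017)

multiple = single_payer_outlays_tn / walmart_us_revenue_tn
print(f"Single-payer outlays are about {multiple:.0f} times Walmart's U.S. revenue")
# Single-payer outlays are about 8 times Walmart's U.S. revenue
```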
Noneconomists sometimes claim that high taxes do not prevent anyone from working, as long as the tax rate is less than 100 percent, because everyone strives to have more income rather than less. This “income maximization” hypothesis is contradicted by the most basic labor market observations, not to mention decades of research.20 Earning additional income requires sacrifices (a loss of free time, relocating to an area with better-paying jobs, training, taking an inconvenient schedule, etc.), and people evaluate whether the net income earned is enough to justify the sacrifices. Socialism’s high tax rates fundamentally tilt this trade-off in favor of less income.

18 Chapter 4 of this Report estimates that “Medicare for All” would be financed with about $2.4 trillion in 2022. In 2017, Walmart’s U.S. revenues were about $0.3 trillion, while Amazon’s U.S. revenues were less than $0.2 trillion. The final section of this chapter also explains why “Medicare for All” would sharply reduce consumer spending, which suggests that 2017 revenues would be an optimistic projection for what retail corporations would earn with “Medicare for All” in place. 19 For an analysis of the private sector’s innovation advantage, see Winston (2010). 20 E.g., Prescott (2004), Rogerson (2006), Chetty et al. (2011), and Mulligan (2012).

The Economic Consequences of “Free” Goods and Services

Because market prices reveal economically important information about costs and consumer wants, regulations and spending programs that distribute goods or services at below-market prices, such as those that are “free,” have a number of unintended consequences (Hayek 1945). Fewer goods and services will be produced, and what is produced may be misallocated to consumers with comparatively little need. We explain in this section why the very idea that a single-payer government program will use its market power to obtain lower prices is an acknowledgment that the program will be purchasing less quantity or quality. On the demand side of a market, people vary in their willingness to pay for the product or service, and their willingness varies over time. The market system allocates the available goods to consumers who are willing to pay more than the market price, while those not willing to pay the price go without. Willingness to pay is related to income, but it is also related to “need,” at least as consumers perceive need. Consumers are, for example, willing to pay more for food when they are hungry and to buy health insurance when they are older. In this way, the market has a tendency to allocate goods and services when and to whom they are needed. If the government decrees that a product shall be free, then something other than a willingness to pay the market price will determine who receives the available supply. It may be a willingness to wait in line, or political connections, or membership in a privileged demographic group, or a government eligibility formula (Shleifer and Vishny 1992; Barzel 1997; Glaeser and Luttmer 2003). By comparison with the market, giving a product away for free may sometimes have the effect of taking the good away from consumers when they need it most and transferring it to consumers when they need it least. As we show in chapter 4 of this Report, single-payer healthcare programs tend to reallocate healthcare from the old to the young. Centrally planned agricultural systems have, in effect, taken food products away from starving people in rural areas and transferred the products to urban consumers or sold them on the international market. Prices that are below their competitive levels also affect supply.
Although a single government payer has market power that it can use to reduce the incomes of suppliers, the price reduction is accomplished by reducing the quantity or quality of what it purchases in order to squeeze its suppliers.21 This may be one reason why single-payer healthcare systems have longer appointment waiting times than the U.S. system does (see chapter 4 of this Report), and why “free” Nordic colleges yield lower financial returns than higher education in the United States, even though the Nordic returns include no tuition expense (see the Nordic section below).

21 This effect is the monopsony mirror image of monopoly pricing. Sellers with market power typically exercise it by constraining the quantity or quality of what they produce, thereby squeezing the buyers in the market (Williamson 1968; Farrell and Shapiro 1990; Whinston 2006). Buyers with market power typically exercise it by constraining the quantity or quality of what they purchase.

Von Mises (1920) and Hayek (1945) emphasized the value of market prices for coordinating and executing decisions in complex economies, and went so far as to assert that central planning is impossible because it eschews markets. Perhaps contrary to their expectations, centrally planned economies did survive for decades, although these economies performed poorly and survived as long as they did only because of their deviations from the socialist program (Gregory 2004, 5–6).

Socialism’s Track Record

Socialism is a continuum. No country has zero state ownership, zero regulation, and zero taxes. Even the most highly socialist countries have retained elements of private property, with consumers sometimes spending their own money on themselves (Pryor 1992). This chapter therefore begins with the historically common highly socialist regimes, by which we mean countries that implemented the most state control of production and incomes for at least a decade.22 Highly socialist policies continue “to have considerable emotional appeal throughout the world to those who believe that it offers economic progress and fairness, free of chaotic market forces” (Gregory 2004, x). Of the more than a dozen countries meeting these criteria, this section emphasizes Maoist China, Castro’s Cuba, and the USSR under Lenin and Stalin, which are the subject of much scholarship, and Venezuela, which has been unusual as an industrialized economy with elements of democracy that nonetheless pursued highly socialist policies.23

Many of the highly socialist economies were agricultural, with state and collective farming systems implemented by socialist governments to achieve purported economies of scale and, pursuant to socialist ideology, to punish private landowners. Agricultural output dropped sharply when socialism was implemented, causing food shortages. Between China and the USSR, tens of millions of people starved. It took quite some time for sympathetic scholars outside the socialist countries to acknowledge that large, state-owned farms were less productive than small private ones.

The economic failures of highly socialist policies have been described at length both by survivors and by scholars who have reviewed the evidence in state archives. Not only did highly socialist countries discourage the supply of effort and capital with poor incentives, but they also allocated these resources perversely, because central planning made production decisions react to output and input prices in the opposite direction from those of a market economy. Although agriculture is not a large part of the U.S.
economy, present-day socialists echo the historical socialists by arguing that healthcare, education, and other sectors are unfair and unproductive, and they promise that large state organizations will deliver fairness and economies of scale. It is therefore worth acknowledging that socialist takeovers of agriculture have delivered the opposite of what was promised.

Present-day socialists do not want the dictatorship or state brutality that often coincided with the most extreme cases of socialism, and they do not propose to nationalize agriculture. However, the peaceful, democratic implementation of socialist policies does not eliminate the fundamental incentive and information problems created by high tax rates, large state organizations, and the centralized control of resources. As we report at the end of this section, Venezuela is a modern industrialized country that elected Hugo Chávez as its leader to implement socialist policies, and the result was less output in oil and other industries that were nationalized.24

22 The highly socialist countries are sometimes called “communist” or “centrally planned” although, as noted above, communism has a different meaning in the theory of socialism. We presume that, in contrast to the Nordic countries, central government spending far exceeds private spending in highly socialist countries, although, with pervasive state ownership and centralized control, it is difficult to construct accurate measures of the components of spending that would be comparable between highly socialist countries and the rest of the world.

23 Also recall, from the “Economics of Socialism” section above, the parallels between modern socialist rhetoric and the statements attributed to Mao, Castro, and Lenin.

When evaluating the misalignment between the promises of highly socialist regimes to eliminate the misery and exploitation of the poor and the actual effects of their policies, it is instructive to consult a major guide that economists use to determine value: the revealed preference of the population, that is, people voting with their feet. The implementation of highly socialist policies, as in Venezuela, has been associated with high emigration rates. Perhaps more telling is that historically socialist regimes, such as the USSR, China, North Korea, and Cuba, have forcibly prevented people from leaving.

State and Collective Farming

State and collective farming (hereafter, “state farming”) is a historically common practice in highly socialist countries.25 The state acquires private farmland, and often much livestock, by force. The land is organized into large parcels, typically about one per village, as compared with the multitude of parcels in a typical village before collectivization. Villagers are required to work the land, with the output belonging to the state. Decisions are made by government employees and party apparatchiks, who may have had little or no experience or specialized knowledge in comparison with the original landowners (Pryor 1992). These decisions include devising and implementing complex systems of production targets and quality requirements (Nolan 1988).

The socialist narrative emphasizes exploitation and class struggle, which in an agricultural economy refers to the power dynamic that determines the division of agricultural income between landlords and farm workers.
State farms purport to end the exploitation by eliminating the landlords, known as kulaks in the USSR.26 Another advantage of state farms, from the socialist perspective, was economies of scale (Pryor 1992). In principle, the knowledge and techniques of the best farmer could be applied to all the land, rather than only to the comparatively small plot that the best farmer owned.27 Capital may also be easier to obtain for a larger organization.

24 See also the sections of this chapter on socialism in the Nordic countries and on “Medicare for All,” and chapter 4 of this Report, which include analyses of single-payer healthcare. Further evidence about the effects of socialism on nonagricultural industries is reported by Conquest (2005), Gregory (2004), Horowitz and Suchlicki (2003), and Kornai (1992). Johnson and Brooks (1983, 9) describe how the “Soviet rural road system can only be described as a disgrace, the result of decades of socialist neglect.”

25 Among the highly socialist countries, state or collective farms were formed, e.g., in the USSR; elsewhere in the Soviet Bloc; and in Vietnam, North Korea, China, Cuba, South Yemen, Congo, Ethiopia, Cambodia, and Laos (Pryor 1992, chap. 4). In principle, participation in collective farms was voluntary, and operations were collectively managed by villagers, whereas state farms were owned and managed by the government, with the farm workers as government employees. In practice, even the collective farms could come “under the control of the Communist Party and the government,” as they did in the USSR (Dolot 2011, chap. 2). See also Johnson and Brooks (1983, 4–5), Conquest (1986, 171), and Pryor (1992, 12–14).

26 With landlords resisting the seizure of their property, the state often imprisoned or murdered them (Conquest 1986; Rummel 2011).
Writing about the USSR in 1929, Joseph Stalin stressed transforming “agriculture from small, backward, individual farming to large-scale, advanced, collective agriculture, to joint cultivation of the land.” Writing about China in 1958, the British economist Joan Robinson asserted that “the minute fragmentation of the land” that prevailed before collective farming was a major source of inefficiency. The family itself was sometimes criticized as operating on too small a scale; in China, household utensils were confiscated and villagers were assigned to communal kitchens for eating and food preparation (Jisheng 2012).28

Eyewitnesses tell a different story about the operation of state farms, and about central planning more generally. In Cuba and the USSR, for example, the managers of state farms were chosen from the ranks of the Communist Party rather than for management skill or agricultural knowledge (Dolot 2011).29 “The state monopoly stifled incentives for increasing production,” writes a Chinese eyewitness (Jisheng 2012, 174–77). Production units sometimes had an incentive to produce less and to hoard inputs, in order to obtain more favorable allocations the next year (Gregory 1990).

Unintended Consequences

State farms reduced agricultural productivity rather than increasing it. The unwarranted faith in state farms had a doubly negative effect on agricultural output: not only was less produced per worker, but workers were also removed from agriculture, on the mistaken understanding that farming was becoming more productive (Conquest 1986) and would produce surpluses that would finance the growth of industry (Gregory 2004). In both China and the USSR, the lack of food and the reliance on central planning rather than market mechanisms resulted in millions of deaths by starvation. Statistics from highly socialist regimes are informative, but necessarily imprecise.
Gregory (1990), Kornai (1992), and others explain how officials in these regimes deceive their superiors and the public. Refugees from the regimes may be free to talk after their escape, but they may not constitute a random sample of the populations they left, and they may have imperfect memories. Readers are advised that the estimates in this section are necessarily inexact.

27 The CEA is not aware of socialist explanations of why the best farmer owned comparatively little land or did not contribute his or her talents to a larger but purely voluntary collective. A neoclassical explanation might involve credit constraints and the like, or simply that it would not be efficient for the best farmer to control more land than he or she chose to purchase in the marketplace (i.e., the market reflects genuine limitations on scale economies; see also Conquest 1986).

28 See also Lenin (1951).

29 See also O’Connor’s (1968, 205) description of Cuban state farms with “[inefficiencies] arising from overcentralized decisionmaking, together with a shortage of qualified personnel which was aggravated by a tendency to place politically reliable people in top administrative posts even when they lacked technical skills.”

In Cuba, the disincentives inherent in the socialist system sharply reduced agricultural production. As O’Connor (1968, 206–7) explains, “Because wage rates bore little or no relationship to labor productivity and [state farm] income, there were few incentives for workers to engage wholeheartedly in a collective effort.” Table 8-1 shows the change in agricultural production in Cuba over the agrarian reform period of 1959–63, when about 70 percent of farmland was nationalized (Zimbalist and Eckstein 1987). Production of livestock fell by between 14 percent (fish) and 84 percent (pork). Among the major crops, production fell by between 5 percent (rice) and 75 percent (malanga). The biggest crop, sugar, fell 35 percent.
There was not a major Cuban famine, however, because of Soviet assistance and emigration.30 The CEA also notes that, though Cuba had a gross national income similar to that of Puerto Rico before the Cuban Revolution in the late 1950s, by 2000 Cuban gross national income had fallen by almost two-thirds relative to Puerto Rico’s.31

Table 8-1. Agricultural Production in Cuba After the Nationalization of Farms
(change from 1957–58 to 1963–64, percent)

Livestock          Crop
Beef       –45     Sugar      –35
Pork       –84     Corn       –39
Poultry    –36     Rice        –5
Fish       –14     Malanga    –75
Eggs       –40     Yuca       –56
Milk       –39     Potatoes   –50

Source: Salazar-Carrillo and Nodarse-León (2015).

In the USSR, the collectivization of agriculture occurred with the First Five-Year Plan, from 1928 to 1932. Horses were important for farm work, but their numbers fell by 47 percent, in part because nobody had much incentive to care for them once they became collective property (Conquest 1986). In the Central Asian parts of the USSR, the number of cattle fell by more than 75 percent, and the number of sheep by more than 90 percent (Conquest 1986). Looking at official Soviet data for about 1970, Johnson and Brooks (1983) concluded that the entire program of socialist policies (“excessive centralization of the planning, control, and management of agriculture, inappropriate price policies, and defective incentive systems for farm managers and workers and for enterprises that supply inputs to agriculture”) was reducing Soviet agricultural productivity by about 50 percent.32

A famine ensued in 1932 and 1933, and about 6 million people died of starvation (Courtois et al. 1999).33 The death rates were high in Ukraine, a normally fertile region from which the Soviet planners had been exporting food.34 Figure 8-2 shows the time series of Ukrainian deaths by sex, along with births. The series also appears to show that millions more people were never born because of the famine.

Mao’s government implemented the so-called Great Leap Forward in China from 1958 to 1962, including a policy of mass collectivization of agriculture that provided “no wages or cash rewards for effort” on farms.35 The per capita output of grain fell 21 percent from 1957 to 1962; for aquatic products, the drop was 31 percent; and for cotton, edible oil, and meat, it was about 55 percent (Lin 1992; Nolan 1988).36 During the Great Chinese Famine of 1959–61, an estimated 45 million people died (Dikötter 2010). Figure 8-3 shows the time series of deaths and births, which form a pattern similar to Ukraine’s, except that the absolute number of deaths was an order of magnitude greater.

Failed agricultural policies are not the only way that civilians died at the hands of their highly socialist state. Rummel (1994), Courtois and others (1999), Pipes (2003), and Holmes (2009) document noncombatant deaths in the Soviet Bloc, Yugoslavia, Cuba, China, Cambodia, Vietnam, Laos, North Korea, and Ethiopia. These figures exclude deaths in military combat but include deaths in purges, massacres, concentration camps, forced migration, escape attempts, and famines. The death rate in famines was particularly high in North Korea, where about 600,000 people died from starvation in the late 1990s out

30 On Soviet economic aid to Cuba, see Walters (1966).

31 This is per Collins, Bosworth, and Soto-Class (2006) and the Barro-Lee data set, using GDP for Cuba in 1950. The result is more extreme if the comparison is based on GDP, because people and businesses outside Puerto Rico have substantial claims on the production occurring there.

32 This is likely an underestimate because, as Johnson and Brooks acknowledge, their research project was made possible through cooperation with the Soviet government.

33 Conquest (1986, 301) cites 7 million.

34 In fact, the USSR as a whole was exporting grain at that time (Dalrymple 1964, 271; Courtois et al. 1999, 167). Note that there were also starvation deaths elsewhere in the USSR (Conquest 1986).
In contrast to the famines associated with highly socialist regimes, Ó Gráda (2000) and Goodspeed (2016, 2017) find that one important margin of adjust