FEDERAL RESERVE BANK OF DALLAS
Economic Review
Third Quarter 1994

An Economy at Risk? The Social Costs of School Inefficiency
Lori L. Taylor

Monetary Policy and Recent Business-Cycle Experience
R. W. Hafer, Joseph H. Haslag, and Scott E. Hein

GATT and the New Protectionism
David M. Gould and William C. Gruben

The Saving Grace
Richard Alm and David M. Gould

This publication was digitized and made available by the Federal Reserve Bank of Dallas' Historical Library (FedHistory@dal.frb.org)

Economic Review
Federal Reserve Bank of Dallas

Robert D. McTeer, Jr., President and Chief Executive Officer
Tony J. Salvaggio, First Vice President and Chief Operating Officer
Harvey Rosenblum, Senior Vice President and Director of Research
W. Michael Cox, Vice President and Economic Advisor
Stephen P. A. Brown, Assistant Vice President and Senior Economist

Research Officers: John Duca, Robert W. Gilmer, William C. Gruben, Evan F. Koenig

Economists: Robert T. Clair, Kenneth M. Emery, Beverly J. Fox, David M. Gould, Joseph H. Haslag, D'Ann M. Petersen, Keith R. Phillips, Stephen D. Prowse, Fiona D. Sigalla, Lori L. Taylor, Lucinda Vargas, Kelly A. Whealan, Mark A. Wynne, Kevin J. Yeats, Mine K. Yücel, Carlos E. Zarazaga

Research Associates: Professor Nathan S. Balke, Southern Methodist University; Professor Thomas B. Fomby, Southern Methodist University; Professor Gregory W. Huffman, Southern Methodist University; Professor Finn E. Kydland, University of Texas at Austin; Professor Roy J. Ruffin, University of Houston

Editors: Rhonda Harris, Judith Fioo, Monica Reeves
Design: Gene Autry
Graphics and Typography: Laura J. Bell

The Economic Review is published by the Federal Reserve Bank of Dallas. The views expressed are those of the authors and do not necessarily reflect the positions of the Federal Reserve Bank of Dallas or the Federal Reserve System. Subscriptions are available free of charge. Please send requests for single-copy and multiple-copy subscriptions, back issues, and address changes to the Public Affairs Department, Federal Reserve Bank of Dallas, P.O. Box 655906, Dallas, TX 75265-5906, (214) 922-5251. Articles may be reprinted on the condition that the source is credited and the Research Department is provided with a copy of the publication containing the reprinted material.

On the cover: an architectural rendering of the Federal Reserve Bank of Dallas.

Contents

Page 1
An Economy at Risk? The Social Costs of School Inefficiency
Lori L. Taylor

A preponderance of economic evidence demonstrates that the public school system in the United States is less efficient than it could be. However, few researchers have examined the economic consequences of such inefficiency. Lori Taylor finds that, although school inefficiency can crowd out consumption and investment in the remainder of the economy and can reduce the rate of return to investments in education, inefficiency has only a limited impact on economic activity. She estimates that, even compounded over twenty-five years, plausible degrees of school inefficiency reduce consumption and potential GDP by less than 1 percent. As such, the social costs of school inefficiency are similar in magnitude to the social costs of monopoly or the corporate income tax.

Page 14
Monetary Policy and Recent Business-Cycle Experience
R. W. Hafer, Joseph H. Haslag, and Scott E. Hein

Some critics of recent monetary policy have focused on slow M2 growth, claiming that the Federal Reserve is too interested in price stability and is forsaking its growth mandate. Others criticize the Fed for achieving price stability too cautiously and urge the adoption of a rule that seeks to eliminate inflation more quickly. R. W. Hafer, Joseph Haslag, and Scott Hein examine two alternative monetary policies and gauge their expected impacts on economic activity. Both policies are simulated over the period 1987–92. One policy, a GNP-targeting rule similar to one proposed by Bennett McCallum, slows nominal GNP growth substantially.
Simulated nominal GNP, however, is quite volatile under the GNP-targeting rule. The other policy, referred to in the article as the M2-targeting approach, seeks to hit the midpoint of the M2 target cones. The authors find that although adopting the M2-targeting approach would have resulted in somewhat faster average nominal GNP growth compared with what actually occurred, the start-and-stop pattern exhibited during the recent U.S. recovery would still be present. Thus, the evidence indirectly supports the notion that real shocks were the driving force behind recent weakness in economic activity.

Page 29
GATT and the New Protectionism
David M. Gould and William C. Gruben

The Uruguay Round of the General Agreement on Tariffs and Trade (GATT) is the first agreement of its kind that reduces or eliminates tariffs on many goods and addresses issues related to intellectual property rights, trade in services, and agricultural subsidies. With good reason, it has generated much optimism about the future of free world trade. But does GATT's trade liberalization today mean that trade will remain liberalized tomorrow? Increasingly, governments are counteracting the perceived unfair trade practices of other nations with their own trade barriers. While concerns about fairness are legitimate, raising trade barriers to counteract actual or perceived unfair trade practices of others is another form of protectionism that restricts world trade. This new protectionism has most often taken the form of antidumping and countervailing duties. Because the use of antidumping and countervailing duties has grown dramatically in recent years across many countries, David Gould and William Gruben analyze whether the recent changes to the GATT accord will discourage the most protectionist aspects of these widely used trade barriers.
Gould and Gruben find that while the new GATT agreement does not eliminate the ability of such countries to misuse antidumping and countervailing duties, the accord delineates the rules of such duties much more clearly and provides mechanisms that will likely limit their abuse.

Page 43
The Saving Grace
Richard Alm and David M. Gould

Many economists agree that a country's rate of saving can be a key factor in the growth rate and living standards the country achieves. Analysts are less certain about which factors have positive and negative influences on saving, what role government should have in creating a better environment for saving, and the extent to which a country can offset the effects of low domestic saving by tapping into other countries' savings. Economists, bankers, and officials discussed these and other aspects of saving earlier this year at a symposium sponsored by the Federal Reserve Bank of Dallas. Richard Alm and David Gould recap much of that discussion in this article.

An Economy at Risk? The Social Costs of School Inefficiency
Lori L. Taylor
Senior Economist, Federal Reserve Bank of Dallas

Many economists have studied the public school system of the United States, and most of them have reached the same conclusion: reducing expenditures would not reduce student achievement. Eric Hanushek (1986) analyzed sixty-five studies that examined the relationship between expenditures per pupil and student achievement on standardized tests. Only thirteen of the sixty-five studies indicated that lower expenditures produced significantly lower student achievement. (For an explanation, see the box entitled "You Get What You Pay For.") If we assume, as do most economists, that a school system's primary objective is to produce measurable academic skills, then this economic evidence suggests that the U.S. public school system is inefficient.
The inefficiency could arise from an inappropriate mix of inputs, a less effective use of resources than otherwise comparable schools, or the pursuit of unmeasured objectives (such as drug education) that consume school resources.1 Inefficiency could be caused by regulatory constraints, a lack of competitive pressures, or incomplete information on the part of the producers and consumers of educational services.

School system inefficiency could be more than an academic concern. Edward Denison (1979) attributes 11 percent of U.S. economic growth over the years 1948–73 to increases in the educational attainment of the labor force. John Bishop (1989) estimates that gross national product would now be at least 2 percent higher if student test scores had continued to rise during the 1970s instead of experiencing their well-documented decline.

Few researchers, however, have directly examined the economic consequences of school inefficiency. I find that although school inefficiency can crowd out consumption and investment in the remainder of the economy and can reduce the rate of return to investments in education, it has only a limited impact on economic activity. I estimate that, even compounded over twenty-five years, plausible degrees of school inefficiency reduce consumption and potential GDP by less than 1 percent. As such, the social costs of school inefficiency are similar in magnitude to the social costs of the corporate income tax (Feldstein 1979) or of monopoly (Harberger 1954).

The degree of school inefficiency

Although the production-function studies described by Hanushek (1986) indicate that the typical U.S. school is inefficient, they are not designed to quantify that inefficiency. Thus, while they indicate that the typical school could cut spending without harming achievement, they do not indicate how much the school could cut spending without doing so.
The author thanks Zsolt Becsi, Stephen P. A. Brown, Kathy J. Hayes, and Harvey Rosenblum for comments and suggestions. Special thanks to Roselyn Boateng, Margie Evans, and Kay Kutis for their assistance on this project.

1 While society may value these objectives highly, they are difficult to quantify and have an uncertain relationship with our measures of economic output. Therefore, the economics literature has generally relied on standardized tests to measure the outputs of the educational process.

2 By this methodology, virtually every industry will show some degree of inefficiency.

3 This example, drawn from Grosskopf et al. (1994), measures inefficiency along a ray from the origin. Other studies use different measures of distance from the frontier.

4 For more information on LP, see Chiang (1984).

You Get What You Pay For

To a large extent, reducing educational expenditures would not reduce student achievement because the primary determinants of educational expenditures—teacher salaries and pupil–teacher ratios—are uncorrelated with student achievement. Hanushek's (1986) survey of the literature identifies sixty studies that analyze the relationship between student achievement and teacher salaries and 112 studies that analyze the relationship between student achievement and class size. In both cases, only nine studies suggest that higher salaries or smaller classes have a positive effect on learning. The vast majority of studies indicate that small changes in salary or class size would have no systematic effect on student achievement.

The survey evidence does not imply that teachers are unimportant to learning. Economic research and basic common sense indicate that teachers are very important. (For an example of research on the question, see Hanushek 1971.) However, the analysis does indicate that teachers who earn higher salaries are generally no more effective than teachers who earn lower salaries. One reason for the missing link between a teacher's ability and salary is that the observable characteristics for which teachers are commonly compensated—their educational background and experience—are uncorrelated or only weakly correlated with student achievement.

Hanushek (1986) found only six studies indicating that teachers with advanced degrees are more effective than teachers with less education. He found five studies indicating that highly educated teachers are less effective in the classroom and ninety-five studies indicating no effect from the teacher's educational background. Similarly, only one-third of the relevant studies in Hanushek's survey indicate a positive effect from teacher experience; more than two-thirds of the studies found no such relationship. Furthermore, some of the studies indicating a positive correlation between teacher experience and student achievement may simply reflect the ability of experienced teachers to avoid students who are difficult to teach.

Intuitively, it is not surprising that researchers find no systematic relationship between student achievement and teacher characteristics like educational attainment and experience. After all, a person with a doctorate in mathematics may know more about the subject than a person with a bachelor's degree, but that does not mean that the Ph.D. is any more (or less) able to communicate that knowledge to students. Similarly, experience could help teachers hone their skills, but it could also cause them to burn out and become less effective.

To measure the degree of educational inefficiency (how much could be cut), I turn to another form of research—frontier analysis. Frontier analyses measure school inefficiency by identifying the most efficient schools in a study and using their characteristics to define a production possibilities frontier against which the remaining schools are measured.2 The most efficient schools are the schools that either need the fewest resources to produce a given level of student achievement or that produce the most student achievement with a given level of resources. The remaining schools are deemed inefficient because they use more resources or produce less achievement than comparable frontier schools. Researchers quantify that inefficiency by measuring the distance between the school's output and the production possibilities frontier.

Figure 1 illustrates a production possibilities frontier for schools that produce two outputs (y1 and y2).3 Schools T and S help define the educational frontier. School A is inefficient. If school A behaved like school T, it could produce more of both outputs without any additional resources. Ratio OT/OA represents the proportion by which school A could expand both outputs. If OT/OA equals 1.1, then school A could expand both outputs by 10 percent if it used its resources efficiently. Thus, in this example, school A is 10 percent inefficient.

Most researchers use linear programming techniques to construct the educational frontier. Linear programming (LP) is a mathematical optimization strategy that finds the frontier by repeatedly solving a system of linear equations.4 Because the technique is mathematical rather than statistical, LP is especially vulnerable to omitted-variables bias and measurement errors.

A few researchers use statistical estimation techniques to define the educational frontier. Steven Deller and Edward Rudnicki (1993) make strong assumptions about the distribution of inefficiency that allow them to use a maximum likelihood function to estimate the frontier.5 Subhash Ray (1991) and Therese McCarty and Suthathip Yaisawarng (1993) use a two-step procedure that combines LP and regression analysis. In the first step, they use LP to construct an educational frontier that does not control for student and family characteristics.
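The OT/OA expansion ratio described for Figure 1 can be made concrete with a small numerical sketch. The school coordinates below are hypothetical, and the frontier is simplified to the single segment joining two efficient schools, so this is an illustration of the radial (ray-from-the-origin) measure rather than a full linear-programming frontier over many schools.

```python
# Radial (output-oriented) inefficiency along a ray from the origin,
# as in the OT/OA ratio of Figure 1. All coordinates are hypothetical,
# and the frontier is simplified to the single segment T-S.

def radial_expansion(frontier_t, frontier_s, school):
    """Factor by which `school`'s output vector (y1, y2) could be
    scaled before reaching the line through frontier schools T and S."""
    (tx, ty), (sx, sy) = frontier_t, frontier_s
    ax, ay = school
    # The line through T and S satisfies n . (x, y) = n . T,
    # where n is a normal vector to the segment.
    nx, ny = ty - sy, sx - tx
    return (nx * tx + ny * ty) / (nx * ax + ny * ay)

T, S = (2.8, 6.0), (6.0, 2.8)   # efficient schools on the frontier
A = (3.0, 5.0)                  # inefficient school inside the frontier

lam = radial_expansion(T, S, A)
print(f"OT/OA = {lam:.2f}")     # OT/OA = 1.10, i.e., 10 percent inefficient
```

In a full study, the frontier is the boundary of the output set spanned by all observed schools, and the expansion factor for each school is found by solving one linear program per school; the two-output geometry above is only the simplest case.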
In the second step, they use regression techniques to adjust for the demographic characteristics that were omitted from the first step. The two-step procedure reduces problems associated with mismeasurement and outliers in the data, but it could yield biased measures of efficiency if the omitted student and family characteristics influence the optimal allocation of school resources.6

Most studies of the educational frontier in the United States suggest that primary and secondary schools are less than 15 percent inefficient, on average.7 Four studies find school inefficiency of less than 5 percent (Bessent and Bessent 1980; Bessent et al. 1982, 1984; and Färe et al. 1989). Another four studies find inefficiency in the 5-percent to 10-percent range (Bessent et al. 1984, Sengupta and Sfeir 1988, Deller and Rudnicki 1993, and Grosskopf et al. 1994).8 Ray (1991) finds an average inefficiency of 13 percent. The remaining study by McCarty and Yaisawarng (1993) suggests an average inefficiency of 77 percent, but the sample of schools is deliberately unrepresentative, making their extreme results a questionable indicator of the typical U.S. experience.9

It is important to remember that all of these studies base their description of the educational frontier on the "best practice" observed in the data. Thus, they yield relative, rather than absolute, estimates of inefficiency. It is possible that schools judged relatively efficient in these analyses are not reaching their full potential. Therefore, these estimates of inefficiency should be considered lower bounds on the absolute inefficiency in the public school system.

Figure 1: Production Possibilities Frontier. (Outputs y1 and y2; schools T and S lie on the frontier, school A lies inside it, and O is the origin.)

5 Specifically, Deller and Rudnicki argue that OLS estimates of the production function have a compound error term (v – u), where u represents production inefficiency and v represents noise. They generate a conditional expected value for u by using maximum likelihood estimation and assuming a normal distribution for v and a half-normal distribution for u.

6 McCarty and Yaisawarng find that their two-step procedure yields efficiency estimates that are statistically similar to those produced by an LP model that incorporates demographics but treats them as exogenous inputs that schools cannot control.

7 To be included in this discussion, a study of the educational frontier must have used data on primary or secondary schools in the United States, have attempted to control for student and family characteristics, and have reported its findings in such a way that a measure of technical inefficiency could be inferred.

8 Bessent et al. (1984) is cited twice because it reports separately on school efficiency in 1981 and 1983. The higher inefficiency estimate reflects their study of 1981 data.

9 Inefficiency for the two-step McCarty and Yaisawarng analysis is inferred relative to the most efficient school in their sample by adding a constant (the absolute value of the most negative residual) to their measure of "pure" technical efficiency (ûk). Their LP calculations indicate an average inefficiency of 39 percent.

The economic impact of school inefficiency

School inefficiency can influence the economy in two ways. First, it can reduce the resources available for consumption and investment in the noneducational sector of the economy.10 Second, school inefficiency can reduce the return to investments in the educational sector of the economy. It probably has both effects in unknown proportion.
10 I define the noneducational sector as gross domestic product excluding the public primary and secondary educational sector. Because the national income and product accounts use educational expenditures to represent the output of the education sector, this approach is equivalent to subtracting public expenditures on primary and secondary education from gross domestic product.

11 Let rT be the rate of return to noneducational investment. Then rT = rE·SE + rNE·(1 – SE), where rE is the rate of return on equipment investment, rNE is the rate of return on nonequipment investment, and SE is equipment's share of total investment.

However, by estimating how much faster the economy could have grown if all of the resources lost through school inefficiency had instead been allocated to the noneducational sector (the pure first effect), and estimating how much faster the economy could have grown if the resource allocation had remained unchanged but inefficiency had not reduced the rate of return to education (the pure second effect), one can set bounds on the estimates of economic impact. As demonstrated below, using these two estimation approaches leads to very similar results and a reasonably narrow range for the estimated effect.

Although the two approaches attack the measurement problem from different directions, they both rely on the concept of social rates of return. The first estimation approach, which assumes that school inefficiency crowds out other productive activities, relies on estimates of the social rate of return to investments in physical capital. The second estimation approach, which assumes that school inefficiency reduces the rate of return to investments in education, relies primarily on estimates of the social rate of return to
The social rate of return to any investment is the interest rate at which the present value of social benefits from an investment exactly equals the present value of social costs of that investment. The social benefits and costs equal the private benefits and costs plus any measurable benefits or costs to society in general. For example, public high school students do not pay tuition or for books, so their private cost of education is essentially the opportunity cost of their time. However, the government does pay the teachers and buy the books, so the social cost of an investment in education equals the private costs of the students’ time plus the government’s expenditures on education. Similarly, any tax revenues generated by an investment are a benefit to government and thus a part of the social benefits of that investment. The social rate of return to physical capital. Considerable economic research suggests that the social rate of return to physical capital (that is, the rate of return gross of taxes and investment subsidies) is between 6 and 12 percent. Edwin Mills (1989) uses payments to capital, imputed rents, and capital gains to estimate rates of return to housing and nonhousing physical capital in the United States. He finds that since the 1950s, private, nonhousing capital (equipment and business structures) has earned a social rate of return (15 percent) that is roughly triple the social rate of return to housing (5 percent). Given the relative shares of housing and nonhousing capital investment since 1967, Mills’ estimates imply an average return to physical capital of 12 percent. Crosscountry analysis by J. Bradford De Long and Lawrence Summers (1991) suggests that the social rate of return to investments in manufacturing equipment exceeds 30 percent but that the social rate of return to investments in structures is negligible. Because their data indicate that equipment represents only 36 percent of U.S. 
investment, the De Long and Summers estimates would also be consistent with a 12-percent return to physical capital.11 Psacharopoulos (1981) notes that 10 percent is a common rule of thumb for the opportunity cost of capital. However, some economists use a rate of return as low as 6.5 percent (for example, see King, Plosser, and Rebelo 1988).

Figure 2: Annual Rates of Return to Secondary Education. (Real rate of return, 1967–92, ranging roughly between .07 and .15.)

The social rate of return to education. Research suggests that the social rate of return to primary and secondary education is comparable to the social rate of return to physical capital. Walter McMahon (1991) calculates internal rates of return to education over time and finds that the real social rate of return to investments in secondary education averaged 12.8 percent over the period 1967–88.12 Using the same approach, I find that the rate of return to education for males averaged 11.9 percent over the period 1967–92 (Figure 2). (For a description of the data and the internal rate of return methodology, see Appendix A.) The most recent estimates of the internal, social rate of return for countries in the Organization for Economic Cooperation and Development (OECD) indicate an average rate of return to secondary schooling of 10.2 percent (Psacharopoulos 1993).

Most other estimates of the rate of return to education in the United States follow an estimation relationship developed by Jacob Mincer (1979). However, Mincerian rates of return equal social rates of return only when the social costs of an additional year of schooling equal one year of potential earnings for the person receiving the education.13 If social costs exceed potential earnings, then the Mincerian rate of return exceeds the social rate of return.
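The internal-rate-of-return concept behind these estimates — find the discount rate at which the present value of social benefits equals the present value of social costs — can be sketched numerically. The cash flows below are hypothetical (a few years of schooling costs followed by forty years of wage gains), not the article's data, and bisection stands in for whatever root-finder the underlying studies use.

```python
# Internal rate of return: the discount rate r at which the present
# value of net social benefits is zero. Cash flows are hypothetical.

def npv(rate, flows):
    """Present value of `flows`, where flows[t] is the net social
    benefit in year t (costs negative, benefits positive)."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

def internal_rate(flows, lo=0.0, hi=1.0, tol=1e-10):
    """Bisection solve for npv(r) = 0; valid here because the flows
    change sign once (early costs, later benefits), so npv crosses
    zero exactly once on [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, flows) > 0:
            lo = mid            # still profitable at mid: r lies higher
        else:
            hi = mid
    return (lo + hi) / 2

# Four years of schooling costs (outlays plus the opportunity cost of
# student time), then forty years of higher earnings, in $ thousands.
flows = [-25.0] * 4 + [10.0] * 40
r = internal_rate(flows)
print(f"social rate of return = {r:.1%}")
```

With these made-up flows the solver settles near 8.5 percent, comfortably inside the 6-to-13-percent range the article reports for secondary education.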
Similarly, if potential earnings exceed social costs, then the social rate of return exceeds the Mincerian rate of return. Over the last twenty-five years, the social costs of secondary education have averaged 1.2 times potential earnings, suggesting that researchers using Mincerian rates of return overestimate the social rate of return by 20 percent.14 On the other hand, because investments in education exhibit diminishing returns and Mincerian rates of return seldom distinguish between secondary and postsecondary education, the Mincerian approach tends to underestimate the rate of return to secondary education.15

In general, Mincerian rates of return fall between 7 and 11 percent (for example, see Mincer 1979, Izraeli 1983, Angrist and Krueger 1991, and Card and Krueger 1992), although recent estimates have ranged as low as 2 percent (Low and Ormiston 1991) and as high as 16 percent (Ashenfelter and Krueger 1992). Correcting for the measurement of social costs (but not for the problem of diminishing returns) suggests a social rate of return to secondary education of between 6 and 10 percent. Thus, adjusted estimates of the Mincerian rate of return and direct estimates of the internal rate of return suggest that the social rate of return to primary and secondary education lies between 6 and 13 percent.

In deriving these estimates, economists generally presume that wages reflect all of the benefits to education. If there are other benefits, such as the externality effects described in Lucas' (1988) model of economic growth, then researchers will underestimate the true rate of return. Similarly, if the wage increases that are associated with more education reflect greater innate abilities in addition to school effects, then researchers will overestimate.16

12 Because so few Americans have less than a primary school education, it is not possible to estimate the rate of return to primary education. International analyses suggest that the rate of return to primary education exceeds the rate of return to secondary education (Psacharopoulos 1984, 1993).

13 In a Mincerian estimation equation, the coefficient on years of schooling equals rs·kt, where rs is the rate of return to schooling and kt is the ratio of total educational costs in period t to potential earnings in period t (Mincer 1979). Because cost information can be difficult to acquire, most researchers assume (as did Mincer) that kt = 1 and interpret the coefficient on years of schooling as the rate of return (rs). However, if kt > 1, then the Mincerian rate of return (rs·kt) overestimates the true rate of return (rs), and vice versa.

14 Potential earnings and social costs are derived as in Appendix A.

15 The Mincerian approach does not yield credible estimates of the rate of return to primary education in the United States because potential earnings are zero for this group (see note 12).

16 For a further discussion of biases in estimates of the rate of return to education, see Weale (1993).

The costs of crowding-out. Assuming that school inefficiency crowds out investment and consumption in the noneducational sector, one can use estimates of the social rate of return to physical capital to estimate the growth consequences of school inefficiency. Because the educational frontier research suggests that U.S. public schools are less than 15 percent inefficient, I consider three cases—5-percent inefficiency, 10-percent inefficiency, and 15-percent inefficiency. As Table 1 indicates, billions of dollars could be lost through school inefficiency.

Table 1: The Resource Value of School Inefficiency

                                        Resources available for reallocation
                                        (billions), assuming inefficiency of
Year    Real expenditures (billions)    5 percent    10 percent    15 percent
1965    $ 92.42                         $ 4.6        $ 9.2         $13.9
1970     129.63                           6.5         13.0          19.4
1975     143.50                           7.2         14.3          21.5
1980     145.22                           7.3         14.5          21.8
1985     157.42                           7.9         15.7          23.6
1990     202.24                          10.1         20.2          30.3

If those resources were allocated instead to the noneducational sector, then both consumption and investment could increase substantially. On average, investments in physical capital account for 16 percent of spending in the noneducational sector. Assuming that this tendency persists, each dollar reallocated from primary and secondary education would increase investment in physical capital by 16 cents. If such a reallocation had begun in 1967, and school inefficiency were 5 percent, then by 1992 the U.S. capital stock would have been between $34 billion and $38 billion greater than it actually was, depending on the rate of return to physical capital (see Appendix B).

In turn, any increase in the capital stock would have augmented future economic output. Given the range of estimates for social rates of return, each $1 increase in capital investment would have increased GDP by 6 cents to 12 cents per year. By 1992, a persistent 5-percent inefficiency in the school system would have reduced GDP by $2 billion to $5 billion per year, depending on the presumed rate of return (Table 2). A persistent 15-percent inefficiency would have reduced GDP by up to $13.8 billion.

Higher output and the redistribution of resources away from education would translate into higher consumption. Assuming that consumption's share of noneducational output remains unchanged, I estimate that consumption in 1992 would have been between $9 billion and $32 billion higher if the school system had been efficient (Table 3). Because consumption is a rough proxy for welfare, I estimate that persistent school
Rather than thinking of school inefficiency as crowding out other productive activities, one can think of it as reducing the social rate of return to education. After all, economists calculate social rates of return to education using the opportunity costs of student time plus actual expenditures on education as the measure of social costs and increased future wages as the measure of social benefits. However, an efficient school system would have spent less than the actual system spent. If the actual system were 5 percent inefficient, then an efficient system would have spent 5 percent less. Reducing expenditures by 5 percent reduces social costs by 2 percent, on average. In turn, lower social costs lead to higher rates of return. I estimate that the efficiency-adjusted rate of return to education is between 1.4 and 4.3 percent higher than the observed rate of return (see Appendix A). To measure how much faster the economy would grow if investments in primary and secondary education earned a higher rate of return, Table 2 The GDP Effect of Twenty-Five Years of School Inefficiency (Billions of dollars) GDP loss Assuming inefficiency of Social rate of return 5 percent 10 percent 15 percent Method 1: 12 percent 6 percent $4.6 2.0 $9.2 4.1 $13.8 6.1 Method 2: 13 percent 6 percent $14.6 6.5 $29.2 13.0 $44.8 19.9 Table 3 The Annual Welfare Loss After Twenty-Five Years of School Inefficiency (Billions of dollars) Welfare loss Assuming inefficiency of Social rate of return 5 percent 10 percent 15 percent Method 1: 12 percent 6 percent $10.7 8.9 $21.3 17.8 $32.0 26.7 Method 2: 12 percent 6 percent $9.8 4.3 $19.6 8.7 $30.1 13.4 NOTES: Method 1 measures potential GDP through reallocating resources to the noneducational sector. Method 2 measures potential GDP through improved efficiency in primary and secondary education. 
I calculate annual returns to educational investments using a plausible range of values from the literature on social rates of return to education (6–13 percent). I then compare those returns to annual-return calculations that use the efficiency-adjusted rates of return in Table 4. The difference between the two calculations represents most of the additional output that could have been produced each year if the school system were efficient and therefore earning the higher rate of return (see Appendix B).

Table 4
Efficiency-Adjusted Rates of Return to Secondary Education
(Average rate for the period 1967–92)

                              Efficiency-adjusted rate of return,
                                   assuming inefficiency of
Observed rate of return      5 percent     10 percent     15 percent
13 percent                     13.2%          13.4%          13.6%
 6 percent                      6.1            6.2            6.3

For example, the United States spent $109 billion on primary and secondary education in 1967. Together with the opportunity costs of the students’ time, this represents an educational investment of $206 billion. Assuming a 13-percent rate of return, such an investment would add $26.8 billion to GDP each year. However, if the school system were 5 percent inefficient, then the rate of return could have been 1.4 percent higher. With a higher rate of return, the original $206 billion investment would have added $27.1 billion to GDP each year. Thus, assuming that the students in 1967 have an average working life of forty years, a 5-percent inefficiency in 1967 alone would reduce GDP by more than $300 million each year until 2007.

17 Real consumption for 1992 was $3,342 billion (Council of Economic Advisers 1994).
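The arithmetic of the 1967 example can be verified with a short script. One labeled assumption: the rate “1.4 percent higher” is read below as a relative increase (0.13 × 1.014 ≈ 13.18 percent, consistent with Table 4’s rounded 13.2 percent); that reading is mine, not a figure stated in the article.

```python
# Back-of-the-envelope check of the 1967 example in the text.
# Assumption: "1.4 percent higher" is a relative increase, so the
# efficiency-adjusted rate is 0.13 * 1.014, about 13.18 percent.
investment_1967 = 206.0                # $ billions: $109B spending + $97B opportunity cost
observed_rate = 0.13
adjusted_rate = observed_rate * 1.014  # efficiency-adjusted rate (assumed reading)

annual_return_observed = investment_1967 * observed_rate       # about $26.8 billion per year
annual_return_adjusted = investment_1967 * adjusted_rate       # about $27.2 billion per year
annual_loss = annual_return_adjusted - annual_return_observed  # about $0.37 billion per year
```

The annual loss of roughly $0.37 billion matches the text’s “more than $300 million each year”; the small gap between the computed adjusted return and the article’s $27.1 billion reflects rounding in the adjusted rate.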
If all investments in primary and secondary education since 1967 had earned a higher, efficiency-adjusted rate of return, and the proceeds of those higher returns had been reinvested according to historical experience, then by 1992 GDP would have been between $6.5 billion and $44.8 billion higher, depending on the degree of educational inefficiency and the social rate of return to education (Table 2). Consumption, and therefore welfare, would have been up to $30 billion higher (Table 3).

Conclusions

A preponderance of the economic evidence demonstrates that the public school system in the United States is inefficient. Studies of the educational frontier quantify that inefficiency and suggest that the U.S. system is up to 15 percent inefficient, on average. As demonstrated above, school inefficiency in the 5-percent to 15-percent range costs billions of dollars per year in foregone output. I calculate that twenty-five years of 5-percent inefficiency in primary and secondary education would have reduced GDP by between $2 billion and $15 billion. A persistent 15-percent inefficiency would have reduced GDP by between $6 billion and $45 billion. I find the lower bound on these ranges by assuming that school inefficiency crowds out other productive activities. I find the upper bound by assuming that school inefficiency reduces the rate of return to investment in primary and secondary education. The impact of such losses on a $5 trillion economy with nearly $3.4 trillion in consumption would seem rather minimal. By my calculations, twenty-five years of school inefficiency would have reduced annual output and consumption by less than 1 percent. However, Arnold Harberger (1954) found that the distortions induced by monopolies amounted to only 0.1 percent of output, and Martin Feldstein (1979) found that the distortions induced by the corporate income tax amounted to approximately 1 percent of output.
The social costs of school inefficiency, therefore, cannot be dismissed.

References

Angrist, Joshua D., and Alan B. Krueger (1991), “Does Compulsory School Attendance Affect Schooling and Earnings?” Quarterly Journal of Economics 106 (November): 979–1014.

Ashenfelter, Orley, and Alan Krueger (1992), “Estimates of the Economic Return to Schooling from a New Sample of Twins,” NBER Working Paper Series, no. 4143 (Cambridge, Mass.: National Bureau of Economic Research).

Bessent, Authella M., and E. Wailand Bessent (1980), “Determining the Comparative Efficiency of Schools Through Data Envelopment Analysis,” Educational Administration Quarterly 16 (Spring): 57–75.

———, ———, J. Edam, and D. Long (1984), “Educational Productivity Council Employs Management Science Methods to Improve Educational Quality,” Interfaces 14 (November/December): 1–8.

———, ———, J. Kennington, and B. Reagan (1982), “An Application of Mathematical Programming to Assess Productivity in the Houston Independent School District,” Management Science 28 (December): 1355–67.

Bishop, John H. (1989), “Is the Test Score Decline Responsible for the Productivity Growth Decline?” American Economic Review 79 (March): 178–97.

Card, David, and Alan B. Krueger (1992), “Does School Quality Matter? Returns to Education and the Characteristics of Public Schools in the United States,” Journal of Political Economy 100 (February): 1–40.

Chiang, Alpha C. (1984), Fundamental Methods of Mathematical Economics, 3rd edition (New York: McGraw-Hill Book Company).

Council of Economic Advisers (1994), Economic Report of the President (Washington, D.C.: U.S. Government Printing Office).

Deller, Steven C., and Edward Rudnicki (1993), “Production Efficiency in Elementary Education: The Case of Maine Public Schools,” Economics of Education Review 12 (March): 45–57.

De Long, J. Bradford, and Lawrence H.
Summers (1991), “Equipment Investment and Economic Growth,” Quarterly Journal of Economics 106 (May): 445–502.

Denison, Edward F. (1979), Accounting for Slower Economic Growth: The United States in the 1970s (Washington, D.C.: Brookings Institution).

Färe, Rolf, Shawna Grosskopf, and William L. Weber (1989), “Measuring School District Performance,” Public Finance Quarterly 17 (October): 409–28.

Feldstein, Martin (1979), “The Welfare Cost of Capital Income Taxation,” Journal of Political Economy 86 (April, pt. 2): S29–51.

Grosskopf, Shawna, Kathy Hayes, Lori Taylor, and William Weber (1994), “On the Political Economy of School Deregulation,” Federal Reserve Bank of Dallas Research Paper no. 9408 (Dallas, May).

Hanushek, Eric A. (1986), “The Economics of Schooling: Production and Efficiency in Public Schools,” Journal of Economic Literature 24 (September): 1141–77.

——— (1971), “Teacher Characteristics and Gains in Student Achievement: Estimation Using Micro Data,” American Economic Review, Proceedings 61 (May): 280–88.

Harberger, Arnold C. (1954), “Monopoly and Resource Allocation,” American Economic Review, Proceedings 44 (May): 77–87.

Izraeli, Oded (1983), “The Effect of Variations in Cost of Living and City Size on the Rate of Return to Schooling,” Quarterly Review of Economics and Business 23 (Winter): 93–108.

King, Robert G., Charles I. Plosser, and Sergio T. Rebelo (1988), “Production, Growth and Business Cycles I: The Basic Neoclassical Model,” Journal of Monetary Economics 21 (March/May): 195–232.

McCarty, Therese A., and Suthathip Yaisawarng (1993), “Technical Efficiency in New Jersey School Districts,” in The Measurement of Productive Efficiency: Techniques and Applications, eds. Harold O. Fried, C.A. Knox Lovell, and Shelton S. Schmidt (New York: Oxford University Press).

McMahon, Walter W. (1991), “Relative Returns to Human and Physical Capital in the U.S. and Efficient Investment Strategies,” Economics of Education Review 10 (4): 283–96.

Mills, Edwin S.
(1989), “Social Returns to Housing and Other Fixed Capital,” AREUEA Journal 17 (Summer): 197–211.

Mincer, Jacob (1979), “Human Capital and Earnings,” in Economic Dimensions of Education (Washington, D.C.: National Academy of Education).

Psacharopoulos, George (1993), “Returns to Investment in Education: A Global Update,” World Bank Working Paper no. WPS 1067 (Washington, D.C., January).

——— (1984), “The Contribution of Education to Economic Growth: International Comparisons,” in International Comparisons of Productivity and Causes of the Slowdown, ed. John W. Kendrick (Cambridge, Mass.: American Enterprise Institute/Ballinger Publishing Company).

——— (1981), “Returns to Education: An Updated International Comparison,” Comparative Education 17 (3): 321–41. Reprint, Washington, D.C.: World Bank, Department of Education.

Ray, Subhash C. (1991), “Resource-Use Efficiency in Public Schools: A Study of Connecticut Data,” Management Science 37 (December): 1620–28.

Sengupta, Jati K., and Raymond E. Sfeir (1988), “Efficiency Measurement by Data Envelopment Analysis with Econometric Applications,” Applied Economics 20 (March): 285–93.

U.S. Bureau of the Census (1991), Statistical Abstract of the United States: 1991, 111th edition (Washington, D.C.: U.S. Government Printing Office).

U.S. Bureau of the Census, Current Population Reports, Series P60–184 (1993 and earlier years), Money Income of Households, Families and Persons in the United States: 1992 (Washington, D.C.: U.S. Government Printing Office).

U.S. Department of Education (1993), Digest of Education Statistics 1993 (Washington, D.C.: U.S. Government Printing Office).

Weale, Martin (1993), “A Critical Evaluation of Rate of Return Analysis,” Economic Journal 103 (May): 729–37.

Low, Stuart A., and Michael B. Ormiston (1991), “Stochastic Earnings Functions, Risk and the Rate of Return to Schooling,” Southern Economic Journal 57 (April): 1124–32.

Lucas, Robert E.
(1988), “On the Mechanics of Economic Development,” Journal of Monetary Economics 22 (July): 3–43.

Appendix A
Rate of Return Calculations

The internal, social rate of return to education is the interest rate at which the present value of the social benefits from education equals the present value of the social costs. In general, economists use earnings differentials at age t (E_t) to measure the social benefits. Per-pupil expenditures plus the opportunity cost of student time equal the social costs (C_t). Therefore, the social rate of return is the interest rate (r) that solves equation A.1,

(A.1)   Σ_{t=1}^{T} E_t /(1+r)^t = Σ_{t=1}^{T} C_t /(1+r)^t,

where T is retirement age (65).1 Population surveys provide data on the annual earnings of males according to education levels and age groups (U.S. Bureau of the Census, 1968–93). For example, the survey of current population for 1992 indicates that men ages 18–24 who had a secondary school education earned $11,805 on average, while men in the same age group who had a primary school education earned $8,447 on average. The difference ($3,358) approximates the social benefit of education (E_t) because it represents the additional earnings associated with additional education. The social cost of education (C_t) has two components. The Digest of Education Statistics (U.S. Department of Education 1993) provides annual information on enrollments and expenditures for public primary and secondary education. As in McMahon (1991), I approximate the opportunity costs of student time as 75 percent of the annual earnings of an 18-year-old male with a primary school education. I find that the social rate of return to secondary education for males averaged 11.9 percent over the period 1967–92.
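The solution of equation A.1 can be sketched with a simple bisection solver. The benefit and cost streams below are hypothetical placeholders chosen only to make the example run; they are not the article’s Census or Digest data.

```python
# Sketch of the internal-rate-of-return calculation in equation A.1:
# find r such that the present value of earnings differentials equals
# the present value of social costs.

def social_rate_of_return(benefits, costs, lo=0.0, hi=1.0, tol=1e-8):
    """Bisection solve for r with sum_t (E_t - C_t)/(1+r)^t = 0.
    benefits and costs are lists indexed by t = 1..T."""
    def npv(r):
        return sum((b - c) / (1.0 + r) ** t
                   for t, (b, c) in enumerate(zip(benefits, costs), start=1))
    # Costs come early and benefits late, so npv is decreasing in r.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if npv(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical streams: 12 years of schooling costs, then earnings
# differentials through retirement (values are illustrative only).
costs = [10.0] * 12 + [0.0] * 35     # expenditures + opportunity cost of time
benefits = [0.0] * 12 + [20.0] * 35  # earnings differential E_t
r = social_rate_of_return(benefits, costs)
```

Bisection is adequate here because, with costs concentrated early and benefits late, the net present value declines monotonically in r, so the root is unique on the search interval.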
As Figure 2 in the text illustrates, higher earnings differentials in the 1980s more than compensated for the increased expenditures on education and led to increasing returns to education over the period. Equation A.1 can also produce efficiency-adjusted rates of return. For example, suppose that the public school system is 5 percent inefficient. Then per-pupil expenditures could have been 5 percent lower without having any negative effect on the benefits of education. To estimate the efficiency-adjusted rate of return, I reduce C_t by 5 percent of expenditures and recalculate. If school inefficiency is 10 percent, then I reduce per-pupil expenditure by 10 percent before calculating r. Thus, the efficiency-adjusted rate of return is the interest rate at which the present value of social benefits equals the present value of efficiency-adjusted social costs. I find that over the period 1967–92, the efficiency-adjusted rate of return is between 1.4 and 4.3 percent higher than the observed rate of return, depending on the degree of inefficiency. Because expenditures’ share of education costs has been rising, I also find that the gap between observed rates of return and efficiency-adjusted rates of return has been rising.

1 For a further discussion, see McMahon (1991).

Appendix B
Calculating Inefficiency’s Effect on GDP

Method 1

Each year, school inefficiency crowds out consumption and investment in the noneducational sectors of the economy. If E_0 is school spending in the initial period, and υ is the degree of inefficiency, then υE_0 represents the resources available for redistribution in that period. Let S_0 represent investment’s share of the noneducational economy in the initial period. Thus, S_0 υE_0 is the increase in investment that results from the initial redistribution. The increased investment means that the capital stock in the next period will also increase (Δk_1 = S_0 υE_0).
If the social rate of return to physical capital is r, then output in the next period (period 1) increases by rΔk_1.1 In subsequent periods, any additional output is available for consumption and investment, and any additional capital created in the previous period continues to generate returns.2 Thus, in period t,

Δk_t = S_{t–1}(υE_{t–1} + rΔk_{t–1}) + Δk_{t–1},   and   ΔGDP_t = rΔk_t.

For example, consider the data in Table B1, and let 1967 be the initial period. In 1967, real expenditures for public primary and secondary schools totaled nearly $109 billion (U.S. Department of Education 1993). Assuming that the school system was 5 percent inefficient, $5.4 billion could have been redistributed to the noneducational sector without reducing future GDP. Because investment’s share of noneducational spending was 15 percent (Council of Economic Advisers 1994), investment would have increased by approximately $0.8 billion. Thus, at the beginning of 1968, the U.S. capital stock could have been $0.8 billion greater than it actually was. Assuming that the rate of return to capital was 12 percent, the additional $0.8 billion in capital would have added $0.1 billion to GDP in 1968.

Table B1
Data for Method 1 (Inefficiency = 0.05, r_k = 0.12)

Year    School spending ($)    Investment share (%)    ΔCapital stock ($)    ΔGDP ($)
1967          108.8                   .15                      —                —
1968          116.2                   .16                     .83              .10
1969          122.2                   .16                    1.74              .21
1970          129.6                   .15                    2.75              .33
1971          129.9                   .16                    3.80              .46
1972          133.6                   .17                    4.92              .59
1973          137.9                   .18                    6.16              .74
1974          144.4                   .16                    7.51              .90
1975          143.5                   .15                    8.85             1.06
1976          141.9                   .15                   10.06             1.21
1977          144.6                   .17                   11.33             1.36
1978          143.8                   .18                   12.77             1.53
1979          146.5                   .18                   14.30             1.72
1980          145.2                   .17                   15.93             1.91
1981          140.9                   .16                   17.45             2.09
1982          141.3                   .15                   18.95             2.27
1983          146.2                   .16                   20.39             2.45
1984          150.6                   .17                   21.93             2.63
1985          157.4                   .18                   23.69             2.84
1986          166.0                   .17                   25.57             3.07
1987          172.7                   .17                   27.52             3.30
1988          185.7                   .17                   29.49             3.54
1989          195.5                   .17                   31.62             3.79
1990          202.2                   .16                   33.83             4.06
1991          206.7                   .15                   36.07             4.33
1992          213.0                   .15                   38.22             4.59

NOTE: All monetary values are in billions of dollars.
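The Method 1 recursion can be sketched in a few lines of Python. The spending figures below are the first Table B1 entries; because the table reports rounded investment shares, the computed path only approximates the published ΔCapital stock and ΔGDP columns.

```python
# Sketch of Appendix B, Method 1: inefficiency crowds out investment.
# Recursion: dk_t = S_{t-1} * (v * E_{t-1} + r * dk_{t-1}) + dk_{t-1},
#            dGDP_t = r * dk_t.

def method1(spending, shares, v=0.05, r=0.12):
    """Return (delta_k, delta_gdp) paths given school spending E_t
    ($ billions) and investment shares S_t (fractions)."""
    dk, dgdp = [0.0], [0.0]
    for E_prev, S_prev in zip(spending[:-1], shares[:-1]):
        dk_next = S_prev * (v * E_prev + r * dk[-1]) + dk[-1]
        dk.append(dk_next)
        dgdp.append(r * dk_next)
    return dk, dgdp

spending = [108.8, 116.2, 122.2, 129.6]  # 1967-70 school spending, $ billions
shares = [0.15, 0.16, 0.16, 0.15]        # rounded investment shares from Table B1
dk, dgdp = method1(spending, shares)
# dk[1] = 0.15 * 0.05 * 108.8, about 0.82 (Table B1 shows .83 with unrounded shares)
```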
In 1968, school spending totaled $116.2 billion, and the resources available for redistribution would have been $5.9 billion (.05 • $116.2 billion + $0.1 billion). Because 16 percent of noneducational resources were allocated to investment, investment in 1968 would have been $0.9 billion greater. By the beginning of 1969, the additional investments in 1967 ($0.8 billion) and 1968 ($0.9 billion) would have added $1.7 billion to the capital stock. Thus, GDP would have been $0.2 billion higher in 1969. If the pattern of inefficiency persisted for twenty-five years, then in 1992 the capital stock would have been $38.2 billion higher and GDP would have been $4.6 billion higher.

1 I assume that most government spending is not investment spending, so that the return on government spending (excluding primary and secondary education) is negligible.

2 These calculations are gross of depreciation and do not include any costs imposed by distortionary school taxes. If depreciation were included, the estimates of the social costs of inefficiency would be somewhat smaller. If tax distortions were included, the estimates of social cost would be somewhat larger.

Method 2

School inefficiency can also reduce GDP by reducing the rate of return to investments in education. To measure this effect, I calculate the annual return to investments in education using credible bounds on the observed rate of return (6 percent and 13 percent) and compare them with the annual return implied by the corresponding efficiency-adjusted rates of return in Table 4. The difference represents part of the losses in GDP that can be attributed to school inefficiency.
To be complete, I also consider the fact that some of the additional returns to education would have been invested in either physical capital or additional education and that any such investments would also augment GDP.3 In each time period, investments in primary and secondary education (I_t) represent the sum of actual expenditures and the opportunity costs of student time. The Digest of Education Statistics 1993 provides annual information on total expenditures for public primary and secondary education. As in the calculations for the internal rate of return to education, I approximate the opportunity cost of time for secondary school students as 75 percent of the annual earnings of an 18-year-old male with a primary school education. Because they are generally too young to work legally, I assume that the opportunity cost of time is zero for primary school students. I use the GDP deflator to adjust for inflation. The data on real opportunity costs, real expenditures, and total costs can be found in Table B2.

To illustrate, consider the data in Table B2, let 1967 be the initial period, and let the observed return on investments in primary and secondary education (r_e) be 13 percent. In 1967, real expenditures were $109 billion, the total opportunity cost of the students’ time was $97 billion, and total educational investment was $206 billion. In 1968, that $206 billion investment would have earned $27.1 billion if schools were efficient but only $26.8 billion if schools were 5 percent inefficient. The difference ($0.37 billion) represents the additional output that could have been produced in 1968. Assuming that expenditure shares were stable, that additional output would have produced an additional $0.01 billion in educational investment and an additional $0.06 billion in physical capital investment. Assuming no change in educational efficiency, educational investments since the initial period (1967) would earn an annual return of

ŷ_{r_e,T} = Σ_{t=1}^{T} r_e I_{t–1}.

In 1969, ŷ_{r_e,T} = $55.6 billion (r_e • ($206 billion + $222 billion)). However, if the system were efficient, then output and investments in previous periods would have been greater, and the annual return would have been

ŷ_{r_e*,T} = Σ_{t=1}^{T} [r_e* (I_{t–1} + S_{e,t–1} ΔGDP_{t–1}) + r_k S_{k,t–1} ΔGDP_{t–1}],

where r_e* is the efficiency-adjusted rate of return, S_{e,t–1} is education’s share in output, ΔGDP_{t–1} is the additional output in period t–1, r_k is the return to physical capital, and S_{k,t–1} is capital’s share in output. If the school system were 5 percent inefficient, then in 1969, ŷ_{r_e*,T} = $56.4 billion (r_e* • ($206 billion + $222 billion + $0.01 billion) + r_k • ($.06 billion)). The additional output in period t would be

ΔGDP_t = ŷ_{r_e*,T} − ŷ_{r_e,T}.

If the school system were 5 percent inefficient and the social rate of return to education were 13 percent, then ΔGDP_t = $0.8 billion in 1969 and ΔGDP_t = $14.6 billion in 1992.

3 I assume that investments in physical capital earn a 12-percent rate of return and that noneducational government expenditures earn a negligible rate of return.
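The Method 2 accounting can be sketched the same way. The investment figures and shares below are the first two Table B2 rows; the efficiency-adjusted rate re_star = 0.1318 is my reading of a rate “1.4 percent higher” than 13 percent, so the computed path only approximates the published ΔGDP column.

```python
# Sketch of Appendix B, Method 2: GDP loss from a lower return to education.
# y_hat(re, T)  = sum_t re * I_{t-1}
# y_hat(re*, T) = sum_t [re* * (I_{t-1} + Se_{t-1} * dGDP_{t-1})
#                        + rk * Sk_{t-1} * dGDP_{t-1}]
# dGDP_T = y_hat(re*, T) - y_hat(re, T)

def method2(investment, se, sk, re=0.13, re_star=0.1318, rk=0.12):
    """Return the dGDP path implied by efficiency-adjusted returns.
    re_star = 0.1318 is an assumed reading of "1.4 percent higher"."""
    y, y_star, dgdp = 0.0, 0.0, [0.0]
    for I_prev, se_prev, sk_prev in zip(investment, se, sk):
        d_prev = dgdp[-1]
        y += re * I_prev
        y_star += re_star * (I_prev + se_prev * d_prev) + rk * sk_prev * d_prev
        dgdp.append(y_star - y)
    return dgdp

investment = [205.8, 221.7]  # 1967, 1968 educational investment, $ billions
se = [0.00, 0.04]            # education's share of output
sk = [0.00, 0.15]            # physical capital's share of output
dgdp = method2(investment, se, sk)
# dgdp[1] = (0.1318 - 0.13) * 205.8, about $0.37 billion, as in Table B2
```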
Table B2
Data for Method 2 (Inefficiency = .05, r_k = .12, r_e = .13)

Year   Opportunity cost ($)   School spending ($)   I_e ($)   S_e (%)   S_k (%)   Y_re ($)   Y_re* ($)   ΔGDP ($)
1967         96.95                  108.8            205.8      .00       .00        —           —          —
1968        105.52                  116.2            221.7      .04       .15      26.75       27.13       .37
1969        120.03                  122.2            242.2      .04       .15      55.57       56.35       .79
1970        113.98                  129.6            243.6      .05       .15      87.06       88.31      1.25
1971        107.05                  129.9            236.9      .04       .15     118.73      120.45      1.72
1972        112.93                  133.6            246.6      .04       .16     149.53      151.72      2.19
1973        138.47                  137.9            276.4      .04       .17     181.58      184.28      2.70
1974        113.64                  144.4            258.1      .04       .16     217.51      220.78      3.27
1975         90.83                  143.5            234.3      .04       .14     251.06      254.88      3.82
1976         98.47                  141.9            240.3      .04       .15     281.52      285.86      4.33
1977         94.57                  144.6            239.2      .04       .16     312.77      317.64      4.87
1978         95.25                  143.8            239.1      .04       .17     343.86      349.29      5.43
1979         85.52                  146.5            232.0      .04       .17     374.94      380.94      6.00
1980         81.91                  145.2            227.1      .04       .16     405.10      411.68      6.58
1981         71.89                  140.9            212.8      .04       .16     434.63      441.78      7.15
1982         66.60                  141.3            207.9      .04       .15     462.30      470.00      7.71
1983         61.45                  146.2            207.7      .04       .15     489.33      497.59      8.26
1984         73.18                  150.5            223.7      .04       .17     516.32      525.15      8.83
1985         71.84                  157.4            229.2      .04       .17     545.41      554.86      9.46
1986         68.53                  166.0            234.6      .04       .16     575.21      585.32     10.11
1987         62.52                  172.7            235.2      .04       .16     605.70      616.49     10.79
1988         64.26                  185.7            250.0      .04       .16     636.28      647.76     11.48
1989         60.90                  195.5            256.4      .04       .16     668.78      680.99     12.21
1990         66.11                  202.2            268.3      .04       .15     702.11      715.08     12.97
1991         65.78                  206.7            272.5      .04       .14     737.00      750.76     13.76
1992         61.25                  213.0            274.2      .04       .15     772.41      786.99     14.57

NOTE: All monetary values are in billions of dollars.

R. W. Hafer, Professor of Economics, Southern Illinois University–Edwardsville; Joseph H. Haslag, Senior Economist, Federal Reserve Bank of Dallas; Scott E.
Hein, First National Bank at Lubbock Distinguished Scholar, Texas Tech University

Monetary Policy and Recent Business-Cycle Experience

The authors thank John Duca, Ken Emery, Evan Koenig, Jerry O’Driscoll, and Harvey Rosenblum for helpful comments on earlier drafts of this article.

In recent years, economists have rekindled the debate over whether price stability should be the sole objective of monetary policy or whether output growth and full employment should be included as additional objectives. In some theories, eliminating inflation is associated with economic dislocation—rising unemployment and slower economic growth—and increased economic volatility, at least temporarily. Those advocating a broad scope for monetary policy objectives argue that making price stability the sole objective is a far too one-sided trade-off. Instead, they contend, the Federal Reserve also should be concerned with promoting output growth and smoothing fluctuations in the economy. In this vein, critics of recent Federal Reserve policy contend that monetary policy has been too restrictive. In a series of Wall Street Journal editorials, Martin Feldstein (1992), Milton Friedman (1992), and James Buchanan and David Fand (1992) asserted that slow M2 growth indicates a Federal Reserve policy that is overly restrictive and cited the failure of the Federal Reserve to keep M2 growth within its target growth range in recent years as evidence of this. Thus, critics reasoned, the Fed must be responsible (at least partly) for weak economic growth. Both Friedman and Buchanan and Fand suggested that letting M2 grow at the midpoint of its target growth range would be an acceptable strategy. Feldstein urged an even more aggressive approach: increase M2 growth to make up for past weakness. In each of these critiques, M2 was the gauge of monetary policy and, more importantly, was identified as the appropriate target for the Fed to hit.
Ironically, other critics claim that progress toward an average inflation rate of zero has been virtually immeasurable. Price stability proponents argue that the gradual elimination of inflation leads to uncertainty, which impedes output growth. Bennett McCallum (1987, 1988) proposes a rule that seeks to eliminate inflation quickly and, on average, would deliver output growth consistent with full employment. In McCallum’s setup, the target for monetary policy is nominal gross national product (GNP). McCallum presents evidence from in-sample experiments comparing actual nominal GNP with a simulated GNP generated by his strategy. McCallum’s results show that simulated GNP stays fairly close to its desired target path, and he therefore deems his proposal a successful strategy for monetary policy. In this article, we address both sets of critics. To do so, we examine two alternative monetary policies and gauge their possible impacts on economic activity. Our particular focus is how nominal GNP would have behaved between the fourth quarter of 1986 and the fourth quarter of 1992, the period approximately spanning the last half of the business-cycle expansion that ended in 1990 and the early recovery. We describe simulations of nominal GNP in cases in which policymakers chose one of the two policies. Simulating economic activity for this period (1986:4–92:4) covers different phases of a business cycle and allows us to assess how nominal GNP might have fared under each monetary policy, especially by comparing the shape and duration of the simulated business cycle with what actually occurred. More importantly, our results address complaints lodged by both sets of monetary policy critics. For those who believe that recent monetary policy has been too restrictive, we provide evidence that a policy focused on maintaining more rapid M2 growth would not have increased economic growth greatly.
For those who emphasize price stability, our results provide a glimpse of the path nominal GNP growth would have experienced had a zero-inflation policy been implemented cold turkey. This article presents two main findings. First, the evidence suggests that the average growth rate of nominal GNP would have been only one-quarter to one-half of a percentage point higher had the Federal Reserve implemented a feedback rule designed to maintain M2 growth.1 In particular, fluctuations in GNP growth would have had approximately the same amplitude as what actually occurred, and the timing of changes in nominal GNP growth would have been roughly identical to what actually happened. Second, our findings indicate that implementing a McCallum-style anti-inflationary policy (hereafter referred to as the GNP-targeting rule) would have been successful in more rapidly slowing nominal GNP growth. This particular simulation exercise, however, shows that nominal GNP growth would have been more volatile compared with what actually occurred. This extra volatility appears to be the price paid for the particular set of nonmonetary shocks that occurred during the simulation period and follows from the fact that the GNP-targeting rule only partially accommodates real shocks to the economy.

Outcomes from two alternative monetary policies

In this section, we simulate the path that nominal GNP growth would have followed under two alternative monetary policies. In particular, we compare simulated nominal GNP growth with what actually occurred during the 1986:4–92:4 period.
(See the appendix for a detailed description of each monetary policy and how each simulation was implemented.)

[Figure 1: Relationships Between Monetary Base, M2, and Nominal GNP Under Alternative Policies. Top panel: GNP-targeting rule; bottom panel: M2-targeting approach.]

A general outline of the two alternative policies is presented in Figure 1, which shows that they share many features. The premise in both is that the policies aim at the same ultimate goals, measured in terms of output growth and the inflation rate. Moreover, both policies are implemented through changes in the quantity of monetary base. The two policies differ, however, in their intermediate goals. The GNP-targeting rule, depicted in the top panel of Figure 1, alters the volume of monetary base to achieve a targeted value of nominal GNP growth. The link between the policy instrument and the intermediate target is base velocity growth. Under the M2-targeting approach, depicted in the bottom panel of Figure 1, changes in the volume of monetary base are aimed at achieving the midpoint of the M2 target range.

1 Over the period in question, actual Federal Reserve policy was formulated partly with an eye toward keeping M2 growth within preannounced target growth ranges, but also partly with an eye toward real and financial market conditions. Furthermore, policy was implemented primarily through adjustments in the federal funds rate. In our simulations, consistent with Friedman (1992), we assume that policy is implemented through adjustments in the monetary base and that it focuses on keeping M2 growth at the middle of the target growth range, to the exclusion of all other considerations.
With an M2 target, the link between policy instrument and intermediate target is the M2 money multiplier, the ratio of M2 to the monetary base. Our primary focus in both policy experiments is the behavior of nominal GNP growth. Consequently, it is necessary to establish the link between the policy instrument and nominal GNP growth. We follow McCallum in specifying the following model describing how nominal GNP growth is generated:

(1)   ΔY_t = a_0 + a_1 ΔY_{t–1} + a_2 ΔB_{t–1} + e_t,

where Y is the log of nominal GNP, B is the log of the monetary base, e_t represents random shocks, and Δ is the difference operator (that is, Δx_t = x_t – x_{t–1}). The variables in equation 1 are interpreted as rates of growth indexed by time. The error term in this model is an amalgam of various real shocks, such as aggregate supply shocks, aggregate demand shocks, money demand shocks, and so on, that affect the realized value of nominal GNP growth.

Why focus on nominal GNP growth when the ultimate goals of policy are in terms of output growth and the inflation rate? Nominal GNP growth is the sum of output growth and the inflation rate. Because nominal GNP is a summary measure of the two ultimate policy goals, a substantial literature has developed advocating nominal GNP targeting. By definition, if one knows the average growth rate of full-employment output, a nominal GNP growth rate target implies an inflation rate target. Or, alternatively, there is a nominal GNP growth target that corresponds directly to the natural rate of output growth and the target inflation rate. Nominal GNP targeting has some disadvantages relative to monetary targeting, however, the most obvious of which is that the monetary aggregates are available in a more timely manner than are the national income and product accounts.

2 The 3-percent target rate follows the work of McCallum, who selected this rate because it is close to the historical average of (trend) output growth.

Estimation.
The data for this study are quarterly observations of seasonally adjusted nominal GNP (Y ), the St. Louis definition of the monetary base adjusted for reserve requirement changes (B ), and seasonally adjusted M2. Equation 1 is estimated using data for the period 1955:1–92:4. The results from the nominal GNP growth equation are as follows (standard errors in parentheses):

(2)   ΔY_t = 0.0083 + 0.2864 ΔY_{t–1} + 0.3335 ΔB_{t–1}
             (0.002)   (0.084)           (0.120)

      adj. R² = 0.17     SEE = 0.010     BG = 1.25.

The estimation results, which are quite close to those of McCallum (1988), indicate that both lagged GNP growth and base growth are significantly related to current GNP growth. (Note, however, that a substantial fraction of the variation in GNP growth is left unexplained by this equation.) A Breusch–Godfrey (BG) test for serial correlation yields an F-statistic of only 1.25, indicating that we fail to reject the null hypothesis that no serial correlation is present in the residuals.

GNP-targeting rule simulations.

We use the GNP-targeting rule along with equation 2 to generate simulated growth rates for nominal GNP. Two versions of the GNP-targeting rule are used to simulate nominal GNP for the 1986–92 period; one version targets the log level of nominal GNP, whereas the other version targets the growth rate of nominal GNP. In the first simulation, the GNP-targeting rule presumes that full-employment output increases at a 3-percent annual rate each quarter.2 The target level of nominal GNP is stipulated to increase at the same 3-percent annual rate. The GNP-targeting rule includes a feedback term in which deviations from the target log level of nominal GNP affect the quantity of base growth. Accordingly, the GNP-targeting rule dictates that the Federal Reserve undertake open market operations to alter the volume of the monetary base. Figure 2 plots simulated and actual log-level nominal GNP, plus the target path under the GNP-targeting rule.
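A stylized version of this kind of simulation can be built directly on equation 2. The feedback coefficient (lam = 0.25) and the zero-shock default below are simplifying assumptions of mine, not the authors' appendix specification; the actual simulations feed the historical shocks through the estimated equation.

```python
# Stylized sketch of a growth-rate GNP-targeting simulation built on
# equation 2. Assumptions: feedback coefficient lam = 0.25 and no shocks
# by default; these are illustrative choices, not the article's setup.

A0, A1, A2 = 0.0083, 0.2864, 0.3335  # point estimates from equation 2 (quarterly)
TARGET = 0.03 / 4                     # 3-percent annual target as a quarterly rate

def simulate(n_quarters, dy0, db0, lam=0.25, shocks=None):
    """Simulate dY_t = a0 + a1*dY_{t-1} + a2*dB_{t-1} + e_t, with base
    growth adjusted each period toward the target growth rate."""
    shocks = shocks or [0.0] * n_quarters
    dy, db = [dy0], [db0]
    for e in shocks:
        dy.append(A0 + A1 * dy[-1] + A2 * db[-1] + e)
        # Feedback: raise (lower) base growth when GNP growth runs
        # below (above) the target growth rate.
        db.append(db[-1] + lam * (TARGET - dy[-1]))
    return dy

path = simulate(24, dy0=0.015, db0=0.02)  # start from roughly 6% annual GNP growth
# With no shocks, simulated growth converges toward the quarterly target.
```

Passing a series of drawn or historical shocks in place of the zero default reproduces the kind of volatility the authors emphasize: the rule only partially accommodates real shocks, so large disturbances move simulated growth well away from target before the feedback pulls it back.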
The simulation results suggest that adopting the GNP-targeting rule would have been successful in two respects. One is that such a rule effectively slows nominal GNP growth. The other is that variation around the presumed 3-percent target nominal GNP path is reduced.3 The slowing of simulated nominal GNP growth is not smooth, however. In particular, simulated nominal GNP falls sharply relative to actual GNP in the period 1990:2–1991:2. This sharp deceleration in simulated nominal spending growth suggests that adoption of McCallum’s GNP-targeting rule would have resulted in a more severe recession.

In a second simulation experiment, we assume that past deviations from the level of nominal GNP are forgiven; that is, the GNP-targeting rule stipulates that base growth responds (with a lag) to deviations from the target growth rate of nominal GNP.4 The objective each period is not a particular (log) level of nominal GNP but a growth rate. Figure 3 plots actual and simulated nominal GNP growth for the case in which the growth-rate version of the GNP-targeting rule is used to generate the simulated path. Simulated nominal GNP grew at an average 2.5-percent annual rate, while actual GNP increased at an average 5.9-percent annual rate. As Figure 3 shows, the simulated growth rate is always below the actual growth rate of nominal GNP. The plot further suggests that simulated nominal GNP growth would have been more volatile than actual nominal GNP growth. With reference to a 3-percent target growth rate, the RMSD for simulated nominal GNP growth is 0.7 percent, compared with 0.6 percent calculated using actual GNP growth.

[Figure 2: Nominal GNP Path, 1986:4–92:4, Using the GNP-Targeting Rule with a 3-Percent Target (Level)]

[Figure 3: Nominal GNP Growth Rate Path, 1986:4–92:4, Using the GNP-Targeting Rule with a 3-Percent Target (Growth Rate)]
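The RMSD statistic used throughout these comparisons is simply the root-mean-squared gap between a path and its target; a minimal sketch (the function name is ours):

```python
import math

def rmsd(path, target):
    """Root-mean-squared deviation of a simulated or actual path from its
    target path; both arguments are equal-length sequences of growth rates
    (or log levels), and the result is in the same units."""
    gaps = [(p - q) ** 2 for p, q in zip(path, target)]
    return math.sqrt(sum(gaps) / len(gaps))
```

A path that alternates 1 percentage point above and below its target has an RMSD of 1 percentage point, regardless of the sign of the misses.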
The implication, therefore, is that applying the GNP-targeting rule would have been somewhat less successful than what actually occurred in terms of average variation around the 3-percent target growth rate. The evidence thus suggests that adopting the growth-rate version of the GNP-targeting rule would have resulted in slower growth and more variability around the 3-percent target path than what the economy actually experienced during the 1986–92 period. The increased variability of GNP growth under the GNP-targeting rule is largely the result of two factors: a short simulation period and sizable real shocks hitting the U.S. economy. Note that the largest difference between what actually occurred and what the GNP-targeting rule generates occurs in fourth-quarter 1990. This date corresponds with the run-up in oil prices that occurred between August and October 1990. During this period, actual nominal GNP growth did not fall as sharply as the simulation suggests it would have under the GNP-targeting rule. Overall, our simulation results suggest that a GNP-targeting rule would have slowed nominal GNP growth compared with what actually occurred.

3 On average, the GNP-targeting rule produces a nominal GNP path that increases at a 2.5-percent annual rate during the period 1986:4–1992:4, compared with actual nominal GNP, which grew at a 5.9-percent annual rate. Relative to the target level of GNP, the root-mean-squared deviation (RMSD) is 0.035 under the GNP-targeting rule but is 0.051 using the actual history of nominal GNP.

4 The term forgiven refers to a policy in which past deviations from the target level are not relevant for current policy. In other words, the policymaker is forgiven for these past misses from the target level.
This finding is not surprising in that many people would expect slower nominal GNP growth if the Federal Reserve abruptly switched to a zero-inflation goal in an environment with a positive inflation rate. In addition, our results suggest that compared with what actually occurred, a GNP-targeting rule would have been moderately more successful in reducing variation around a 3-percent nominal GNP level target, but somewhat less successful in reducing variation around a growth-rate target. Under either rule, there is a deceleration in GNP growth in 1990 that is much sharper than what actually occurred, suggesting that these rules, had they been implemented, could well have resulted in a more severe recession.

M2-targeting approach simulations. Next we simulate GNP using the M2-targeting approach. In this setup, we compare nominal GNP growth, M2 growth, and monetary base growth. Our objective is to gain some insight into whether monetary policy aimed (exclusively) at hitting the midpoint of the target M2 growth cones would have been sufficient to avoid the sharp deceleration of nominal GNP growth that occurred in 1990 and to instigate a stronger recovery during the 1991–92 period. This simulation exercise addresses the criticisms that slow M2 growth was a major factor in the recent downturn and slow recovery. The simulations use the fourth-quarter simulated values of M2 to establish the target growth range for the next year.

5 In the monetary targeting literature, what we call “target drift” is sometimes called “base drift.”

6 In this first set of simulations, the forecast for the M2 money multiplier is simply the previous quarter’s value. The key assumption is that the money multiplier is not related to the monetary base. This issue is discussed in the box entitled “Caveats to Interpreting the Results.”
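The forecasting assumption in note 6 can be made concrete. In logs, M2 equals the money multiplier plus the monetary base, so under a random-walk multiplier forecast the base setting that aims at an M2 target, and the resulting miss, follow directly. A minimal sketch (function names are ours):

```python
def base_for_m2_target(m2_target, mm2_prev):
    """Log base that hits the log M2 target if the multiplier equals last
    quarter's value (the random-walk forecast): B_t = M2*_t - mm2_{t-1}."""
    return m2_target - mm2_prev

def m2_miss(m2_target, mm2_prev, mm2_actual):
    """Realized M2 minus target; with B_t set as above, the miss reduces
    to the multiplier forecast error mm2_t - mm2_{t-1}."""
    base = base_for_m2_target(m2_target, mm2_prev)
    return (base + mm2_actual) - m2_target
```

If the multiplier turns out unchanged, the target is hit exactly; any surprise in the multiplier passes one-for-one into the M2 miss.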
The Federal Reserve has the option every fourth quarter to establish its target growth ranges using either the realized fourth-quarter value of M2 or the fourth-quarter target value consistent with the midpoint of the previous year’s target range. In the former case, in which the target “drifts,” the Federal Reserve does not try to make up for missing the previous year’s M2 target. The latter case expressly requires the Federal Reserve to stay on a course directly linked to previous targets. Thus, “no target drift” is exhibited in the latter policy course.5

In the first set of simulations, we examine the case in which target drift is allowed to affect the M2 target growth ranges. Permitting target drift in these simulations is consistent with the Federal Reserve’s historical procedure. The target-drift approach also seems to be in line with Friedman’s and Buchanan and Fand’s prescriptions for Federal Reserve policy.6 The target growth ranges and their midpoints are presented in Table 1. As can be seen, the midpoints have generally been ratcheted down over the 1987–92 period, albeit modestly. This is consistent with the notion that the M2-targeting approach seeks to gradually lower trend money growth and, hence, the inflation rate.

Under certain conditions, the longer-run goals of the M2-targeting approach and the GNP-targeting rule would exactly coincide. If average M2 velocity growth is zero, the M2-growth midpoint could be set equal to 3 percent. Under these velocity growth assumptions, the M2-targeting rule examined here presumes that Federal Reserve policy seeks to slow nominal GNP growth at a less dramatic pace than the GNP-targeting rule. For this reason, a direct comparison of simulations from the M2-targeting approach and the GNP-targeting rule is not made in this article. Given the short simulation period, the objectives of the two policy approaches are simply too different, making a direct comparison virtually meaningless.
Instead, the simulations from the M2-targeting approach will provide some measure of how successful a “soft-landing” strategy might have been. We begin our examination of the M2-targeting approach by looking at how nominal GNP growth would have evolved under this monetary policy.

Table 1
Federal Reserve M2 Target Range (Percent)

Year    Upper bound    Lower bound    Midpoint
1987    8.5            5.5            7.0
1988    8.0            4.0            6.0
1989    7.0            3.0            5.0
1990    7.0            3.0            5.0
1991    6.5            2.5            4.5
1992    6.5            2.5            4.5

Figure 4 plots the growth rates of actual nominal GNP, together with the growth rate of nominal GNP generated under the M2-targeting approach. In contrast to the evidence from the GNP-targeting rule, simulated nominal GNP growth using the M2-targeting approach looks quite similar to actual nominal GNP growth. Simulated nominal GNP growth follows the same cycle that actual nominal GNP growth followed during the 1986–92 period, with a somewhat more exaggerated downward swing evident in the simulation. The average annual growth rate of simulated nominal GNP growth is 6.1 percent, only slightly higher than the 5.9-percent annual rate actually recorded. From Figure 4, we see that the higher average growth rate comes primarily from higher-than-actual growth in the 1987–89 period. The plots indicate, however, that after 1990, nominal GNP growth would have been stronger in 1991:3 and 1992:1, but weaker in 1992:2. Overall, the simulations indicate three similar features: nominal GNP growth would have fallen just as much as actual nominal GNP growth did in 1990, the average growth rate of simulated nominal GNP is nearly identical to what actually occurred in 1991–92, and the stop-and-go pattern present in actual nominal GNP growth in 1991–92 is also present in simulated nominal GNP growth.
Hence, there is little evidence in these simulations to support the argument that GNP growth would have been substantially stronger after 1990, the recession and recovery period, had the Federal Reserve followed an M2-targeting approach.

Figure 5 plots the actual and simulated path for M2 growth during the simulation period. From Figure 5 we see that, with a couple of exceptions, the M2-targeting approach results in M2 growth that is roughly similar to what actually occurred. There is one episode during 1991 in which M2 growth experienced a dramatic swing under the M2-targeting approach. From Figure 5, one could infer that the M2-targeting approach may not have resulted in M2 growth that would have been substantially different from its actual behavior. Summary statistics largely support this inference. The standard deviation is 4 percent under the M2-targeting approach, while the historical standard deviation is much lower, 2.3 percent. On average, M2 would have grown at a 4.4-percent annual rate if the Federal Reserve had adopted this version of the M2-targeting approach, slightly higher than the 4.1-percent rate actually recorded. Critics of the Federal Reserve argue that deficiencies in M2 growth relative to its target were policy mistakes. One can judge whether a policy aimed at hitting the midpoint of the target ranges would have been superior to what actually occurred by plotting the outcome and the midpoint target line. This is done in Figure 5a. Note that the target drift in the simulated path of M2 (measured in log levels) differs from the target drift actually experienced.
Consequently, the plot uses one target line for the actual path of M2 and another for the simulated path. All comparisons are based on the difference between actual or simulated M2 and the corresponding target value. The plots do not strongly indicate that one path is substantially better than the other in terms of being closer to the target value. The RMSD is 0.95 percent for simulated M2 growth and 0.90 percent for actual M2 growth. The evidence suggests that the Federal Reserve would have done slightly worse in minimizing variation around its M2 target path had the Fed adopted the M2-targeting approach.7

In summary, the M2-targeting approach would have resulted in slightly higher growth rates for nominal GNP growth and M2 growth. However, the evidence does not support the claim that nominal GNP growth would have been substantially stronger during the 1990–92 period had the Federal Reserve simply focused exclusively on hitting its M2 target growth rate. In addition, because of forecasting errors permitted in this simulation, it is not clear that the Federal Reserve would have been substantially more successful in hitting its M2 target growth rates than it actually was.

[Figure 4: Nominal GNP Growth Rate Path, 1986:4–92:4, Using the M2-Targeting Rule]

[Figure 5: M2 Growth Rate Path, 1986:4–92:4, Using the M2-Targeting Approach]

[Figure 5a: M2 Path, 1986:4–92:4, Using the M2-Targeting Approach (Target Lines with Target Drift)]

7 One potential criticism of the M2-targeting approach as implemented here is the method used to forecast the M2 money multiplier. Recall, we use last quarter’s actual value of the M2 money multiplier as the forecast of this quarter’s value. Since M2t = mm2t + Bt (where mm2 denotes the log value of the M2 money multiplier), the variability in the M2 money multiplier is solely responsible for our finding that the M2-targeting approach would have been less accurate in hitting the M2 target than what actually occurred. We address forecasting concerns and discuss their impacts on nominal GNP growth in the next section.

Some extensions

We now present some extensions to the basic simulations considered above. In particular, we reconsider the GNP-targeting rule when the target growth path is allowed to more closely mimic the soft landing sought by the Federal Reserve. We also consider two extensions to the M2-targeting approach. First, we consider a case in which the Federal Reserve eliminates target drift. This extension is motivated by Feldstein’s (1992) call for the Fed to “make up” for past deficiencies in M2 growth. Second, we examine the case in which the Federal Reserve perfectly hits its M2 target growth path, thus abstracting from M2-control problems.

The GNP-targeting rule with a softer landing. One might view the GNP-targeting rule as being too harsh, in the sense that the changeover to the 3-percent nominal GNP growth target is too abrupt. Suppose the Federal Reserve seeks a softer landing to its zero-percent inflation goal. How would the simulations differ if the target growth rate for nominal GNP were gradually lowered? This alternative is motivated largely by a reading of the FOMC minutes during the simulation period: the Federal Reserve was seeking a gradual approach toward its long-run goals, rather than the abrupt move toward zero inflation. We assume that the nominal GNP growth target is equal to the midpoint of the M2 target growth range.8 As we did in the GNP-targeting simulations, we examine cases in which the level and growth rate of nominal GNP serve as alternative targets.
Figure 6 plots the simulated and actual values of nominal GNP for the soft-landing approach to the GNP-targeting rule. The target level of nominal GNP is included in the plot for reference.

[Figure 6: Nominal GNP Path, 1986:4–92:4, Using the GNP-Targeting Rule with a Soft-Landing Target (Level)]

Figure 6 shows that simulated nominal GNP would have been virtually identical to actual nominal GNP until 1990. Beginning in 1990, simulated nominal GNP declines sharply toward the target level until second-quarter 1991, when simulated and actual nominal GNP begin once again to increase at about the same rate. Under the soft-landing approach to the GNP-targeting rule, simulated nominal GNP would have increased, on average, at a 4.8-percent annual rate from 1986 through 1992. (Recall that actual nominal GNP increased at a 5.9-percent average annual rate.) The RMSDs are 2.6 percent and 1.5 percent for simulated and actual nominal GNP, respectively. Thus, the evidence suggests that actual nominal GNP was closer to the target path than nominal GNP would have been under a GNP-targeting rule aimed at a soft landing.

[Figure 7: Nominal GNP Growth Rate Path, 1986:4–92:4, Using the GNP-Targeting Rule with a Soft-Landing Target (Growth Rate)]

Figure 7 plots simulated nominal GNP growth when the target is the soft-landing nominal GNP growth rate. In addition, actual nominal GNP growth and the target line are plotted in Figure 7. Figure 7 reveals that simulated nominal GNP growth falls more sharply in the third and fourth quarters of 1990 than does actual GNP growth. The average growth rate of simulated nominal GNP is 5.3 percent, about one-half a percentage point below the average growth rate of actual nominal GNP. The RMSD for simulated nominal GNP growth is 0.7 percentage points, higher than the RMSD of 0.5 percentage points using actual nominal GNP growth. This evidence again suggests that actual nominal GNP growth would have been closer, on average, to the soft-landing target than simulated nominal GNP growth would have been using the GNP-targeting rule.

In short, the extensions result in a much sharper decline in simulated nominal GNP in 1990 compared with what actually occurred. In addition, the evidence indicates that the average deviation from target nominal GNP is smaller if calculated using actual nominal GNP rather than simulated GNP. This finding is robust whether one chooses a level or a growth-rate target for nominal GNP. As with the 3-percent version of the GNP-targeting rule, the evidence suggests that a much sharper recession would have occurred. The evidence further indicates that the GNP-targeting rule with the soft-landing target would have been less successful in terms of hitting the target paths than what actually occurred.

8 Simulated paths for monetary base and M2 are available from the authors upon request. There are any number of ways to identify the target growth rate for nominal GNP. The target growth rate is assumed to gradually approach the long-run rate. This means that the constant term and the target value in the feedback expression must be changed from the case in which nominal GNP was assumed to grow at a 3-percent annual rate each quarter. This identification is simple to implement. However, it presumes that the Federal Reserve believed that M2 velocity growth would be zero over the simulation period.

M2 targeting revisited. We now consider two modifications to the M2-targeting approach developed above. First, we eliminate drift in the M2 target path. In this case, we assume that the Federal Reserve uses its fourth-quarter target value as the starting point for the next year’s target path.
The no-drift approach was suggested by Feldstein in his prescription for monetary policy.9 By eliminating target drift, past deficiencies in monetary policy are not forgiven. Second, we examine the situation in which the Federal Reserve has perfect foresight with respect to forecasts of the M2 money multiplier. This assumption removes the forecast error present in our earlier simulations. Moreover, it is straightforward to show that the perfect-foresight assumption is a strong version of the no-drift case.10

How much does nominal GNP growth change when the M2-targeting approach is implemented without target drift? Under the no-drift case with random-walk forecasts of the M2 money multiplier, the average growth of nominal GNP over the 1986–92 period is 6.3 percent, with a standard deviation of 2.7 percent. Simulated nominal GNP growth is 0.2 percentage points higher, on average, when drift in the M2 target is eliminated.

[Figure 8: Nominal GNP Growth Rate Path, 1986:4–92:4, Using the M2-Targeting Approach (No Target Drift)]

Figure 8, which plots actual and simulated nominal GNP growth under the M2-targeting approach without target drift, shows that the simulated path is nearly identical to that generated with target drift. In particular, the slowdown in 1990 and the moderate, uneven growth during 1991–92 are present even when target drift is eliminated.

9 The issue of target drift versus no target drift has a long history in the debate over monetary policy.

10 Under the perfect-foresight assumption, target drift is implicitly eliminated. Because the Federal Reserve hits its target every fourth quarter under perfect foresight, the starting point for next year’s target path is, by construction, the fourth-quarter target value. Thus, the perfect-foresight case is identical to jointly assuming no drift and no forecast error.
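The drift/no-drift distinction is just a choice of anchor for next year's target path. A small sketch of the bookkeeping, with everything in log levels (names and numbers are illustrative, not the article's):

```python
def next_year_anchor(realized_q4, target_q4, drift):
    """Log-level starting point for next year's M2 target path.

    With target drift, last year's miss is forgiven and the new cone is
    anchored at realized fourth-quarter M2; with no drift, it is anchored
    at the previous year's fourth-quarter target value.
    """
    return realized_q4 if drift else target_q4

def quarterly_targets(anchor, midpoint_annual):
    """Four quarterly log-level targets growing at the midpoint annual rate."""
    return [anchor + midpoint_annual / 4 * q for q in range(1, 5)]
```

After a year in which M2 ends below target, the drift path restarts from the lower realized level, while the no-drift path requires policy to make the shortfall up.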
One subtle difference occurs in second-quarter 1991. With target drift present, nominal GNP growth would have been substantially below what actually occurred (see Figure 4). In Figure 8, however, simulated nominal GNP growth in second-quarter 1991 would have been almost equal to what actually occurred. Thus, the evidence shows that eliminating target drift does generate somewhat stronger nominal GNP growth after 1990 than is generated when target drift is present. Any substantive gain, however, appears limited to second-quarter 1991.

Figure 9 plots the log level of M2 under the M2-targeting approach with target drift eliminated, along with the target path and actual M2. The vertical distance between the target line and the outcome from the M2-targeting approach in Figure 9 is due solely to forecast error of the M2 money multiplier. Figure 9 shows that the M2-targeting approach yields an M2 path consistently below the target path since 1990. The implication is that the random-walk forecasts of the M2 money multiplier consistently overpredict the multiplier value.11 In almost every quarter, simulated M2 is closer to the target value than actual M2. Despite the run of overpredicting the M2 money multiplier, simulated M2 would have been closer, on average, to a no-drift target than what actually occurred. To get an idea of the extent of the difference between the no-drift target value and the actual quantity of M2, by fourth-quarter 1992, M2 was $3,495.4 billion, whereas the no-drift target value would have been $3,789.9 billion. Hence, the actual value of M2 was roughly $294.5 billion, or 8.4 percent, below what the target would have been in the absence of target drift.12

Finally, how would nominal GNP have grown if one could perfectly forecast the M2 money multiplier? Here we use the actual value of the M2 money multiplier, implicitly assuming that this path is independent of the path for monetary base.
Under the M2-targeting approach with perfect foresight, the average growth rate of nominal GNP is 6.5 percent, 0.6 percentage points higher than what actually occurred.

[Figure 9: M2 Path, 1986:4–92:4, Using the M2-Targeting Approach (No Target Drift)]

[Figure 10: Nominal GNP Growth Rate Path, 1986:4–92:4, Using the M2-Targeting Approach (Perfect-Foresight Assumption)]

Figure 10 plots nominal GNP growth generated under the perfect-foresight assumption, along with actual nominal GNP growth. In general, the path of simulated nominal GNP growth is roughly the same as what actually occurred. Even under the perfect-foresight assumption, simulated nominal GNP growth has the erratic stop-and-go pattern that characterizes actual GNP growth. In sum, nominal GNP growth with the perfect-foresight assumption does only modestly better than what actually occurred. Contradicting the critics, the evidence provided here does not support the notion that the M2-targeting approach would have resulted in a smaller decline in nominal GNP in 1990 than what actually occurred or that simulated nominal GNP growth would have experienced a smoother, stronger recovery compared with what actually occurred.

11 In a related article, John Duca (1993) suggests that standard M2 money demand functions systematically overpredicted M2 during the 1990–92 period. One interpretation is that some real shock was influencing M2. In our setup, such a shock would be picked up as changes in the M2 money multiplier.

12 The astute reader may wonder why the sizable difference between actual and simulated M2 does not translate into a greater distinction between actual and simulated nominal GNP growth in Figure 8. The relationship between M2 growth and nominal GNP growth is M2 velocity growth. Note that M2 velocity growth is the difference between monetary base velocity growth and M2 money multiplier growth. With monetary base growth present explicitly in the nominal GNP growth equation, deviations from trend in monetary base velocity growth are present in the simulation. To the extent that deviations from trend in both M2 velocity growth and M2 money multiplier growth are negatively correlated, movements in M2 growth may not translate into movements in nominal GNP growth.

Summary and conclusions

We present simulations in this article that correspond to two alternative monetary policies proposed by critics of the Federal Reserve. We focus on how nominal GNP under each of these two policies would have behaved compared with what actually occurred. As such, the simulated paths for nominal GNP provide us with a measure of the costs and benefits of each strategy compared with what the U.S. economy actually experienced during the 1986–92 period.

We offer two main conclusions. First, our results suggest that a GNP-targeting rule of the type advocated by Bennett McCallum would have been effective in slowing nominal GNP growth relative to what was experienced between 1986 and 1992. The evidence also suggests, however, that such a GNP-targeting rule would have been less successful in terms of minimizing variability around the target value of nominal GNP. Indeed, except for the case in which the Federal Reserve targets the level of nominal GNP increasing at a fixed 3-percent annual rate, deviations from the target path are smaller for actual nominal GNP than what would have been generated under the GNP-targeting rule. The apparent cause behind nominal GNP’s bumpy path is a series of real shocks influencing the economy.
We interpret these findings as a first-pass attempt to measure the economic costs, in terms of business-cycle fluctuations, that a policymaker faces by adopting the GNP-targeting rule. The benefit of the rule is that the economy more quickly achieves its long-run goal of zero inflation. The costs, at least over the 1986–92 period, include slower growth and moderately greater volatility around nominal GNP target values and, possibly, a much steeper recession than that which actually occurred.

Our second conclusion is that under a monetary policy in which the Federal Reserve seeks only to hit the midpoint of its annual M2 target ranges, nominal GNP growth would have been roughly the same as that which actually occurred. Our simulations reveal that the average growth rate of GNP would have been only between 0.1 and 0.6 percentage points higher (depending on the assumptions underlying the simulations) than what actually occurred. Critics of recent monetary policy fault the Federal Reserve’s failure to achieve its M2 targets for the evolution of economic activity in the 1991–92 period. Our results show that hitting the midpoints of the M2 target ranges might well not have materially altered either the reduction in nominal GNP in 1990 or the moderate, stop-and-go pattern of nominal GNP growth experienced in 1991–92. Indeed, this outcome suggests that the culprit was not Fed actions, but real shocks affecting nominal GNP growth that the M2-targeting approach would largely have been unable to offset. In this sense, our simulations suggest that the slow growth of M2 is not the sole reason for the slow nominal GNP growth since 1986.

Our results explore issues raised by critics of Federal Reserve policy. Those who advocate a policy more oriented toward achieving zero inflation get a glimpse of what the implied costs are: slower nominal GNP growth but also greater volatility in nominal GNP growth for periods as long as twenty-five quarters.
For those who want more robust monetary growth, specifically aimed at hitting the M2 target midpoints, the results show that very little would have been achieved in terms of promoting faster nominal GNP growth. A question for future research is whether this episode represents the typical monetary policy contribution to nominal GNP growth or whether the 1986–92 period was an aberrant one in some sense.

Caveats to Interpreting the Results

For the purposes of our research, we assume that the Federal Reserve uses the monetary base as the instrument of monetary policy beginning in the fourth quarter of 1986. Indeed, the equations in this article treat history as if the monetary base were the exogenous policy variable in determining nominal GNP growth between 1954 and 1992. Policy history, however, suggests that the Federal Reserve did not use the monetary base as its primary instrument during the postwar period. While the simulations follow the methodological approach adopted by McCallum (1987, 1988), important caveats could affect the results presented in this article.

The Lucas critique

In his criticism of econometric policy evaluation, Robert Lucas (1972) demonstrated how changing monetary policy rules would probably change the parameter estimates in reduced-form equations. The Lucas critique applies to both our nominal GNP growth equation and, implicitly, to our M2 money multiplier forecasts. We assume that equation 2 is not affected when monetary policy switches from its (average) postwar behavior to the base rule or the M2-targeting approach. The fact that the equation is statistically stable over the 1986–92 period is not sufficient to rule out the possibility that the parameter estimates in equation 2 would change due to a change in the policy rule. The Lucas critique casts doubt over the simulated paths for nominal GNP growth.
In partial defense of our approach, it should be noted that McCallum estimates several atheoretical models and some structural models to consider the robustness of the rule. Overall, the outcome of the rules-based policy is consistent across a variety of models. He (appropriately) recognizes that the parameter estimates would not be the same under alternative policy rules but that such simulations provide a useful starting point.

The money multiplier forecasting equation

We assume that the path of the money multiplier is independent of the monetary base. Not only do we assume that changes in monetary policy do not affect the reduced-form model of the M2 money multiplier, we further assume that movements in the monetary base do not affect the multiplier. Work by Daniel Thornton and Michele Garfinkel (1991) suggests that the money multiplier may be sensitive to changes in the monetary base. If true, our simulations may have been affected by changes in the monetary base. Even so, our conclusions would be changed only if the M2 money multiplier would have been much lower as a result of adopting these policies. Under the M2-targeting approach, monetary base would grow at a faster rate to offset the decline in the money multiplier. Accordingly, the faster monetary base growth in equation 2 implies that nominal GNP growth would be higher. Interestingly, Thornton and Garfinkel’s results suggest a positive association between changes in the monetary base and changes in the M2 money multiplier. Since the M2-targeting approach results in faster monetary base growth in our simulations, Thornton and Garfinkel’s results suggest that the speedup in base growth would be moderated by faster growth in the M2 money multiplier.

References

Bomhoff, Eduard J. (1977), “Predicting the Money Multiplier: A Case Study for the U.S. and the Netherlands,” Journal of Monetary Economics 3 (July): 325–45.

Johannes, James M., and Robert H.
Rasche (1987), Controlling the Growth of Monetary Aggregates (Boston: Dordercht and Lancaster). Buchanan, James M., and David I. Fand (1992), “Monetary Policy: Malpractice at the Fed,” Wall Street Journal, December 21, Southwest Edition, A8. Lucas, Robert E., Jr. (1972), “Expectations and the Neutrality of Money,” Journal of Economic Theory 4 (April): 103 –24. Brunner, Karl (1968), “The Role of Money and Monetary Policy,” Federal Reserve Bank of St. Louis Review, July, 3 – 8. Duca, John V. (1993), “RTC Activity and the Missing M2,” Economic Letters, 41(1): 67–72. Feldstein, Martin (1992), “Goose the Money Supply,” Wall Street Journal, February 3, Southwest Edition, A12. Friedman, Milton (1992), “Too Tight for a Strong Recovery,” Wall Street Journal, October 23, Southwest Edition, A12. Hafer, R. W., and Scott E. Hein (1984), “Predicting the Money Multiplier: Forecasts from Component and Aggregate Models,” Journal of Monetary Economics 14 (November): 375 – 84. McCallum, Bennett T. (1988), “Robustness Properties of a Rule for Monetary Policy,” Carnegie–Rochester Conference Series on Public Policy 29: 173 –204. ——— (1987), “The Case for Rules in the Conduct of Monetary Policy: A Concrete Example,” Federal Reserve Bank of Richmond Economic Review 73 (September/October): 10 –18. Meltzer, Allan H. (1984), “Overview,” in Price Stability and Public Policy, Federal Reserve Bank of Kansas City, 209 –22. Thornton, Daniel L., and Michele Garfinkel, (1991), “The Multiplier Approach to the Money Supply Process: A Precautionary Note,” Federal Reserve Bank of St. Louis Review, July/August, 47– 64. Hafer, R.W., Joseph H. Haslag, and Scott E. Hein (1992), “Evaluating Monetary Base Targeting Rules,” Federal Reserve Bank of Dallas Working Paper no. 9104, April. 26 Federal Reserve Bank of Dallas Appendix In this appendix, we describe the two alternative monetary policies examined in this article. 
In particular, the approaches used to simulate the GNP-targeting rule and the M2-targeting approach are discussed in detail.

The GNP-targeting rule. The GNP-targeting rule used in our simulations is similar to the one proposed by Bennett McCallum (1987, 1988). The GNP-targeting rule is written as

(A.1)  ∆B_t = 0.00739 – (1/16)[(Y_{t–1} – Y_{t–17}) – (B_{t–1} – B_{t–17})] + λ(Y*_{t–1} – Y_{t–1}),

where ∆B is the growth rate of the monetary base (B is the log of the monetary base), Y is the log of nominal GNP, Y* is the target value for (the log of) GNP, and λ (0 ≤ λ ≤ 1) is a parameter relating the current period's base growth to past deviations from the target growth rate of nominal GNP. Following McCallum, we assume that potential output increases at a constant 3-percent annual rate, roughly the historical trend rate of real GNP growth. In a noninflationary environment, Y* increases at the same 3-percent annual rate.

Equation A.1 has three components. The constant term, 0.00739, stipulates that the base should increase at a quarterly value equal to a 3-percent annual rate. The second component makes base growth respond to changes in velocity growth, an aspect of the GNP-targeting rule that has also been advocated by Allan Meltzer (1984). More specifically, each percentage-point increase in the sixteen-quarter moving average of velocity growth is matched by a percentage-point decrease in base growth. Lastly, the base responds to differences between realized nominal GNP and its target: there is a λ-percentage-point increase in base growth for each percentage point that nominal GNP is below the previous quarter's target of GNP, all else being equal. In our simulations, the nominal GNP target is defined in both levels and growth rates. In the growth rate version, ∆Y*_{t–1} – ∆Y_{t–1} replaces the terms inside the parentheses in the feedback component.
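The mechanics of equation A.1 can be sketched in a few lines of code. This is an illustrative reading only, not the article's actual program; the function name, the sample series, and the choice of λ = 0.25 are hypothetical.

```python
# Illustrative sketch of the base-growth rule in equation A.1.
# All series are in logs; the inputs below are hypothetical, not the article's data.

def mccallum_base_growth(Y, B, Y_star_prev, lam=0.25):
    """Quarterly base growth under equation A.1.

    Y, B        : sequences of log nominal GNP and log monetary base with at
                  least 17 past observations (index -1 is quarter t-1, -17 is t-17)
    Y_star_prev : last quarter's (log) target level of nominal GNP
    lam         : feedback parameter, 0 <= lam <= 1 (0.25 is a placeholder)
    """
    # Constant term: a 3-percent annual growth rate expressed quarterly.
    constant = 0.00739
    # Velocity correction: sixteen-quarter average growth of velocity (Y - B).
    velocity_adj = ((Y[-1] - Y[-17]) - (B[-1] - B[-17])) / 16.0
    # Feedback: raise base growth when last quarter's GNP fell short of target.
    feedback = lam * (Y_star_prev - Y[-1])
    return constant - velocity_adj + feedback
```

A positive gap between target and realized GNP raises base growth by λ times the gap, while faster trend velocity growth lowers it one-for-one, exactly as the three components of A.1 describe.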
The GNP-targeting rule with a growth rate target differs from McCallum, who specifies that deviations from last quarter's target level affect the current quarter's base growth. When base growth responds to deviations from nominal GNP's target growth rate, the implied notion of price stability for monetary policy's goal is an economy in which the average rate of inflation is zero. In McCallum's level specification of the GNP-targeting rule, a stronger notion of price stability, a constant long-run price level, is adopted. With equation A.1, one more equation is needed to implement the GNP-targeting rule: the (stochastic) law of motion for nominal GNP. We assume that movements in nominal GNP are driven by equation 2 in the text.

The M2-targeting procedure. Following Friedman's (1992) suggestion, the monetary base is used as the instrument to hit the midpoint of the Federal Reserve's stated target ranges for the M2 aggregate. The M2-targeting approach is implemented by using a link between the monetary base and M2. In the simple money multiplier model (Brunner 1968), M2 is represented as

(A.2)  M2_t = B_t + mm2_t,

where M2 is the log of the M2 aggregate, B is the log of the monetary base, and mm2 is the log of the M2 money multiplier (M2/B). Equation A.2 indicates that a desired M2 level can be achieved simply by supplying the quantity of monetary base consistent with the M2 target, given the M2 money multiplier. In practice, however, the Federal Reserve may miss the M2 target because the value of the money multiplier is not known with certainty at the time it determines the quantity of monetary base.
The M2 target value and the practical need to forecast the money multiplier suggest rewriting equation A.2 as

(A.2′)  B_t = M2^T_t – mm2^f_t,

where M2^T_t is the target (log) level for M2 this quarter, and mm2^f_t is a forecast of this quarter's money multiplier. How closely the policymaker hits the M2 target depends in large part on how accurately the multiplier can be predicted.

To implement the M2-targeting policy, it is necessary to identify two values: the M2 target value and the forecast of the M2 money multiplier. The target value of M2 is derived using the midpoint of the Federal Reserve's announced target range. The starting point is the value of M2 in the fourth quarter of 1986. Because the target range is updated each year in the fourth quarter, an issue also arises regarding the treatment of starting points in the fourth quarter of each successive year. Two approaches specify the first-quarter target value of M2 for each year:

(A.3)  M2^T_t = M2_{t–1} + g_t, or

(A.4)  M2^T_t = M2^T_{t–1} + g_t,

where g is the quarterly value of the midpoint of the target growth range. Equation A.3 specifies the first-quarter target using the actual value of M2 in the fourth quarter of the previous year. Since actual M2 can differ from its target value, A.3 permits deviations from the fourth-quarter target to persist, thus introducing target drift into the policy. In contrast, equation A.4 specifies the first-quarter target value of M2 using the target value from the preceding fourth quarter. This approach requires that deviations from the target path not be permanent. Because fourth-quarter deviations from the target value are not passed on to the first-quarter target in A.4, this latter specification is referred to as the no-target-drift case. Both A.3 and A.4 are used in this article to identify the target path for M2 in the simulations.
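The two target-path conventions (A.3 and A.4) and the base setting implied by A.2′ can be sketched as follows. This is an illustrative reconstruction; the function names and the numerical values are hypothetical, and all quantities are in logs, as in the appendix.

```python
# Illustrative sketch of the M2 target paths (equations A.3 and A.4) and the
# implied base setting (equation A.2'). All values are logs and hypothetical.

def m2_target_path(m2_actual_q4, m2_target_q4, g, drift=True):
    """First-quarter (log) M2 target for a new year.

    m2_actual_q4 : actual log M2 in the preceding fourth quarter
    m2_target_q4 : targeted log M2 for the preceding fourth quarter
    g            : quarterly midpoint of the announced target growth range
    drift=True reproduces A.3 (target drift); False reproduces A.4 (no drift).
    """
    start = m2_actual_q4 if drift else m2_target_q4
    return start + g

def base_for_target(m2_target, mm2_forecast):
    """Equation A.2': the log monetary base that hits the M2 target,
    given a forecast of the log money multiplier."""
    return m2_target - mm2_forecast
```

If actual fourth-quarter M2 undershot its target, the A.3 path carries that miss forward, while the A.4 path snaps back to the announced target line; the base is then set each quarter by subtracting the multiplier forecast from the target level.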
Once the path for M2 is identified, the money multiplier remains to be forecast. In general, the path for the money multiplier can be described by the equation

(A.5)  mm2_t = A(L)X_{t–1} + ε_t,

where X is a 1 × K vector of exogenous (including predetermined) variables, A(L) is the qth-degree matrix polynomial in the lag operator L, and ε is a random-error term with mean zero and finite variance σ_ε. When the conditions for an optimal forecast are satisfied, the multiplier forecast is given as

(A.5′)  mm2^f_t = A(L)X_{t–1}.

We use two alternative methods to forecast the money multiplier. First, we assume that the M2 money multiplier follows a random walk.¹ Second, we consider a perfect-foresight model where mm2^f_t = mm2_t.

After identifying the path for the M2 target and obtaining the M2 money multiplier forecast, the path for the monetary base is constructed using equation A.2′. Assuming equation 2 is the data-generating function for nominal GNP growth, the values of base money generated by both the base rule and the M2-targeting approach are used to simulate a path for nominal GNP. The path for nominal GNP growth also includes a nonmonetary-policy shock term. To measure the nonmonetary-policy shock, we estimate the nominal GNP growth equation (equation 1) over the period 1954:2–92:4, interpreting the residuals from this regression as the nonmonetary-policy shocks, denoted e_t. Let

∆Y_t = 0.0083 + 0.2864∆Y_{t–1} + 0.3335∆B_{t–1}

be the value of nominal GNP growth consistent with the path for monetary base growth generated by the monetary policy. For the period 1986:4–92:4, the simulated value of nominal GNP growth is ∆Y^s_t = ∆Y_t + e_t. By the properties of regression analysis, the nonmonetary-policy shock is orthogonal to movements in the monetary base. The idea here is to measure those parts of nominal GNP growth that are not explained by movements in the monetary base.
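The simulation step can be sketched as a short recursion. This is an illustrative reading, not the authors' code; in particular, feeding each simulated growth rate back in as the next quarter's lag is an assumption about how the recursion runs, and the input series are placeholders.

```python
# Illustrative sketch of the simulation: nominal GNP growth is generated from
# the policy-implied base-growth path plus the estimated nonmonetary-policy
# shocks (regression residuals). Coefficients follow the fitted equation in
# the appendix; the series passed in are hypothetical placeholders.

def simulate_gnp_growth(base_growth, shocks, dY0,
                        a=0.0083, b=0.2864, c=0.3335):
    """Simulate dY_t = a + b*dY_{t-1} + c*dB_{t-1} + e_t.

    base_growth : lagged base growth dB_{t-1} for each simulated quarter
    shocks      : nonmonetary-policy residuals e_t, one per quarter
    dY0         : initial lagged value of nominal GNP growth
    """
    path = []
    dY_prev = dY0
    for dB_prev, e in zip(base_growth, shocks):
        dY = a + b * dY_prev + c * dB_prev + e
        path.append(dY)
        dY_prev = dY  # assumed: simulated growth becomes next quarter's lag
    return path
```

Because the residuals are orthogonal to base growth by construction, swapping in a different policy-implied base path changes only the systematic part of each quarter's growth, which is the thought experiment the simulations perform.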
¹ We also used Box–Jenkins methods to forecast the M2 money multiplier, following work by Bomhoff (1977), Hafer and Hein (1984), and Johannes and Rasche (1987). The simulations with the time-series approach are negligibly different from those reported here.

David M. Gould, Senior Economist, Federal Reserve Bank of Dallas
William C. Gruben, Research Officer, Federal Reserve Bank of Dallas

GATT and the New Protectionism

The successful completion of the Uruguay Round of the General Agreement on Tariffs and Trade (GATT) has generated much optimism about the future of world trade, and with good reason. If ratified, the accord will not only eliminate tariffs on many goods but will be the first GATT-round accord to address intellectual property rights, trade in services, and agricultural subsidies. An important question, however, is how much this new accord can limit future protectionism. When trade liberalization curtails one form of protectionism, new forms appear almost routinely. While GATT agreements steadily reduced tariffs on manufactures (from an average of 40 percent in 1947 to about 5 percent now, as shown in Figure 1), the United States and many other countries were developing other, more arcane administrative and legal barriers.

[Figure 1. Tariffs in Industrial Countries: average tariff rates (percent).]

What these barriers imply for free trade is sometimes difficult to understand because they often touch on fairness issues. In many countries, policymakers, and their supporters in industries that face foreign competition, have devoted much effort to counteracting what they define as unfair trade practices by foreign countries. Unfair trade practices are typically thought to include 1) subsidies on exports by foreign governments and 2) dumping, which is the act of selling goods for a lower price abroad than in the home or other markets.
To offset foreign subsidies to exports, the government of the importing country sometimes erects special tariffs to raise the artificially low prices of these goods. These tariff barriers are referred to as countervailing duties. Antidumping duties, in turn, are typically imposed when the government of an importing country suspects that the exporting country is dumping goods on its markets. The particular circumstances under which countervailing and antidumping actions are used, and the procedures developed to assess the "unfairness" of others' trade practices, have raised suspicions about policymakers' motivations. Perhaps, some have argued, these "fairness" doctrines are vehicles for disguised protectionism.

Allegations of disguised protectionism have become more common as efforts to preserve "fair" trade have multiplied. During the 1960s, GATT member countries initiated fewer than twelve antidumping actions per year. By the second half of the 1970s, the United States alone averaged more than thirty-five per year. As Table 1 shows, in the 1980s the total of cases initiated by GATT signatory countries exceeded one hundred per year.

[Figure 1 SOURCE: Stoeckel, Pearce, and Banks (1990).]

(Ken Emery offered extremely helpful comments as the reviewer for this article. We also benefited from the discussion and comments of Steve Brown, Michael Finger, Seth Kaplan, David Mueller, Tracy Murray, and Lori Taylor. All remaining errors are solely our responsibility.)

Table 1
Number of Antidumping and Countervailing Duty Cases Initiated, January 1980 – June 1989

Columns: Jan.'80–Jun.'80, Jul.'80–Jun.'81, Jul.'81–Jun.'82, Jul.'82–Jun.'83, Jul.'83–Jun.'84, Jul.'84–Jun.'85, Jul.'85–Jun.'86, Jul.'86–Jun.'87, Jul.'87–Jun.'88, Jul.'88–Jun.'89, and total Jan.'80–Jun.'89.

Antidumping cases
Australia                    36   61   54   71   70   63   54   40   20   19    488
Canada                       26   48   64   34   26   35   27   24   20   14    318
European Community           17   37   39   26   33   34   23   17   30   29    285
United States                37   24   51   19   46   61   63   41   31   25    398
Other developed countries     1    3    2    0    1    0    2    5    9   12     35
Developing countries          0    0    0    0    0    0    3    4   13   14     34
All countries               117  173  210  150  176  193  172  131  123  113  1,558

Countervailing duty cases
Australia                    NA   NA   NA    6    3    5    3    3    0    2     22
Canada                        3    3    0    2    3    2    1    4    0    1     19
European Community            1    0    1    3    1    0    0    0    0    0      6
United States                32   17   75   35   22   60   43   11   13    8    316
Other developed countries     0    0   61   33   20   10   11    0    0    0    135
Developing countries          0    0    0    1    1    0    0    1    4    0      7
All countries                36   20  137   80   50   77   58   19   17   11    505

Antidumping plus countervailing duty cases
Australia                    NA   NA   NA   77   73   68   57   43   20   21    359
Canada                       29   51   64   36   29   37   28   28   20   15    337
European Community           18   37   40   29   34   34   23   17   30   29    291
United States                69   41  126   54   68  121  106   52   44   33    714
Chile                         1    3    2    1    2    0    2    6   13   12     42
Other countries               0    0   61   33   20   10   14    1    3   14    156
All countries               117  132  293  230  226  270  230  147  130  124  1,899

SOURCE: Finger 1993.

Concerns that "fair" trade laws are vehicles for protectionism have become even more acute with the advent of the Uruguay Round. While rough guidelines for using antidumping and countervailing duties have appeared in past GATT agreements, the Uruguay Round accord has introduced much more formalization and detail to accommodate and codify such retaliation. Moreover, these codifications greatly resemble those of the United States, a principal exponent of antidumping and countervailing measures. The Uruguay Round's various approaches to addressing government trade policy, lowering tariffs here, sanctioning some types of antidumping actions and countervailing duties there, raise questions about the accord's overall implications for free trade. The related central question addressed in this article is whether the recent changes in GATT will discourage the most protectionist aspects of these administered trade regulations. Because the accord adopts many aspects of U.S.
laws and administrative procedures concerning antidumping and countervailing duties, we use the U.S. experience of recent years to assess what may be in store for the world trading environment under the new GATT. We begin by examining what has been seen as unfair trade, and we discuss the economic arguments for imposing antidumping and countervailing duties. We then outline how fair trade laws have been applied in the United States and discuss why some analysts have claimed that these laws are biased toward protectionism. Finally, we assess the impact of the Uruguay Round of GATT on the application of fair trade laws. We conclude with an outlook for the future of the world trading environment.

When is trade unfair?

The express intention of fair trade laws is to prevent foreign sellers from pricing and selling anticompetitively or predatorily in the importing country. If foreign exporters sell for less in the United States than at home, or if foreign governments subsidize exports to the United States, U.S. laws and rules accommodate U.S. efforts at retaliation. But is unfair trade really unfair? Economists often deny that below-cost prices or foreign export subsidies mean unfair trade. After all, if foreign firms want to sell cheaply in the United States, why should U.S. consumers not be allowed the obvious benefit? While this argument recognizes the benefits to consumers, it dismisses the effects of unfair trading on some domestic producers and ignores other arguments against unfair trading practices.
Moreover, as Bhagwati (1988) notes, "a free trade regime that does not rein in or seek to regulate artificial subventions will likely help trigger its own demise." Conversely, in the more concrete world of government policy, both arguments and government policies in support of antidumping and countervailing duties typically place the interests of import-competing industries over the interests of consumers and also over those of producers who use imported inputs. (See the box entitled "Do Fair Trade Laws Protect the Economy?") An analysis of eight antidumping duties imposed by the United States between 1989 and 1990 showed that for each $1 gained by the protected industries, the U.S. economy as a whole lost $3.60, on average (Anderson 1993). Moreover, according to the same study, the cost per job created in the protected industries was $113,800, substantially higher than the $14,300 average salary paid for these jobs. The extra cost comes from the higher prices consumers must pay for these domestic goods and the less efficient use of domestic resources.

Another argument against unfair trade is that foreign nations can act predatorily to capture domestic markets. But this argument is also subject to criticism: it is based on the assumption that once foreign producers capture domestic markets, competitors will not re-enter those markets when prices begin to rise. If foreign producers cannot block domestic producers from re-entering a market after it is captured, they will have to keep their prices at a competitive level to maintain their market share.

Some analysts claim that certain high-tech industries can develop natural barriers to entry that allow them to capture a particular market and keep it. Because of what they learn in the production process, producers may permanently gain a cost advantage as production expands. In other words, by protecting or subsidizing certain industries, a country may gain a permanent cost advantage and, therefore, create a natural barrier to entry. Although this argument is appealing, there is little evidence to suggest that firms or countries actually have been able to take advantage of these benefits or have acted in a manner consistent with pursuing them. As Table 2 indicates, many products subject to antidumping and countervailing duties, such as stainless steel pipes, gray portland cement, or pork, are not typically high-tech products.

Do Fair Trade Laws Protect the Economy?

While antidumping and countervailing laws may protect particular industries from foreign competition, broader arguments based on the benefits of these measures to the whole economy are typically ill founded. There are basically three arguments for antidumping and countervailing duties.

The first is simply that subsidized or dumped imports of textiles, consumer electronics, and automobiles cost domestic textile workers, electronics workers, and auto workers their jobs. In other words, imports cost Americans their jobs, and subsidized or dumped imports cost even more jobs. While it is certainly true that imports of textiles or cars can displace American textile or automobile jobs, it is not true that trade can reduce the number of jobs in a country for any sustained period. The argument that import subsidies or dumping reduce overall employment reflects an error known as the fallacy of composition, the mistaken belief that what is true for the part is true for the whole. As a matter of simple arithmetic, large increases in imports inevitably cause an increase either in exports or in foreign investment. Generally speaking, if imports of Japanese cars dramatically increase, American exports increase to pay for these goods. Unless foreigners are giving away what they make, Americans cannot get foreign products unless they sell products to foreigners. As a result, the jobs lost in one industry are replaced by jobs gained in another. Jobs would be lost only if foreigners gave everything away. Using data on unemployment, imports, and exports for twenty-three developed countries, Gould, Ruffin, and Woodbridge (1993) find no simple causal link between unemployment and import penetration or export performance. Within countries, imports had the same correlation with unemployment as did exports.

Second, there is the argument that foreign producers sell abroad at below cost because they have a predatory intent to drive out domestic competition. The idea is that once they drive out the competition in the domestic market, they will raise prices and reap monopolistic profits at the expense of the target country. This argument, however, assumes that competitors will not re-enter the market once prices rise. If foreign producers cannot block domestic producers from re-entering the market once it is captured, they will have to keep their prices low in order to maintain market share. Prices that cannot be raised obviate the benefits of predatory pricing.

Finally, some arguments are based on new theories of international trade that emphasize monopolistic competition and international oligopolies. These theories focus on international economies of scale, learning curves, and innovation, and they downplay the assumption of perfect competition that lies behind the classical arguments for free trade. In a real-world environment, some have argued, other countries might subsidize their industries and capture U.S. markets at the expense of future U.S. income. Although economists have long recognized the importance of economies of scale, innovation, and international oligopolies, countries have rarely, if ever, been able to capture excess profits from other markets for long. The difficulties with such strategic trade policy arguments are twofold. First, most arguments for subsidies assume they are implemented by a benevolent dictator rather than by political parties representing special interest groups. Most trade policy decisions are not typically made in the best interest of the whole country; usually they are the result of competing political interests. Because of the nature of the policymaking game, it is hard to argue that foreign industry subsidies are a concerted effort to capture domestic markets. Rather, they often reflect some foreign industry's power in capturing its own country's budget. Second, strategic trade policies are based on theoretical models, but their implementation relies heavily on empirical estimates of industry demand and supply that can vary substantially over time. Rarely have countries acted in a deliberate fashion that actually managed to capture these advantages. For example, some of Japan's biggest success stories (TVs, stereos, and VCRs) were not the industries most heavily targeted by the Japanese government. Moreover, as these products have become even more standardized, production has moved out of Japan to Korea and other Southeast Asian countries. The inability of governments to pick winners is evidenced by some of Japan's failures:

• The Ministry of International Trade and Industry (MITI) first wanted the Japanese automobile industry to produce only trucks and later wanted to limit the number of automobile companies to a few giants, in particular attempting to keep Honda out of the car business. Of course, market forces eventually led MITI to abandon these plans, but the intervention generated costs that could have been avoided. Had MITI been successful, Japan would have paid an enormous price for this policy.

• The Japanese heavily targeted an analog version of high definition television (HDTV), but it appears that digital HDTV, the product of U.S. research and development, will be the industry standard.

• MITI is now investing in cold fusion, a procedure for creating nuclear power that has been debunked by most of the scientific establishment.

These examples and others suggest that even Japan has done a poor job of picking the winning industries.

Table 2
Affirmative Findings by Product in Antidumping and Countervailing Duty Investigations, 1988–92

1988: Stainless steel pipes and tubes; Atlantic salmon; color picture tubes; butt-weld pipe fittings; forklift trucks; electrical conductors; aluminum rods; brass sheet and strip; nitrile rubber; granular polytetrafluoroethylene resin; forged steel crankshafts.
1989: Cellular mobile phones; 3.5-inch microdiscs; antifriction bearings; electrolytic manganese dioxide; light-walled rectangular pipes and tubes; industrial belts; new steel rails; pork.
1990: Aluminum sulfate; telephone systems; mechanical transfer systems; drafting machines; industrial nitrocellulose; sweaters; gray portland cement.
1991: Fresh Atlantic salmon; industrial nitrocellulose; multiangle laser light scattering instruments; handtools; polyethylene terephthalate film; gray portland cement; benzyl paraben; sparklers; sodium thiosulfate; flat panel displays and subassemblies; silicon metal; chrome-plated lug nuts; word processors.
1992: Magnesium; softwood lumber; electric fans; tungsten ore; shop towels; fresh kiwifruit; ophthalmoscopy lenses; steel pipe fittings; rubber thread; magnesium; rayon filament yarn; sulfanilic acid.

SOURCE: International Trade Commission Annual Reports.
But whether or not government subsidization and predatory pricing are practices that fair trade laws are supposed to address, as the fair trade rhetoric suggests, fair trade laws have been so broadly applied that they sometimes seem to have been used simply to avoid competition. In other words, antidumping and countervailing duties share many attributes of pure protectionism.

Why fair trade laws do not always work as intended: A look at the United States

Contrary to popular notions about dumping, in U.S. law and under present GATT law, dumping is not defined as selling below cost with the intent to capture U.S. markets. Dumping is simply selling at a lower price in the United States than in other markets or selling at below average total costs. Dumping is not defined as predatory behavior, and antidumping actions do not require any evidence of intention to monopolize or otherwise drive competitors out of business.

Opportunities for using the antidumping laws have not always been so unrestricted. Seventy-five years ago, dumping remedies required proof that a foreign producer was practicing predatory pricing. That is, the foreign producer had to be selling at a loss and with the intention of driving competitors out of business so as to secure a monopoly. Early U.S. antidumping regulations were, in substance, extensions of antitrust law (Finger 1992). The antidumping laws of the past placed the burden of proof on the accusing industry. Over time, Congress has dropped the requirement of intent and instead has focused on the prevention of injury to domestic firms (Murray 1991). The burden of proof no longer falls on the accusing industry but upon the industry or firm that is accused. Foreign firms are presumed guilty until proven innocent. Under current U.S.
law, any industry can approach the Department of Commerce and the International Trade Commission (ITC) and claim foreigners are subsidizing exports or are pricing them lower in the United States than at home. The Department of Commerce investigates the cases, and the ITC determines whether material injury has occurred. Antidumping duties are imposed when foreign merchandise is sold in the United States for less than "fair" value; a duty is assessed equal to the amount by which the estimated foreign market value exceeds the U.S. price. Countervailing duties are imposed when a foreign country directly or indirectly subsidizes exports to the United States; a duty is assessed equal to the amount of the subsidy or the amount by which the estimated foreign market value exceeds the U.S. price. (See the box entitled "U.S. Antidumping and Countervailing Duties.")

While antidumping and countervailing duty laws are not inconsistent with the desire to keep trade fair, their current application permits liberal interpretation of what is and what is not fair trade. Below we discuss some of the procedural problems with antidumping and countervailing duties.

Problems with the application of U.S. fair trade laws

In the application of antidumping and countervailing duty laws, small changes can make big differences. Juggling the procedures for constructing fair market prices, for identifying injury to a domestic industry, or for gathering information from foreign firms can substantially change their impact. Over the years, in response to domestic pressures to protect particular industries, these procedures have often changed so as to increase the likelihood of finding against foreign producers and for domestic complainants. This pattern is not isolated to the United States, and, with the passage of time, countries as diverse as Canada, Poland, and Mexico have converged in their procedures for determining antidumping and countervailing duties.
By considering how antidumping and countervailing laws are applied in the United States, it may be possible to assess how the new GATT accord will affect their use and effects.

Antidumping laws

Pricing below average costs. Although pricing below average total costs is legal for domestic U.S. firms, the 1974 Tariff Act broadened antidumping law to prohibit foreign exporters from doing the same. It is not unusual for U.S. firms to price below average total cost (but above average variable cost) because of weak sales. This practice allows firms to cover labor costs during periods of weak demand and to avoid shutting down production completely. Moreover, firms that sell new products involving high-tech research and development costs typically price below average total costs during early stages of marketing. As the product becomes more established and volume increases, firms recoup their earlier losses. For example, the new General Motors Saturn factory became profitable only after five years of losses (Bovard 1993). Under U.S. antidumping law, if General Motors were a foreign firm, it would have been prohibited from selling its cars at a competitive price.

Constructed prices. When foreign firms are suspected of pricing at below average total costs, the Department of Commerce is directed by law to ignore market information about foreign prices. For example, in calculating foreign market value, the Department of Commerce is expected to use a completely constructed foreign market price if it believes that 10 percent of the foreign firm's sales are below the firm's average total costs of production. In such cases, all the market information on actual sales is thrown out and an artificial price is constructed. One protectionist aspect of this methodology derives from how foreign costs of production are calculated. The law requires that not less than 10 percent be included in such calculations for general expenses, plus a minimum of 8 percent for profits.
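The arithmetic behind these minimums can be illustrated with a small sketch. The function names and figures are hypothetical, and the sketch compounds the 10-percent expense and 8-percent profit minimums, a simplification of the actual statutory calculation rather than the Department of Commerce's methodology.

```python
# Illustrative sketch of a constructed foreign market value built from
# production cost plus the statutory minimums discussed above: at least
# 10 percent for general expenses and at least 8 percent for profit.
# The compounding of the two rates is a simplifying assumption.

def constructed_value(production_cost, expense_rate=0.10, profit_rate=0.08):
    """Constructed foreign market value, enforcing the statutory floors."""
    with_expenses = production_cost * (1.0 + max(expense_rate, 0.10))
    return with_expenses * (1.0 + max(profit_rate, 0.08))

def dumping_margin(foreign_market_value, us_price):
    """Share by which estimated foreign market value exceeds the U.S. price;
    zero when the U.S. price is at or above the estimated value."""
    return max(0.0, (foreign_market_value - us_price) / us_price)
```

Under these hypothetical numbers, an exporter covering its costs plus a 10-percent markup in the U.S. market would still show a positive margin, which is the point the text makes: an exporter earning less than the profit floor on U.S. sales is found to be dumping by construction.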
At the very least, an exporter earning less than 8 percent on its U.S. sales will automatically be found to be dumping. This methodology of calculating costs tends to penalize foreign producers during slack periods, when profits may be squeezed. In addition, it punishes foreign producers who simply have lower overall profit margins for the products they sell.

Opportunities for substantial error in calculating foreign costs also arise when foreign exchange rates are used to convert foreign prices to U.S. prices. In a 1989 U.S. case against Venezuela, the United States found a 259.71-percent dumping margin on imports of Venezuelan aluminum sulfate. To reach this finding, U.S. officials calculated prices using Venezuela's official exchange rate of 14.5 bolivars per dollar, rather than the free market exchange rate of 39.5 bolivars per dollar that the company actually used (Bovard 1992, 3).

U.S. Antidumping and Countervailing Duties¹

The U.S. government imposes antidumping duties if foreign merchandise is sold in the United States for less than fair market value. Less-than-fair-market-value sales are those priced below the foreign producer's average total costs or below the price of the good in the home market. The U.S. industry must also be materially injured, which means it has lost sales to foreign producers. Antidumping duties equal the amount by which the estimated foreign market value exceeds the U.S. price. To determine dumping, agents in the Department of Commerce compare the price charged in the home market (or a third-country market if no sales take place in the home country) to the price charged in the United States and the average total cost of production in the foreign market. The home country prices are determined using the value of the exchange rate prevailing at the time the foreign goods are first sold in the United States, rather than the exchange rate prevailing at the time the goods are exported to the United States. If the Department of Commerce suspects that at least 10 percent of domestic sales are below average total costs, data on foreign market prices are not used and a constructed foreign market price is created. In determining injury, the ITC assumes that a foreign firm that sells in the United States at prices below that country's domestic prices will cause injury to the U.S. industry.

Countervailing duties are imposed if a direct or indirect foreign subsidy (referred to as a bounty or grant in U.S. trade laws) is paid for the production or exportation of goods to the United States. If the foreign country is a signatory of the GATT antisubsidy code, an injury test is applied. The countervailing duties are set to equal the amount of the net subsidy. The injury test for countervailing duties consists of studying current and potential harm by imports to an existing U.S. industry. The ITC examines increases in plant closings and unemployment and decreases in capacity utilization and profitability. The ITC also studies general U.S. economic conditions to determine whether imports are responsible for an industry's decline.

¹ We derived these definitions from P.K.M. Tharakan (1991) and Carper and Mann (1994).

Even when foreign firms are not suspected of pricing below average total costs, constructed foreign prices can be used. If a foreign-made product is not sold in its home country, comparing home-country with foreign-country prices becomes difficult. In this case a fair market price must also be constructed. For example, Polish-made golf carts sold in the United States were a problem for U.S. officials because the Poles did not play golf and did not sell golf carts in Poland. The United States mounted a search for comparable countries whose wages and other costs could be used to reconstruct Poland's hypothetical market price.
The choice for a comparable country was Spain, despite its different economic structure and wages that were substantially higher than Poland's (Bhagwati 1988, 5).

Data requirements. Constructed prices are also legally allowed when accused foreign firms do not respond quickly to information requests from the Department of Commerce. It is important to understand the circumstances surrounding these requests because of the implications they have for the continuation of protectionism. When a U.S. firm charges a foreign competitor with dumping, the Department of Commerce requests detailed cost information from the foreign competitor. The Department of Commerce does not simply compare the U.S. and foreign prices but subtracts a number of items from the price of the foreign good sold in the United States, including U.S. tariffs, insurance, ocean freight, handling and port charges, as well as brokerage and freight charges in the home country. It is not unusual for dumping to be found even when the price of the foreign product is higher in the United States than in the country of origin. The price appears lower during the procedure because the Department of Commerce makes subtractions to the foreign exporter's price but not the U.S. producer's price. Moreover, just the volume of data the U.S. government requires of foreign firms in such cases can be a deterrent to trade. The Department of Commerce may present an accused foreign firm with a questionnaire as long as one hundred pages that requests specific accounting data on individual sales in the home market, data on sales to the United States, and all the detailed data needed to adjust for tariffs, shipping, selling, and distribution costs. Information must be recorded and transmitted to the Department of Commerce in English and in a computer-readable format within a short deadline stipulated under the U.S.
statutes (Murray 1991, 34).1 Compliance with these information requests can be difficult, particularly for small firms. If the firm or industry fails to satisfy all requests for information or fails to submit by the specified deadlines, U.S. law authorizes use of what is called best information available (BIA). BIA typically consists of information provided by the U.S. complainant firm. Arguing that BIA is biased, Baldwin and Moore (1991) show that the average dumping duty based on information from foreign firms was 27.9 percent, compared with a 66.7-percent average with BIA.

1 Using data requests as a form of harassment is not peculiar to the United States. In 1991, Mexico filed an antidumping case against U.S. denim producers and gave U.S. producers fifteen days to fill out a twenty-five-page detailed report on accounting and production processes. The report had to be in Spanish and submitted in computer-readable format.

Averaging foreign and domestic prices. Even when Congress removes rules that seem to offer a protectionist cast toward governmental determinations of dumping or subsidies, the use of the old procedures may persist anyway. A particularly instructive example involves the determination of dumping through an apples-to-oranges price comparison that was waived in the Trade Act of 1984 but that is sometimes still used anyway. The procedure involves averaging foreign prices and comparing this average with individual U.S. domestic transactions to determine dumping. Comparing average foreign prices with individual U.S. domestic transactions turns out to mean that even if domestic and foreign prices are exactly the same every day, instances of dumping can be found if prices change at all.
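The arithmetic of this average-to-individual comparison can be sketched directly; the daily prices below are hypothetical illustrations:

```python
# Sketch of the average-to-individual price comparison described above.
# Daily prices are identical in the home market and the United States,
# yet the procedure still produces a positive "dumping margin".

home_prices = [23.0, 25.0, 27.0]   # daily home-market prices
us_prices = [23.0, 25.0, 27.0]     # identical daily U.S. prices

avg_home = sum(home_prices) / len(home_prices)   # average foreign price: 25.0

# Each individual U.S. sale is compared against the foreign average;
# sales above the average are ignored rather than netted out.
margins = [max(0.0, avg_home - p) for p in us_prices]
total_margin = sum(margins)           # 2.0, entirely from the lowest-price day
margin_pct = total_margin / avg_home  # 0.08, an 8-percent "dumping margin"

print(margins, margin_pct)
```

Comparing average to average, or individual to individual, would yield a zero margin here, which is the comparison the new accord prefers.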
To understand why, suppose that Korean toasters sold in Korea for $23 on Monday, $25 on Tuesday, and $27 on Wednesday and that the same kind of toasters were sold in the United States at identical prices on those same days: $23 on Monday, $25 on Tuesday, and $27 on Wednesday. The average price in Korea over those three days would be $25. But when the average Korean price of $25 is compared with each individual daily sale in the United States, Monday's price of $23 turns out to be $2 below the average price of $25 in Korea (and also $2 below the average price in the United States, since it is also $25). Under the averaging rule, the discovery that a toaster sold for $23 on Monday when the average Monday-Tuesday-Wednesday price was $25 is grounds for the finding of a $2 (8-percent) "dumping margin." This means that Korea is guilty of dumping and subject to punishment, even though there is no price difference between toasters in Korea and toasters in the United States on any given day.

Price margins. Considering the substantial room for error in calculating foreign prices, the price differentials, or margins, that define dumping are strikingly small. In the United States, a foreign industry is subject to antidumping findings if it sells its products for less than 99.5 percent of what is estimated to be fair market value. Because 99.5 percent of fair market value is 0.5 percent less than 100 percent, this rule is called the 0.5-percent de minimis rule.

Review. Despite legitimate questions about the methodology of calculating antidumping duties, once a dumping duty is imposed, it may remain in force for years without periodic review of whether the foreign country has ceased dumping.

Countervailing duty laws

Countervailing duty regulations are also susceptible to protectionist biases. The United States imposes countervailing duties on foreign exports that receive government subsidies.
Like the antidumping rules, countervailing duty laws represent attempts to create fair trade. But also like the antidumping laws, countervailing duty laws involve procedures that can impede trade. Some of the same biases are common to both antidumping and countervailing actions, such as the use of best information available. Other biases that are unique to countervailing duty laws are described below.

Defining a countervailable subsidy. Many governments, including the United States, subsidize their industries. According to U.S. laws applicable to countervailing subsidies, foreign subsidies are countervailable only if they affect a country's exports. Although a foreign subsidy to restaurants would, therefore, probably not prove countervailable in the United States, complications over what affects exports do sometimes emerge. The complications arise when subsidies to an exporter are indirect. Suppose, for example, that the exporter purchases water from a water authority that is subsidized, and that the water authority passes some of its subsidy benefits on to customers in the form of lower water prices. Passing on part of a subsidy in the form of lower prices is typical in many countries, but the foreign producer who benefits may be subject to U.S. countervailing duties. Interestingly, if foreign nations applied these same rules to the United States, agricultural exports from California would be subject to countervailing duties because of de facto federal water subsidies to California farmers.

Accounting procedures. In some cases, simple accounting procedures followed by the United States do not accommodate offsetting foreign taxes that dissipate the effects of foreign subsidies. In the 1983 countervailing duty case against Argentine wool, the United States chose to ignore a 17-percent export tax that more than offset the 6-percent export subsidy that had been deemed actionable.
The United States had argued that the two programs (the export subsidy and the export tax) had been enacted under separate laws and that only the subsidy was worthy of attention (Bovard 1992, 17).

To sum up, U.S. antidumping and countervailing duty laws make it easy for domestic firms to seek protection from foreign competitors, even if the behavior of these competitors is not predatory. Recall that the law formerly focused on foreign producers who sold at a loss so as to drive domestic firms out of business. Now U.S. producers have much more liberal grounds for redress. Below, we discuss some of the ways in which the new GATT accord changes the protectionist bias in antidumping and countervailing duty laws.

The new GATT agreement: A new direction?

The Uruguay Round of GATT has adopted antidumping and countervailing duty regulations much like those found in the United States. In countries whose antidumping and countervailing rules were not fully developed, this carryover may lead to more protectionism. But many developing countries have already begun to follow the example of developed countries. In 1992, Brazil imposed 21-percent countervailing duties on powdered milk products from the European Community (GATT 1993). The European Community is now contesting Brazil's action before GATT, claiming that Brazil did not prove that material injury had occurred because of subsidized powdered milk imports. Nevertheless, harmonization of regulations can also lead to greater transparency and less arbitrary implementation.

Besides moving toward more uniformity, the new GATT accord erects roadblocks on some of the United States' and other countries' favorite avenues for protectionism. Although opportunities for protectionist pressures persist under the new GATT agreement, the new accord represents a step toward freer trade.
Antidumping

The new antidumping rules make administered protectionism a bit more difficult to implement, but opportunities remain. One problem, as noted by Finger (1994), is that the new rules (like the old ones) are ambiguous. Indeed, dumping is not defined; only antidumping is. The definition of antidumping, however, is simply a list of specific restricted actions rather than a complete description of practices that should be followed.2 But despite its poorly articulated purpose, the agreement does address some procedural problems that have appeared in past antidumping practices.

Areas where protectionist bias is likely to fall. Among the most widely criticized practices in the application of U.S. antidumping laws is the proclivity to construct fair market prices when actual market prices are available. As noted, the Department of Commerce can use a completely constructed foreign market price if it suspects that 10 percent of a foreign firm's sales are below some estimate of average total costs of production. Past use of such constructed prices has been shown to increase, by a substantial margin, the likelihood of a finding of illegal dumping. Under the new GATT rules, such prices may still be constructed but are subject to more restrictions. U.S. officials, for example, would still be permitted to use a completely constructed foreign market price, but only if they claim that 20 percent (as opposed to the present 10 percent) of a foreign firm's sales are below some estimate of its costs of production.

The Uruguay Round agreement will also affect the current U.S. 0.5-percent de minimis rule. Under this rule, a foreign firm that is found to have sold its products in the United States for as little as 0.5 percent less than some estimate of fair market value could be subject to antidumping duties. The new GATT accord contains a 2-percent de minimis rule that supersedes the 0.5-percent rule, which may limit the most frivolous actions.
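The two de minimis screens can be sketched as a simple threshold test; the 1.2-percent margin below is hypothetical:

```python
# Sketch of the de minimis screens described above (hypothetical margin).
# A margin below the threshold is disregarded as too small to support duties.

def actionable(margin_pct, de_minimis_pct):
    """Is an estimated dumping margin large enough to support antidumping duties?"""
    return margin_pct >= de_minimis_pct

margin = 1.2  # firm sells 1.2 percent below estimated fair market value

print(actionable(margin, 0.5))  # True under the old U.S. 0.5-percent rule
print(actionable(margin, 2.0))  # False under the new GATT 2-percent rule
```

Raising the threshold filters out only margins between 0.5 and 2 percent; larger alleged margins are unaffected.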
The Uruguay Round agreement also addresses the problem of comparing average foreign market prices to individual domestic sales. Recall that, according to this procedure, individual prices in the United States are compared with average foreign prices. This means that any price fluctuations at all during the investigation can generate an affirmative dumping finding. If a product's prices happened to change during an investigation, prices of the foreign product imported into the United States would, on at least one day, fall below the total-period average. Just as not every person can have above-average intelligence, some price will always fall below the average price and hence could result in a dumping finding.

In most cases under the Uruguay Round accord, governments pursuing antidumping investigations agree to compare average foreign prices with average domestic prices and individual foreign sales with individual domestic sales. However, even under the new Uruguay Round agreement, some provisions sanction the apples-to-oranges comparison of average prices to individual prices. The Uruguay Round accord sanctions this practice when a government investigates charges of spot dumping, a dumping category that involves brief dips below fair market prices.

2 Finger (1994, 2), moreover, notes that, because the Uruguay Round agreement "wraps specific disputes with a distracting legalese, it represents a distancing from, not a step toward, negotiating to reach agreement on the trade restrictions that are now sanctified—falsely sanctified—by the label 'antidumping.'"

Another detail of the new accord, the dispute settlement mechanism, also warrants attention. The new dispute settlement mechanism may have only a marginal impact in thwarting protectionism overall, but it does contain elements that can thwart protectionism in some cases.
Previously, when a country illegally imposed an antidumping or countervailing duty on another country, GATT had little power to investigate the case, let alone discipline the country. Any country, including the country acting illegally, could stop the investigation process. Moreover, even if the case proceeded to a finding of illegality, no discipline could be imposed upon the offending country unless the country itself agreed. The Uruguay Round accord, however, does not require the offending country to agree either to its investigation or its discipline. Moreover, if a country does not implement a GATT panel's recommendations within a certain period, the country that was harmed can seek authorization to retaliate.

Among the most significant moves toward limiting administered protection is a new sunset rule that requires a review of injury every five years after an antidumping order is issued. That is, antidumping actions can no longer continue indefinitely without further review, as has been common in some countries, including the United States.3

More generally, the Uruguay Round enhances freer trade through greater transparency and due process. The agreement makes the antidumping and countervailing duty laws more specific, permitting exporters to form more concrete and accurate expectations about the criteria for fair pricing. The agreement more fully defines avenues for dispute settlement, which also will increase the likelihood of freer trade and can lower the risk to traders. These last details are important because, while U.S. firms have been very active in levying antidumping charges during the past fifteen years, this avenue of combating import competition has become widely used in other countries only more recently and can be expected to increase in the future.4

Areas where protectionist bias is unlikely to change.
While the new GATT agreement is likely to reduce the protectionist bias in the areas mentioned above, in other areas it will have a smaller effect. The extensive documentation that the U.S. Department of Commerce and other countries impose on foreign producers accused of dumping is not addressed in the Uruguay Round agreement. In the United States, for example, the requirement that foreign firms complete around one hundred pages of documentation in a tight time frame, in English, and in a computer-readable format is not likely to change. GATT does not reduce countries' opportunities to impose what some have charged are unreasonable and arbitrary demands on foreign producers.

Despite outward appearances to the contrary, another area in which the new agreement is unlikely to change much is in the determination of injury. Traditionally, if the amount of "dumped" imports is not great enough to inflict some measure of material injury on an industry, then antidumping duties are not legal under GATT. But the definition of material injury has been left ambiguous up to now, and broadly subject to each country's interpretation. The typical interpretation is that any foreign sales that displace domestic sales are cause for injury. In contrast, the new accord defines the line at which dumped imports are to be considered negligible (that is, too small to be injurious and therefore not subject to antidumping duties). The volume of dumped imports defined as negligible is less than 3 percent of total imports of the product or, if more than one country is subject to a dumping complaint, 7 percent of total imports. If a Japanese automobile maker is selling inexpensive cars in the United States and is alleged to be dumping, but sales of its cars are less than 3 percent of total imports, no duties will be assessed against its exports.
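The distinction between share of imports and share of the market can be sketched numerically; the trade volumes below are hypothetical:

```python
# Sketch of the negligibility screen described above (hypothetical volumes).
# The 3-percent test applies to the accused country's share of total imports
# of the product, not to its share of the whole U.S. market.

def negligible(country_imports, total_imports, threshold_pct=3.0):
    """Dumped imports are negligible if below threshold_pct of total imports."""
    return 100.0 * country_imports / total_imports < threshold_pct

country_imports = 100.0         # the sole exporter ships 100 units
total_imports = 100.0           # so it accounts for all imports of the product
domestic_sales = 100_000_000.0  # domestic producers dominate the market

market_share = 100.0 * country_imports / (total_imports + domestic_sales)

print(negligible(country_imports, total_imports))  # False: 100 percent of imports
print(round(market_share, 4))                      # about 0.0001 percent of the market
```

A sole exporter can thus fail the negligibility screen even when its presence in the overall market is vanishingly small.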
There is reason to suspect that the new negligibility rule will rarely prove much of a constraint upon judgments of injury, and that the rule may prove less restrictive to protectionists than current U.S. rules. Consider the case of a foreign firm that is the sole exporter of some product to the United States. Suppose, in this hypothetical case, that U.S. manufacturers make so much of a similar product that the foreign exporter's sales account for only a 0.0001-percent share of the U.S. market. Under the new accord, a dumping suit could be filed against this firm because its share of total imports of this product is 100 percent, even though its share of the domestic market is only 0.0001 percent. That is, the negligibility requirement is 3 percent of total imports, not 3 percent of the total market.

It is hard to know if firms will file complaints about dumping at such a trivial level in the future. It does appear possible that, if such a 3-percent-of-imports negligibility requirement had been deemed sufficient to determine injury in the past, the number of injury determinations would have been greater than they, in fact, were. Finger (1994, 7) suggests that, had the new GATT 3-percent criterion been the sole standard for evaluating the steel dumping petitions, injury would have been ruled in every case that the United States International Trade Commission rejected in July 1993.

3 Indeed, some antidumping actions have been in force without reconsideration for decades.

4 In the past, both the World Bank and the International Monetary Fund have sanctioned and, at times, even encouraged assistance-seeking developing countries to enact antidumping rules, although antidumping is no longer encouraged.

Although the new GATT accord simplifies the process of disciplining countries that abuse the antidumping and countervailing duty laws, there is little GATT can actually do besides make recommendations.
As with the old GATT agreement, even a recommendation to discipline may not be implemented. Moreover, the dispute settlement mechanism will preclude GATT panels from imposing their own judgments of fact or law on national antidumping authorities when the authorities have acted according to their own laws (U.S. Department of Commerce 1994). Finger and Fung (1993, 1) note that since July of 1993, only five GATT panels have been able to determine illegal antidumping actions, and not one of these actions has since been lifted. This problem is unlikely to change under the new GATT agreement.

Subsidy countervailing actions

Although the Uruguay Round agreement does not define dumping, it does define subsidy, and it differentiates clearly between subsidies that may be countervailed and those that may not. This transparency represents a significant step toward encouraging trade because it lowers the risk of retaliatory surprises. Under the new accord, some subsidies warrant out-and-out prohibition (those that are contingent on export performance or on using domestic inputs), while other subsidies may be grounds for taking action.5 This clarification of prohibited, actionable, and nonactionable subsidies may curtail arbitrary actions that governments could otherwise choose to explain away as subject to their needs for flexibility and discretion.

While the transparency of this portion of the accord moves governments toward freer trade, the accord's peculiar perspective on how subsidies benefit foreign producers does not. The accord focuses on the subsidy's benefit to the recipient, without conditioning this focus on the trade impacts of the benefits. Subsidies do not, in fact, necessarily distort trade just because they benefit trading firms. As Francis, Palmeter, and Anspacher (1991) show, subsidies do not distort trade unless they lower the marginal cost of production. That is, subsidies can benefit shareholders without materially influencing the output produced by the firm or the prices it charges.

5 As a reflection of the powerful agricultural lobby found in many countries, the Uruguay Round agreement's actionable subsidies section does not apply to agricultural product subsidies, as mentioned in Article 13 of the Agreement on Agriculture. In past judgments, this exclusion of agricultural subsidies has resulted in peculiar findings. For example, a Canadian program directed toward subsidizing the poorest 5 percent of the population was judged an unfair trading practice, while U.S. federal water subsidies to agriculture in California's Central Valley have not been judged unfair (see Francis, Palmeter, and Anspacher 1991).

6 For a much fuller elucidation of this issue, see Magee, Brock, and Young's discussion of the voter information paradox. According to their theory, as voter opposition to protectionism becomes increasingly sophisticated, political parties respond with higher equilibrium levels of more opaque distortions.

Table 3 presents a summary of the likely effects of the new GATT accord on U.S. antidumping and countervailing duty actions. As the table summarizes, the overall effect of the accord on U.S. fair trade laws appears to be a modest reduction in the opportunities they offer for out-and-out protectionism.

Conclusions

Despite their limitations, the countervailing duty and antidumping portions of the Uruguay Round accord generally move nations toward freer trade, and it is important to clarify the context in which they do it. Like any broad trade accord, the Uruguay Round accord represents a synthesis of pressures for and against protectionism and, therefore, it includes rules whose effects on trade seem contradictory.
Table 3
The Uruguay Round of GATT: Effects on U.S. Antidumping and Countervailing Actions

New rule: Five-year sunset rule on antidumping duties. After five years, dumping duties will be terminated unless a new review takes place.
Effect: Reduces the likelihood of permanent protection being granted to industries when foreign dumping is no longer present.

New rule: The level below which dumping margins will be ignored (the de minimis rule) rises from 0.5 percent to 2.0 percent.
Effect: Slightly reduces the number of the most frivolous antidumping investigations.

New rule: The level at which sales below cost are considered substantial rises from 10 percent to 20 percent.
Effect: Slightly decreases the number of cases in which foreign market price information is disregarded. May limit frivolous antidumping findings.

New rule: Defines a preference for comparing average domestic prices with average foreign prices or individual domestic prices with individual foreign prices. However, countries can still compare averages with individual prices when spot dumping is alleged.
Effect: Slightly decreases the opportunity to find dumping when prices are identical in the foreign and home markets.

New rule: GATT panels cannot impose their judgments on a country when the country, in its finding of dumping, has acted in an unbiased and objective manner.
Effect: May slightly increase the opportunity to find dumping.

New rule: Dumped imports from all countries will not be considered injurious to domestic firms if they constitute less than 7 percent of total imports.
Effect: Unlikely to have a significant effect on dumping actions.

New rule: Specifically defines those subsidies that are prohibited, those that are countervailable, and those that are not countervailable.
Effect: Decreases the scope of countervailing actions and reduces the possibility of frivolous cases.

One of the most serious problems in trade liberalization is that, as more transparent forms of protectionism are noticed and then negotiated away, rent-seeking groups devise replacements that are less transparent.6 An important incarnation of this phenomenon is administered protection, which often takes the form of countervailing and antidumping actions. This claim should surprise no one, considering that the antidumping and countervailing duty portions of the accord correspond so closely to the United States' legal expressions on the same subjects. After all, the U.S. process involves such detail and obscurity that in one month it has involved seventy-two different investigations just on steel imports. Such a process represents far more opportunities for disguised protectionism than tariffs would, even if forty of the seventy-two investigations did not lead to antidumping or countervailing duty actions.

It is in this context that we may see the administered protection portion of the Uruguay Round as liberalizing trade. Countervailing and antidumping actions often represent abstruse attempts to redistribute welfare from consumers to producers. While consumers benefit from the lower prices of foreign suppliers, domestic producers typically can make more money by charging higher prices, and they typically can charge higher prices when they have less competition from foreigners. But if all this is true, how can it be argued that, on the whole, antidumping and countervailing duty rules in the Uruguay Round accord of GATT represent a move toward freer trade? The Uruguay Round agreement more fully codifies what protectionism is permissible and what is not. The accord provides for dispute settlement and, in a number of cases, offers explicit boundaries between what may and what may not be actionable. As a result, while the accord includes what Finger refers to as "trade restrictions that are now sanctified," it also constitutes trade restrictions that are now specified.
The transparency of the agreement lowers, as we have argued, the risk of what may otherwise be surprise retaliations. The fuller authority of the dispute settlement mechanism— regardless of how limited this authority remains— increases the likelihood that these rules will be followed. There is always the possibility that new, even more fully disguised forms of protectionism will replace the old ones. However, despite the attempts of firms to disguise protectionism, world trade has been increasing. World trade as a share of total world gross domestic product grew from 27 percent in 1970 to nearly 40 percent in 1992. Given the increasing importance of trade to most economies, political momentum is likely to favor more open markets. By making any remaining protectionism more transparent, the new GATT accord reinforces the trend toward a more globalized market. References Anderson, Keith B. (1993), “Antidumping Laws in the United States— Use and Welfare Consequences,” Journal of World Trade 27 (April): 99 –117. Baldwin, Robert E., and Michael O. Moore (1991), “Political Aspects of the Administration of the Trade Remedy Laws,” in Down in the Dumps: Administration of the Unfair Trade Laws, Richard Boltuck and Robert E. Litan, eds. (Washington, D.C.: Brookings Institution). Bhagwati, Jagdish (1988), Protectionism (Cambridge, Mass.: MIT Press). Bovard, James (1993), “Clinton’s Dumping Could Sink GATT,” Wall Street Journal, December 6, A16. ——— (1992), “The United States’ Protectionist Antidumping and Countervailing Subsidy Laws,” Liberty in the Americas: Free Trade and Beyond, Conference sponsored by the CATO Institute, Mexico City (May 19 –22). Carper, Virginia, and Catherine Mann (1994), “Trade Disputes Under the U.S.– Canadian Free Trade Agreement,” mimeo, Board of Governors of the Federal Reserve System. Cooper, Richard N. (1972), “Trade Policy is Foreign Policy,” Foreign Policy 9 (Winter): 18 – 36. 42 Finger, J. 
Michael (1994), “The Subsidies – Countervailing Measures and Antidumping Agreements in the Uruguay Round Final Act,” mimeo, World Bank. ——— (1992), “Dumping and Antidumping: The Rhetoric and the Reality of Protection in Industrial Countries,”World Bank Research Observer 7 (July): 121–43. ———, and K.C. Fung (1993), “Will GATT Enforcement Control Antidumping,” Policy Research Working Paper no. 1232, World Bank, December. ———, H. Keith Hall, and Douglas R. Nelson (1982), “The Political Economy of Administered Protection,” American Economic Review 72 (June): 452 – 66. Francis, Joseph F., N. David Palmeter, and Jeffrey C. Anspacher (1991), “Conceptual and Procedural Biases in the Administration of the Countervailing Duty Law,” in Down in the Dumps: Administration of the Unfair Trade Laws, Richard Boltuck and Robert E. Litan, eds. (Washington, D.C.: Brookings Institution). GATT (1993), GATT Activities 1992: An Annual Review of the Work of the GATT (Geneva: General Agreement on Tariffs and Trade). Gould, David, Roy Ruffin, and Graeme Woodbridge (1993), “The Theory and Practice of Free Trade,” Federal Reserve Bank of Dallas Economic Review, First Quarter. Hufbauer, Gary Clyde, and Kimberly Ann Elliott (1994), “Measuring the Costs of Protection in the United States,” (Washington, D.C.: Institute for International Economics). Magee, Stephen, William Brock, and Leslie Young (1989), Black Hole Tariffs and Endogenous Policy Theory: Political Economy in General Equilibrium (New York: Cambridge University Press). Moore, Michael O. (1992), “Rules or Politics? An Empirical Analysis of ITC Antidumping Decisions,” Economic Inquiry 30 (July): 149 – 466. Murray, Tracy (1991), “The Administration of the Antidumping Duty Law by the Department of Commerce,” in Down in the Dumps: Administration of the Unfair Trade Laws, Richard Boltuck and Robert E. Litan, eds. (Washington, D.C.: Brookings Institution). Palmeter, N. 
Richard Alm
Journalist, Dallas, Texas

David M. Gould
Senior Economist, Federal Reserve Bank of Dallas

The Saving Grace

A nation’s saving matters. Money set aside from today’s consumption can be invested, through financial markets, in productive assets embodying the latest innovations. A newer and better capital stock can provide the fuel to sustain higher rates of growth and improve living standards. In a nutshell, that’s the current view of many economists, one that places saving among the most important pillars of a nation’s long-term economic health.

Michael Mussa, economic counselor and director of research at the International Monetary Fund, summed it up: “Why is saving important? Primarily because investment is important.…Growth tends to be high in economies where savings and investments are both high and reasonably well deployed, and growth tends to be poor in economies in which savings and investments are low or not well deployed.”

The past decade has brought a greater appreciation of the beneficial role of saving in an economy. Before that, many economists, drawing on Robert Solow’s 1956 work, argued that saving didn’t contribute all that much to growth. Additional saving might increase the capital stock and raise living standards, but it didn’t boost long-run growth prospects.
In the 1980s, endogenous growth theorists began to see that additional capital, both physical and human, gave society a growth bonus, usually related to more rapid technological progress. Now, most economists recognize saving as an important factor in growth as well as in standards of living.

This shift in the profession’s views on saving’s role in economic growth puts a spotlight on a number of related issues:

• What are the key factors that have positive and negative influences on saving?
• Should government’s role in creating a better environment for saving be one of intervention or one of financial liberalization?
• To what extent can a country make up for low domestic saving by tapping into the savings of other countries?

To explore these topics, the Federal Reserve Bank of Dallas invited economists, bankers, and officials from the United States, Latin America, and Europe to a symposium on “The Role of Saving in Economic Growth,” held March 18–19, 1994, at Houston’s Woodlands Conference Center. This article summarizes the proceedings.

In a statement to open the conference, Federal Reserve Chairman Alan Greenspan said: “We need to understand better the role of various factors determining saving currently and in the past so we can shape policies that encourage rather than discourage saving and investment. Only in that event will we be able to achieve sustainable increases in real output and standards of living for our respective countries.”

America’s low saving rate

Real-world observations on saving vary over time and among countries. The national saving rate, measured as a percentage of gross national product (GNP), is low in the United States compared with other industrial countries and most of the developing world. In recent years, U.S. saving has hovered around 15 percent of GNP. In European nations, the rates are slightly higher.
In Japan and other Asian economies, the figure has often exceeded 30 percent over the past two decades.

The low U.S. saving rate troubles William J. McDonough, president of the Federal Reserve Bank of New York. “The saving decline has occurred across the board—by households, by business in the form of retained earnings and by government, federal as well as state and local,” he said. McDonough cited Fed research estimating that the U.S. economy would have gained $300 billion a year, or 5 percent of potential output, if the saving rate of 1961 to 1981 had prevailed during the 1980s.

Interestingly, the United States hasn’t always been a low-saving country. From the end of the Civil War to World War II, saving and investment were much higher in the United States than in Europe or Japan. The U.S. postwar experience isn’t unique, however. Other Western industrial countries show a similar decline in saving. For both the United States and Europe, a slight slippage of private saving has been worsened by growing public-sector dissaving, or deficit spending.

The situation in developing countries is much different. There, a general upswing in saving and investment has become associated with a quickening pace of economic growth, particularly in East Asia.

The keys to saving

A nation’s savings include money individuals put into banks or invest in stocks, bonds, and other financial instruments. Companies also save in the form of retained earnings, but for the most part, the business sector taps into society’s savings for capital spending. The public sector contributes to a nation’s saving rate, either positively or negatively. Government borrowing for current spending can siphon funds away from productive investment, reducing the benefits of saving and investment to the economy. Conventional statistics often miss other spending that might properly count as saving.
The list includes the acquisition of consumer durables and investments in human capital, especially education and training. Infrastructure projects add to a nation’s productive assets, too.

Saving depends on myriad factors, many beyond the easy control of policymakers. In his remarks, Greenspan identified many of the influences. Demographic characteristics, such as age and the population’s average income, play a role. So might the riskiness of available assets. Uncertainty about jobs and future income can lead to extra saving as individuals seek additional security. Inflation can induce households to save more to make up for the erosion of nominal assets’ values, but it also redirects funds into such unproductive activities as land speculation. Financial institutions’ stage of development will determine how efficiently the savings of the private sector can be channeled to their best uses. The openness of the financial sector will affect how well an economy can attract savings from around the world.

Lamberto Dini, director general of the Bank of Italy, added psychological and cultural factors to the list of influences on saving. In ethics and religion, saving is praised. Personal experience is also probably important: the survivors of the Great Depression or World War II “developed a deeply rooted sense of prudence which led them to be more frugal than those who have no memory of the hardship brought by these tragic events,” Dini said.

Guillermo Barnes, director general of development planning at Mexico’s Ministry of Finance, said culture shapes saving behavior, too. In Mexico, he said, the father of a bride often will deplete his life savings on a three-day wedding party.

Institutional arrangements might also affect saving. In Asia, postal savings systems do a better job than banks in collecting the funds of small savers.
In Chile in 1980 and Mexico in 1992, fully funded pension plans for individuals replaced taxpayer-funded schemes, simultaneously increasing the propensity to save and reducing the tendency toward public-sector dissaving.

Dini offered an explanation for the decline in saving in the Western industrial countries. In Italy, where private saving fell from 18 percent to 12 percent of net domestic product in three decades, studies show that a slowdown in economic growth explains nearly two-thirds of the erosion of private saving. A threefold increase in benefits for the elderly accounted for an additional third. Another factor, more prevalent in other countries, might be financial liberalization, which allows households greater access to credit. As people buy more, their saving falls. In Italy, consumer credit is still rather thin, and Dini estimates that freeing it up would slice 2.5 percentage points off the saving rate. Household saving dropped 6 percentage points in the United Kingdom with financial deregulation between 1984 and 1988.

Society’s changing institutions, moreover, might help explain declining saving rates. Greater availability of insurance and welfare programs may lead individuals to spend more freely, believing they are protected against most misfortunes.

Dini rejected the aging of the population as a significant factor in the decline in saving. Overall, he concluded, “I do not believe that private saving rates will recover significantly in the industrial countries.” If saving fosters economic growth, governments will be tempted to try to induce more saving by offering incentives, often in the form of tax breaks. Dini contended that such rainmaking programs are more likely to alter the allocation of savings from one market to another than to increase its aggregate level.
Economic policy is more likely to stimulate saving by pursuing general objectives of stability and financial market flexibility than by offering specific incentives to savers, he concluded.

Latin America’s experience

The economies of Latin America provide a prism for viewing saving and investment. Over the past 15 years, the region went through crisis and recovery. Many economic relationships were distorted by bad policies, then restored by good ones. What’s perhaps most intriguing, from a policy perspective, is saving’s relationship to macroeconomic performance. In general, stability, with low inflation and responsible fiscal policies, favors saving and investment, both foreign and domestic. A wild ride of inflation and excess spending skews saving and investment decisions, eventually strangling growth.

Axel Leijonhufvud, a professor at the University of California, looked at Latin America’s record in the 1980s, finding that high inflation created massive uncertainty. The responses included shortening the length of contracts and avoiding certain types of transactions altogether, particularly long-term ones. In Argentina in the late 1980s, when inflation approached 30 percent a month, it was difficult to find much lending beyond 30 days. Capital flight is another way of dropping out of a risky, chaotic market. Leijonhufvud drew another, more hopeful lesson from Argentina’s recent history: once stability returned to the economy, functional financial markets reemerged very quickly.

Vittorio Corbo, professor of economics at Catholic University of Chile, pointed out that his country’s recent experience shows that saving does swing with an economy’s ups and downs. Chile’s gross national saving rate fell from 12 percent in the late 1970s to 7 percent in 1981 and to an all-time low of 2.4 percent in 1984. The country suffered from external shocks and a sharp recession.
The economy recovered in the mid-1980s, becoming the strongest in Latin America, and the national saving rate rose to a historical peak of 22.5 percent in 1992. Indeed, most empirical work suggests a strong correlation between a nation’s saving and the growth rate of income, although the direction of causality can’t be easily untangled. “It is a virtuous circle,” Corbo said. “A higher saving rate makes possible a higher investment rate, and [a] higher investment rate in a low distorted policy environment results in a higher growth rate, and the higher growth rate results in a higher saving rate, and so on. The challenge is to get the process started.”

Ariel Buira, director of international relations at the Bank of Mexico, presented a study of factors shaping saving in his country. Mexico shares many of the characteristics of developing countries, especially those in Latin America. Its saving rate is relatively high now, at 22 percent to 25 percent of GDP, and the economy suffered through several financial shocks in the 1980s. Buira finds savings positively correlated with income when looking at data for the period since 1965. The public sector also has a big effect. Each 1-peso reduction in deficit spending led to a 44- to 54-centavo increase in national saving. However, private saving fell by 46 to 54 centavos, revealing a trade-off between government and private saving that’s less than one to one. “Public saving only partially crowds out private saving,” observed John Welch, vice president and market analyst at Lehman Brothers.

In Mexico, as in many other countries, there’s an inverse relationship between saving and rising wealth, and between saving and the proportion of the population over age 65. These results support a life-cycle explanation for saving, which holds that people save to ensure adequate consumption after their working days end.
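The crowding-out arithmetic Buira reported can be sketched as simple accounting (a minimal illustration of the reported figures, not Buira's estimation; the function name is ours, and the reported net gain of 44 to 54 centavos does not net exactly against the 46- to 54-centavo private offset):

```python
# Sketch of partial crowding out: a 1-peso cut in the public deficit
# raises public saving by 1 peso, but private saving falls by some
# offset, so national saving rises by less than the full peso.

def national_saving_change(deficit_cut, private_offset_per_peso):
    """Net change in national saving: public gain minus private offset."""
    public_gain = deficit_cut
    private_loss = private_offset_per_peso * deficit_cut
    return public_gain - private_loss

# Private offsets of 46 to 54 centavos per peso imply net gains of
# roughly 0.46 to 0.54 peso, close to the reported 44- to 54-centavo range.
for offset in (0.46, 0.54):
    print(round(national_saving_change(1.0, offset), 2))
```

Because the offset is less than one for one, deficit reduction still raises national saving on net, which is the sense of Welch's remark that public saving only partially crowds out private saving.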
The Mexican experience shows a higher saving rate for earners of nonwage income than for workers, but the difference may not be all that important. Laborers still put away 19 percent of their pay, a figure not too far below the general rate. Inflation-adjusted interest rates have only a marginal impact on saving.

Buira’s research provided insight into some unusual aspects of Mexican saving. The first involved the effects of financial upheaval. Saving was higher than it probably should have been from 1981 to 1985. Buira suggested Mexicans realized that income gains in 1981 were transitory, so they saved them. A severe contraction of credit and real wages boosted saving after 1982. Saving fell below its predicted path in 1986, largely because of a rise in public dissaving. The second phenomenon is a troublesome decline in Mexico’s saving in the late 1980s and early 1990s. A host of factors might be at work, but Buira stressed two of them. An increase in wealth from a boom in stock market and real estate prices led to more consumption and less saving. A cut in the budget deficits left private-sector incomes lower, thus reducing the ability to save.

Barnes added that it will be important to determine whether some of the factors affecting Mexico’s saving and investment are temporary or permanent. In his mind, the reduction in public-sector dissaving will last. The increased consumption of durable goods from abroad owes itself to pent-up demand and may not endure. “In Mexico, we are convinced that savings are a necessary but not sufficient condition for growth, and those savings have to come from the country itself,” Barnes said. “In Mexico, we are also convinced that the financial sector plays a crucial role in the saving and investment process. Therefore, important efforts have to be made to have a more efficient and competitive financial system.”

A high saving rate, by itself, isn’t enough to guarantee growth and progress.
Societies must funnel resources toward productive uses. Many of the centrally planned economies established good saving performances over the past twenty-five years, but the absence of market mechanisms channeled investment into inefficient and economically wasteful projects. The economies stagnated and eventually collapsed. Markets are not foolproof either. Excessively cheap lending by the U.S. savings and loan industry in the 1970s and early 1980s left an embarrassing legacy of unwanted real estate. “It’s a mistake to believe that there is an automatic and inflexible link between a high level of saving and a high rate of growth,” said the IMF’s Mussa.

Government’s role: Repression or liberalization?

If saving and investment are a big part of what makes an economy grow, the have-not nations aspiring to join the world’s haves have plenty of reasons to learn what they can on the subject. Interestingly, the saving part of the equation isn’t necessarily a problem in developing nations. Most poor countries outdo the wealthier ones by setting aside a larger portion of GDP. Among the reasons: populations tend to be young, people don’t have the safety net of the rich nations, and consumer credit is scarce.

With saving usually available, the critical element for growth will be how well a society mobilizes its savings and directs them toward productive uses. That, of course, will depend on the institutions, regulations, and practices that shape financial markets. A crucial question is whether governments can do a better job than financial markets in allocating savings and investment. If so, the creation of efficient financial markets can be left on a back burner. In the early post–World War II years, policies aimed at directing saving and investment were popular. Central banks in many poorer nations kept interest rates artificially low, with the intent of promoting additional investment.
Regulations restricted capital flows in an attempt to keep resources at home. Various government schemes tried to channel money into preferred projects.

Do these policies work? The real world seems to offer many contradictions. Japan in the 1950s and 1960s and Korea until the mid-1980s apparently succeeded with interventionist governments. Hong Kong and Singapore had less meddling but still developed rapidly. Many countries with interventionist policies had initial success in boosting growth rates but later paid a heavy price as their economies crumbled: the former Soviet Union in its heyday; Argentina, Brazil, Mexico, and other Latin American countries in the late 1970s; Poland and Yugoslavia in the 1980s.

Allan Meltzer, professor of political economy and public policy at Carnegie Mellon University, made the point that it’s difficult to make an ironclad case either for or against intervention in saving and investment. Theoretical propositions are contradictory; the evidence of experience is inconsistent. For example, relying on private capital markets instead of government borrowing or guarantees in Latin America might have produced slower growth in the 1970s. Market mechanisms, however, probably would have yielded faster growth in the 1980s, when governments had to stifle demand and investment to keep creditors at bay.

The state may indeed direct resources to efficient uses, especially when investing in technologies that have proved their worth elsewhere. Even so, government-directed saving and investment raises a number of problems. Low interest rates might inhibit saving and stifle development of the financial sector. Money can be diverted to less efficient or wasteful projects. Opportunities for political intervention, favoritism, and corruption increase with government meddling in financial decisions.
Overall, Meltzer concludes that “repressed financial systems” haven’t offered an advantage over liberalized financial markets: “Countries that allow interest rates to respond to market forces do not pay a penalty for higher rates; they generally benefit by getting greater efficiency (or more output) per unit or dollar invested.” Meltzer sees the value of banks and other financial institutions: “Developed financial markets increase efficiency by saving transaction costs, by eliminating the costs of barter, by reducing costs of acquiring information, and increasing the efficiency of investment.” Yet the benefits don’t make a case for activist policies to promote the expansion of the financial sector itself. “When there is sufficient demand for a particular service, a competitive market will supply the service,” Meltzer said. “Government can help to keep financial markets competitive by permitting entry and expansion of domestic and foreign intermediaries and can increase efficiency by reducing regulation, reserve requirements and interest rate controls.”

Mexico’s liberalization. Agustín Carstens, director general of economic research at the Bank of Mexico, agrees. In his mind, the government’s role ought to be limited to offering efficient judicial, regulatory, and supervisory systems. “This type of government intervention is necessary to keep financial institutions from overexposing themselves, and the wealth of their depositors, to risks that might be higher than socially desirable,” Carstens said.

In developing countries, intervention for many years had gone well beyond this, but a wave of financial liberalization gained momentum across Latin America in the 1980s. Mexico entered the decade with a mass of interest rate restrictions, domestic credit controls, fragmented financial markets, and high reserve requirements. Compulsory lending to the public sector crowded out credit to the private sector.
Mexico ended the decade by letting markets set interest rates. It reprivatized its banks. It eliminated reserve requirements on bank deposits in 1989 and a liquidity ratio in 1991. The government encouraged the development of new financial intermediaries and the establishment of new commercial banks. The country had 18 banks at the time of privatization; it will end 1994 with 55 to 60 institutions, including as many as 25 subsidiaries of foreign banks. In addition, the North American Free Trade Agreement (NAFTA) will continue the opening and liberalization of Mexico’s financial structure.

Liberalization hasn’t solved all of Mexico’s financial market problems. For example, there’s still a scarcity of long-term saving, but Carstens is counting on the government’s new system of retirement saving to help. Barnes pointed to another risk: the inability of regulators to keep up with changes in the financial marketplace. “Financial innovation runs rapidly,” he said. “Regulation doesn’t run rapidly. Sometimes supervision in practice may lag behind. This is where problems start.”

Did financial liberalization spur growth in Mexico? In a statistical study, Carstens did find a correlation between the new policies and a burst of economic activity in the late 1980s. Even so, the role of the reforms isn’t clear. While freeing up financial markets, Mexico also pursued an aggressive stabilization program, cutting inflation and deficit spending. “It is difficult to distinguish between the effects of financial liberalization and those of the economic adjustment program on financial variables,” Carstens said. In the end, proof that freer financial markets make a positive contribution to growth awaits better theoretical or empirical foundations. Carstens’ practical advice: “Policymakers should act as if its contribution were meaningful.
The social costs of not acting accordingly can far outweigh any benefits.”

Liliana Rojas-Suarez, deputy division chief for capital markets and financial studies at the International Monetary Fund, noted that initial conditions shape financial liberalization. Often, the legacies of years of government intervention in banking and finance don’t give financial reforms solid ground on which to start. Banks might hold assets lent at below-market rates, or they may be plagued by nonperforming loans. There might be stifled demand for credit. “The problem with liberalizing financial markets is that after years and decades of financial market repressions, the financial sector didn’t know how to behave as intermediaries,” Rojas-Suarez said. “The issue is not ‘to liberalize’ or ‘not to liberalize’ but when to liberalize.”

In Argentina, for example, banking problems triggered an intervention that led to hyperinflation because the overexpansion of credit did not stop. Chile faced a similar crunch but avoided a price explosion by injecting money on the condition that banks restructure and reform themselves. Importantly, real interest rates remained positive in Chile, so the country did not go through the hyperinflation of excessive credit creation.

According to most economists, reducing deficit spending can benefit saving in two ways. Directly, it will reduce the drain on domestic saving caused by the need to finance the public sector. Indirectly, cutting red ink will reduce the excesses that often lead to high inflation. Public indebtedness plagues just about every Western industrial nation. In the United States, a succession of deficits since 1960 has left public debt at 60 percent of annual gross domestic product (GDP), with perhaps an additional 40 percent in invisible liabilities such as Social Security and public pensions.
“A key issue in terms of improving the national saving performance and making room for the finance of a higher level of investment needs to focus on diminishing both the visible and invisible components of public-sector dissaving,” Mussa said.

A contrarian view came from Robert Eisner, a professor at Northwestern University. He argued that most notions of saving and investment neglect the driving force of economic growth. There’s little incentive for companies to invest in a stagnant economy and thus little need for additional saving. “If output stops growing, the stock of capital cannot increase,” Eisner said. “Perhaps the decline in the net saving ratio is a consequence, not a cause, of the slowing of our rate of growth.”

Eisner’s emphasis on growth leads to an iconoclastic slant on deficit spending with respect to national saving. In 1992, gross saving and investment in the United States totaled $741.4 billion: $986.9 billion in private saving, less $269.1 billion in public dissaving (plus a statistical discrepancy). In the conventional view, raising taxes or cutting government spending would reduce budget deficits and thus increase saving. Eisner doubts it. Raising taxes would leave Americans less to save. Cutting spending would reduce incomes and the ability to save. Furthermore, when consumers have less to spend, they buy less, hurting businesses’ sales, production, and investment. Eisner asks: “Is the Chrysler Corp. going to invest more or less if you stop buying?” As a result, Eisner opposes cutting the budget deficit as a remedy for America’s low saving. Quite the contrary, he sees the nation’s problem as slack growth. It would be a mistake, then, to reduce the government’s economic stimulus.

Another issue arises from the failure to recognize that some government spending is properly regarded as investment, such as education, infrastructure, and health and research. With such spending included, government dissaving falls to $96.7 billion.
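The 1992 figures quoted above fit the standard saving identity, with the statistical discrepancy closing the gap. A quick arithmetic check (amounts in billions of dollars; the variable names are ours, not official national-accounts labels):

```python
# Check the 1992 U.S. saving figures as quoted in the text
# (billions of dollars; names are ours, not NIPA labels).
private_saving = 986.9
public_dissaving = 269.1
gross_saving = 741.4  # reported total, which includes a statistical discrepancy

# Gross saving = private saving - public dissaving + statistical discrepancy,
# so the implied discrepancy is whatever closes the identity.
implied_discrepancy = gross_saving - (private_saving - public_dissaving)
print(round(implied_discrepancy, 1))  # 23.6
```

The implied discrepancy of about $23.6 billion confirms the three quoted figures are mutually consistent once the parenthetical discrepancy is counted.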
Budget cutting will reduce public-sector investment, Eisner said, and the decline will not be offset by the private sector. The economy will suffer. In an empirical analysis, Eisner finds a positive relationship between deficit spending and investment: each percentage point of red ink as a portion of GDP added more than 1.2 percentage points to gross private domestic investment as a portion of GDP. Eisner’s bottom line: “The solution to imagined or real problems of an insufficiency of saving would not appear…to be found in reducing or eliminating the budget deficit, or in monetary tightness to slow down the economy.”

Thy neighbor’s saving

Today, money can move quickly across borders. The opening of financial markets and new technology have made it easier for investors to seek higher rates of return outside their own countries. Companies routinely invest in enterprises abroad, and individuals buy stocks or bonds on overseas markets. One view of the world envisions a great savings pool: every saver throws surplus income into a pot, and those with projects to finance dip into the pot, at least as long as the investment yields a positive value at prevailing interest rates. Conference participants doubt this is the way the world works, even in an age of highly integrated financial markets. Low-saving countries aren’t likely to make up for their shortfalls by tapping the saving of foreigners. Dini said: “My own view is that we will certainly see greater international integration and mobility of capital, but also that it would be illusory and dangerous to believe external capital can substitute [for] rather than complement domestic saving.”

National savings still vital. Mussa assessed some of the evidence against the notion of a single savings pool. The international ebb and flow of capital shows up in each country’s balance of payments statistics.
Capital-importing countries run a current account deficit, and capital exporters run a current account surplus. In recent years, the United States emerged as a major magnet for foreign money, with a current account deficit as high as $168 billion in 1987. Japan has become the world’s largest capital exporter, running a current account surplus of $140 billion in 1993. Even so, current account deficits or surpluses rarely exceed 3 percent of GDP for industrial countries, meaning that net capital flows generally aren’t a dominant factor in any country’s total savings and investment.

A single savings pool, moreover, would send money flowing here and there until all countries offered the same rate of return. However, inflation-adjusted returns differ from one country to another, even for publicly traded assets. Once again, the evidence is that financial markets retain a national character. What’s more, some types of investment, such as reinvested profits and improvements in human capital, don’t generally flow through financial markets. Mussa concluded: “A national economy such as [that of] the U.S. cannot escape the implications of a low saving rate by expecting to draw on the world pool of saving. If saving is low in the U.S., that will translate into an effect on investment in the U.S. and, in effect, on growth.”

With foreign investment no longer anathema, developing countries are opening their financial markets and welcoming money from overseas. In some capitals, the foreign funds are regarded as a linchpin for growth. The inability of foreign savings to compensate for low domestic savings carries a message for these nations: the emerging economies may be able to get some help from foreign investors, but their own savers will have to bear most of the burden of supplying capital to fuel growth. The same applies to the former Soviet republics, Eastern Europe, and China, which will have enormous needs for new investment.
Mussa estimated it would require $8 trillion just to raise living standards in the former Soviet Union and Eastern Europe to half those of Western Europe, and in China to a third of those levels. In each case, virtually all of the money will have to come from the country itself, Mussa said.

Moises Schwartz, the Bank of Mexico’s deputy manager for monetary policy, worries that Latin American countries in the 1990s may be relying too little on domestic savings and too much on foreign capital. “This source of financing can disappear rapidly,” he said. According to Schwartz, another problem could be the bidding up of currency values, which may dampen growth for countries that are looking to exports for economic development.

Stephen H. Axilrod, vice chairman and director at Nikko Securities Co. International in New York, agreed that long-term reliance on foreign capital is a chimera, but he contended “there are moments in time where you can get real benefit from net flows of capital from abroad.” The United States in the 1980s and early 1990s might be a case in point. The country ran huge budget deficits at a time of sagging saving. Private investment didn’t suffer as much as it might have because the country was able to run current account deficits, a sign of importing capital. “I am beginning to think it helped to protect our standard of living to a degree, while we were going through a rather radical restructuring of our domestic industry, thereby in the end making us more competitive.”

Axilrod acknowledged that all foreign money isn’t equal. Countries should prefer direct investment, which brings skills and technology helpful to development, over portfolio investment, which may bring general savings from abroad but can be highly volatile. A herky-jerky flow of foreign money can be unsettling for an economy, especially one that’s not fully developed.

The 1980s roller coaster. The Latin American debt crisis is evidence that money from overseas isn’t always a blessing.
A great inflow of other people’s savings came into Mexico, Argentina, Brazil, Chile, and other countries in the wake of the oil boom in 1979, lent largely through international banks. In 1981, for example, Chile experienced a capital inflow equal to 15 percent of GDP. Economic growth did perk up, for at least a little while. When economic shocks caused international lenders to lose faith in Latin America, the money stopped, and Latin America suffered through a miserable decade of hyperinflation, stagnant output, and falling standards of living. “It was worse in some countries than the Great Depression was in the United States,” UCLA economist Arnold Harberger said.

If the money flows hadn’t been so large, the problems might have been smaller, but Harberger finds that structural factors and policy responses worsened the crisis. Latin America’s dependence on agriculture and mining, for example, made adjustment to the slowing of foreign investment especially difficult. Unlike manufacturing, these industries can’t quickly increase exports to generate foreign exchange: supply is inelastic, and it takes a long time to find alternatives to foreign money. Korea, a manufacturing dynamo, also had debt problems in the early 1980s, but it didn’t suffer nearly as much because its factories could quickly make up for any change in investment flows. Harberger also sees a “hot stove syndrome” in Latin America: citizens burned time and again by the economy’s zigs and zags adopt a short-term planning horizon that only adds to instability.

On policy matters, Harberger focused on inflation-adjusted exchange rates. With flexible exchange rates, a big inflow of capital ought to force an appreciation of a country’s currency, and a sharp slowdown calls for a depreciation. In Latin America, however, exchange rate policies tended to aim at stability in nominal terms against the dollar.
When negative shocks hit, the adjustment that would normally come through the nominal exchange rate is short-circuited, leaving the adjustment to occur through falling domestic prices. If the deflation entails an economic slowdown that’s too difficult for the government to handle, then there’s usually a sudden, large devaluation. When they come, the devaluations, inflations, or other shocks are huge.

Argentina in the 1980s provides an example of misguided policy. The government allowed the exchange rate against the dollar to slip on average just 1.25 percent a month. Domestic prices rose much faster, at about 6 percent a month, in part due to the stimulus from the capital inflows. In effect, Argentina pursued a policy of paying a real return of 4 percent to 4.5 percent a month to holders of its currency. Had the authorities managed the real exchange rate by allowing a devaluation of 6 percent a month, Harberger said, the later collapse and crises could have been avoided, or at least substantially reduced.

The key to avoiding crises lies in stabilizing real exchange rates. A few countries have done it, but Harberger argued that monetary instruments alone will be insufficient. Many Latin American countries in the past resorted to trade restrictions. Interest rates are another tool, but they can be only partly effective in countries that aren’t fully integrated with world capital markets. These actions may not be the wisest course for a region that relies on agriculture and mining instead of manufacturing and that embodies the short-term outlook of the hot stove syndrome. Obtaining the desired results on real exchange rates, Harberger said, may take a dose of strong medicine—a 200-percent tariff or a very large increase in interest rates. These, however, could have very costly side effects. “In the end, the equilibrium real exchange rate has its own life, and it’s hard to influence by instruments that we like,” Harberger said.
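Harberger’s Argentina arithmetic can be checked directly: with domestic prices rising about 6 percent a month while the peso slipped only 1.25 percent against the dollar, the currency appreciated in real terms month after month. A minimal sketch in Python; the 0.3 percent monthly U.S. inflation rate is an illustrative assumption, not a figure from the article:

```python
def monthly_real_appreciation(domestic_inflation, nominal_depreciation,
                              foreign_inflation=0.0):
    """Real appreciation of the domestic currency per month: how much
    faster domestic prices rise than the nominal depreciation (plus
    foreign inflation) can offset."""
    return (1 + domestic_inflation) / (
        (1 + nominal_depreciation) * (1 + foreign_inflation)) - 1

# Harberger's figures for 1980s Argentina: ~6% monthly inflation,
# ~1.25% monthly slippage against the dollar. Assuming roughly 0.3%
# monthly U.S. inflation (an illustrative figure), the result lands
# in the 4-4.5 percent-a-month range the article reports.
gap = monthly_real_appreciation(0.06, 0.0125, 0.003)
print(f"real appreciation: {gap:.2%} per month")
```

Compounded over a year, a gap of that size overvalues the currency by more than 60 percent, which is why the eventual correction tends to arrive as a sudden, large devaluation.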
Another debt crisis can’t be ruled out, especially in a region that’s getting a strong flow of foreign money. Yet Harberger contended that recent changes in the region make it less likely. For starters, with more manufacturing, there’s been a diversification away from agriculture and mining. The stability of economic policy will improve long-term confidence. Finally, financial reforms are allowing markets to set interest rates and exchange values, lessening the prospects for policies that allow pressure to build.

Roque Fernandez, chairman of the Central Bank of Argentina, said countries that maintain sound economic policies at home have a better chance of avoiding destabilizing capital flows. In Argentina, economic reforms of the late 1980s were negated by massive government borrowing and hyperinflation. “All of the saving and time deposits were government debt,” Fernandez said. “Any expectation of inflation that built up in nominal interest rates or produced higher interest rates was an increase in the deficit that sooner or later would have to be repaid by printing money.” The government failed to convince Argentines that it was serious about rectifying the fundamental imbalances in the economy, and the national pastime became protecting wealth by investing in dollars. Government policy became an exercise in trying to stop capital flight.

The current reform effort in Argentina has the confidence to allow unlimited convertibility of pesos into dollars. “We just gave up the idea of forcing people to hold pesos,” Fernandez said. “It was impossible to control capital flight outside the country.” Most significant, in Fernandez’s view, the internationalization of the capital market has been accompanied by a fiscal reform that has eliminated crowding out of private borrowing by government debt. As a result, Argentina isn’t likely to fall into the short-term trap of raising interest rates to prevent a temporary ebb of money overseas.
Capital flows into Argentina are coming to the private sector, not the government. Easy convertibility does pose risks. Argentina will import any instability that might affect the United States, and there’s a chance of renewed capital flight, presenting Argentina with Harberger’s dilemma of deflation or devaluation. Fernandez contends Argentina would be better off maintaining its fixed exchange rate policy and weathering any decline in the domestic economy. Failure to honor the pledge of convertibility would carry the additional burden of eroding the hard-won credibility of the government’s fiscal policies. “We believe one sure way of having a reversal in the capital flow is to have a reversal of the structural reforms,” Fernandez said. “If we go back to the old policy of nationalization of public enterprises or running our economy with big deficits financed by printing money, surely we will have a real depreciation of our currency.”

The arrival of NAFTA. The North American Free Trade Agreement will affect saving and investment in the United States, Mexico, and Canada. Eventually, it may affect other countries as well if free trade expands farther into Latin America. According to Edward W. Kelley, a member of the Federal Reserve System’s Board of Governors, NAFTA will facilitate the integration of the continent’s financial markets through provisions for capital mobility, unrestricted market entry, and effective but nondiscriminatory regulation. According to Barnes, Mexico expects to benefit: “NAFTA will create a better and more competitive financial system in Mexico, improve the financial technology and innovation.” The new openness should improve the allocation of resources in North America. In fact, emerging patterns of cross-border money flows can already be detected. After implementation of the U.S.–Canada free trade pact in 1989, the United States quickly became a net exporter of capital to Canada for the first time in a decade.
Mexico’s inflows began rising even before the trade deal became official, and Kelley expects the movement of money to the south to continue.

Edwin M. Truman, staff director of the Division of International Finance for the Fed’s Board of Governors, said NAFTA brings together nations that might not be setting aside enough money to meet their investment needs. “The first thing, perhaps, we should worry about is the fact that all three countries have declining saving rates,” he said. “From that perspective, some have suggested that this [NAFTA] is not the ideal combination of countries.” Truman noted, however, that under NAFTA’s market integration there is increased mutual interest in the success of policies that are beneficial to all partner countries.

The new trade agreement might expose the United States, Mexico, and Canada to real or financial shocks beyond their immediate control. Both Kelley and Truman stressed that North American financial integration puts a premium on policy consistency and cooperation. Truman saw a need for greater cooperation in banking supervision, including such topics as interstate banking in the United States. “Policymakers must be on their toes, alert to deal with problems, real and perceived, anticipated and unanticipated,” he said. Kelley contends that sound macroeconomic policies will become more important. “A country with inappropriate or unstable policies, such as persistent fiscal deficits or low domestic savings, may have difficulty in attracting foreign investment, especially if investors perceive significant risk of repayment problems,” Kelley said. “Such an economy may also experience capital flight.” Monetary policies, moreover, need to keep price increases from diverging too far from the inflation of neighboring nations.

In the past seven years, Mexico eliminated its budget deficit. U.S. attempts to reduce its red ink have been less successful.
The Canadian deficit, at 5 percent of GDP, presents a challenge to the new government of Prime Minister Jean Chretien. If the United States and Canada can reduce their deficits, public borrowing will cease to dominate the capital flows in North America. “As the largest economy in North America, the United States must pursue sound policies, not just in its own interests but also because U.S. policy actions can have serious repercussions for its regional partners,” Kelley said.

The banking sectors in Mexico and Canada are much smaller than that of the United States, and both countries will face the possibility of new competition with the opening of their markets. John Chant, a professor at Simon Fraser University in British Columbia, expects NAFTA to have little impact on Canada’s domestic banking industry. Nationwide banking gives Canada’s institutions the size they need to compete, and their extensive, geographically dispersed networks of branches make it expensive for newcomers to gain a foothold in the market. “Canadian banks will not have to worry much about the home front,” Chant said. Rather than inroads by U.S. banks, Chant foresees opportunities for Canadian banks in the United States, especially with the removal of barriers to interstate banking. Canadian banks are already established in the United States, and they understand well how to operate branch systems.

Ricardo Guajardo, director general of Grupo Financiero Bancomer in Mexico City, believes that Mexico will experience a significant increase in U.S. and Canadian competition in both the consumer and corporate markets. NAFTA phases in the opening of Mexico’s market, giving domestic banks some breathing room, but eventually they will have to adjust. “We have to reorient the way we do business,” Guajardo said. “We have to obtain a high degree of specialization.
We have to have a very clear focus on where we can compete and where we cannot.”

Conclusions

The Dallas Fed’s conference on saving coalesced around several conclusions. Saving is important to economic growth because it promotes investment and technological progress. Many factors influence saving, but from a policy perspective, low inflation, sound fiscal policy, stability, and financial liberalization increase at least the efficiency of saving. Even in a world of increasingly large cross-border capital flows, nations still rely overwhelmingly on their own domestic savings. Open capital markets carry risks, but the risks will be minimized in countries that avoid excesses in fiscal and monetary policies.

In most respects, these conclusions mirror the ideological trends shaping the 1990s. Countries in most parts of the world—especially in Latin America—are moving away from reliance on government and toward free market policies that emphasize macroeconomic stability. These policies may not be coming into favor primarily with saving in mind, but it is reassuring to know that they help with what’s now recognized as an important component of an economy’s long-term prospects.