FEDERAL RESERVE BANK OF DALLAS
Second Quarter 1993

Regulation, Bank Competitiveness, and Episodes of Missing Money
John V. Duca

What Determines Economic Growth?
David M. Gould and Roy J. Ruffin

The Pricing of Natural Gas in U.S. Markets
Stephen P. A. Brown and Mine K. Yücel

Investing for Growth: Thriving in the World Marketplace
Fiona D. Sigalla and Beverly J. Fox

This publication was digitized and made available by the Federal Reserve Bank of Dallas' Historical Library (FedHistory@dal.frb.org).

Economic Review
Federal Reserve Bank of Dallas

Robert D. McTeer, Jr.
Tony J. Salvaggio
Harvey Rosenblum
W. Michael Cox
Gerald P. O'Driscoll, Jr.
Stephen P. A. Brown

Economists: Zsolt Becsi, Robert T. Clair, John V. Duca, Kenneth M. Emery, Robert W. Gilmer, David M. Gould, William C. Gruben, Joseph H. Haslag, Evan F. Koenig, D'Ann M. Petersen, Keith R. Phillips, Fiona D. Sigalla, Lori L. Taylor, John H. Welch, Mark A. Wynne, Kevin J. Yeats, Mine K. Yücel

Research Associates: Professor Nathan S. Balke, Southern Methodist University; Professor Thomas B. Fomby, Southern Methodist University; Professor Scott Freeman, University of Texas at Austin; Professor Gregory W. Huffman, Southern Methodist University; Professor Roy J. Ruffin, University of Houston; Professor Ping Wang, Pennsylvania State University

Editors: Rhonda Harris, Virginia M. Rogers

The Economic Review is published by the Federal Reserve Bank of Dallas. The views expressed are those of the authors and do not necessarily reflect the positions of the Federal Reserve Bank of Dallas or the Federal Reserve System. Subscriptions are available free of charge. Please send requests for single-copy and multiple-copy subscriptions, back issues, and address changes to the Public Affairs Department, Federal Reserve Bank of Dallas,
P.O. Box 655906, Dallas, TX 75265-5906, (214) 922-5257. Articles may be reprinted on the condition that the source is credited and the Research Department is provided with a copy of the publication containing the reprinted material.

On the cover: an architectural rendering of the new Federal Reserve Bank of Dallas headquarters.

Contents

Regulation, Bank Competitiveness, and Episodes of Missing Money
John V. Duca

John Duca reviews three episodes of "missing money," during which one of the monetary aggregates was unusually weak. Duca finds that in each of these episodes, an increased regulatory burden on banks encouraged households and firms to bypass the banking system in favor of nonbank financial liabilities and assets. Using a standard analytical framework, he shows how these shifts by investors can lead to cases of missing money and to declines in banks' role in providing credit. Duca further shows that increases in bank regulatory burden can create potential problems for analysts in using either interest rates or real-time monetary aggregates as indicators of nominal economic activity. Given these findings, Duca argues that because regulatory changes have implications for conducting monetary policy, the Federal Reserve should continue to have a role in formulating bank regulations.

What Determines Economic Growth?
David M. Gould and Roy J. Ruffin

Does increased investment in education enhance long-run economic growth, or does it simply reduce current consumption? Will free trade stimulate growth, or will it merely increase imports? For a long time, economists relied on an economic growth theory that offered little scope for understanding long-run growth movements.
Recently, however, the study of economic growth has been reinvigorated by new developments in theory and empirical findings that suggest how long-run growth evolves. Because economic growth determines whether our grandchildren will have better lives than ours or whether poor nations will catch up with or fall further behind rich nations, David Gould and Roy Ruffin investigate recent lessons learned about growth and apply them to the above issues. Gould and Ruffin report on recent research suggesting that investment, particularly human capital investment, increases economic growth. They also investigate evidence showing that political stability, well-defined property rights, low trade barriers, and low government consumption expenditures enhance growth through positive effects on investment.

The Pricing of Natural Gas in U.S. Markets
Stephen P. A. Brown and Mine K. Yücel

Stephen Brown and Mine Yücel examine how different natural gas users and the market institutions serving them affect the transmission of price changes throughout various markets for natural gas. Electrical utilities and industrial users buy much of their natural gas in a competitive spot market served by brokers and interstate pipeline companies. In contrast, most commercial and residential customers are dependent on local distribution companies, which earn a regulated rate of return and buy their gas under long-term contracts. Using time-series methods, Brown and Yücel find that even in the long run,
changes in prices are not transmitted uniformly throughout the various markets for natural gas. Electrical and industrial customers have seen a greater benefit from falling natural gas prices than commercial and residential customers. Differences in market institutions and in the ability of the end users to switch fuels may account for the lack of uniformity.

Investing for Growth: Thriving in the World Marketplace
Fiona D. Sigalla and Beverly J. Fox

Free trade is rapidly expanding the world marketplace, providing new opportunities to raise living standards and stimulate long-term economic growth. To capitalize on these opportunities, the United States must overcome problems that reduce its competitiveness, such as regulations that distort market incentives, an education system that may not be preparing a work force equipped to compete in the twenty-first century, and a health care system whose skyrocketing costs have become a burden to taxpayers, government, and business. The Federal Reserve Bank of Dallas addressed the opportunities presented by free trade and the concomitant need to increase U.S. competitiveness at the Bank's recent economic conference, entitled "Investing for Growth: Thriving in the World Marketplace." In this article, Fiona Sigalla and Beverly Fox summarize concerns and strategies reviewed and debated in conference sessions.

John V. Duca
Senior Economist and Policy Advisor
Federal Reserve Bank of Dallas

Regulation, Bank Competitiveness, and Episodes of Missing Money

In setting monetary policy, most central banks look at a number of economic indicators, including data on monetary aggregates. The motivation for monitoring monetary aggregates comes from the equation of exchange:

(1) M × V = P × Y,

where M = money, V = velocity [nominal gross domestic product (GDP)/M], P = the price level, and Y = transactions (usually measured by inflation-adjusted GDP).
Typically, people reduce their holdings of money as the spread between a riskless short-term market interest rate (such as the three-month U.S. Treasury bill rate) and the average yield earned on monetary assets rises. As a result, the velocity of money rises as this spread or "opportunity cost" of money increases. If velocity is predictable, then money and its predicted velocity can be used to infer nominal GDP (P × Y). Under these circumstances, a monetary aggregate is useful for policymakers as an indicator of nominal GDP. This is especially true because data on GDP are available after a long lag, whereas information on money and interest rates is more readily available. However, in three of the past four recessions (1973–74, 1979–80, and 1990–91), the monetary aggregate most closely monitored by the Federal Reserve has been much weaker relative to income and opportunity cost measures than previous experience predicted. This unusual weakness, or "missing money,"1 poses a serious problem for policymakers because it means that the monetary aggregate in question is less useful as an indicator of nominal activity at a critical point in the business cycle. Furthermore, analysts often need at least several quarters of data to discern whether such a money demand shock has occurred and whether any particular shock is permanent or temporary. Consider a permanent downward shift in the level of demand for a monetary aggregate; such a shift would result in a fall of that aggregate's growth rate relative to GDP growth over a period of time at each level of opportunity cost. There are two choices that a responsible central bank would consider. If the central bank stabilized the growth rate of that aggregate at the previous average, nominal GDP growth would temporarily accelerate and then return to its previous growth rate. Eventually, the spurt in nominal GDP growth would result in a temporary acceleration in inflation.
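The indicator logic just described, equation 1 plus a predicted velocity, can be sketched in a few lines. The functional form, parameter values, and function names below are illustrative assumptions, not anything estimated in the article:

```python
# A sketch of the indicator role of money: if velocity is a predictable
# function of the opportunity cost of money, then money plus predicted
# velocity implies nominal GDP (equation 1 rearranged).

def predicted_velocity(opportunity_cost, v_base=1.5, slope=0.25):
    """Velocity rises with the spread between a short-term market rate
    and the average yield earned on monetary assets (illustrative form)."""
    return v_base + slope * opportunity_cost

def implied_nominal_gdp(money, opportunity_cost):
    """Equation 1: P*Y = M * V(opportunity cost)."""
    return money * predicted_velocity(opportunity_cost)

m = 3500.0    # money stock, $ billions (invented)
oc = 2.0      # opportunity cost, percentage points (invented)
print(implied_nominal_gdp(m, oc))   # -> 7000.0
```

A missing-money episode is precisely a period when the observed aggregate falls short of what a relation like `predicted_velocity` and observed income would imply.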
As a result, the price level would be permanently raised relative to its path had the money demand shock not occurred. While the price level would post only a once-and-for-all rise, such an episode would create uncertainty about whether the central bank was committed to controlling inflation. Such uncertainty would likely depress real economic activity for a while because inflation uncertainty discourages firms and households from committing to long-run projects.

I would like to thank, without implicating, Anne King and Steven Prue for providing excellent research assistance, and Michael Cox, Ken Emery, Evan Koenig, Ken Robinson, and Harvey Rosenblum for their suggestions during the progress of this research. Any remaining errors are my own.

1 Throughout this study, the term "missing money" describes episodes in which the level of a monetary aggregate has been smaller than predicted based on past relationships, income, and the opportunity cost of money.

As an alternative, the central bank may accept temporary weakness in the growth of its primary monetary aggregate. However, it is difficult in real time to know precisely how much of a slowdown is appropriate. If the central bank permitted money growth to slow too much, nominal GDP growth would temporarily be below trend. If the monetary authority underestimated the impact of a money demand shock, then nominal GDP growth would temporarily be above trend until the money demand shock passed.2

Given that cases of missing money are problematic for central banks, it is natural to ask why there have been money demand shifts. To help answer this question, this study reviews research on the three most recent episodes of missing money. Common to each of these cases is a decline in banks' ability to compete with nonbanks that stemmed from the changing impact of banking regulations.
As a result of declines in the competitiveness of banks, households and firms have shifted toward using nonbank types of "money" and credit, and researchers have found it helpful to redefine money or measures of its opportunity cost to obtain a more reliable indicator of nominal GDP.3 In establishing these findings, this study begins with a simple macroeconomic model of how rising bank regulatory taxes can contribute to weakness in overall economic activity and a decline in the share of credit provided by banks. The second section of this article reviews the mechanics of how a shift away from credit and deposits at banks to substitutes at nonbanks can also result in a missing-money phenomenon. Within this framework, the second section then analyzes evidence on the three most recent episodes of missing money. Each case of missing money is found to have coincided with declines in the ability of banks to compete with nonbanks. The concluding section discusses the policy implications of these findings.

2 The cases in the text analyze permanent shocks to money demand. If the shocks were temporary, then by not altering its long-run monetary targets, a central bank could keep the economy growing in line with the central bank's previous long-run nominal GDP target. In this case, there would be some temporary acceleration or deceleration in nominal GDP growth that would later be reversed.

3 As argued later in this article, money demand shifts in the mid- and late-1970s led the Federal Reserve to change the primary monetary aggregate it monitored from M1 to M2. Indeed, when M2 was officially created in 1980, it was defined to include new financial instruments such as money market mutual funds (MMMFs) and repurchase agreements.

A simple macroeconomic model

This section lays out a simple model of aggregate demand that can be used to analyze the impact of regulatory burden on economic activity and on the share of credit provided by banks.
These effects, coupled with insights provided in the next section, are later shown to be useful in helping policymakers detect whether a monetary aggregate is accurately reflecting nominal GDP growth. The model used here is presented in two parts. First, the conditions for equilibrium in the goods market are described in a world where firms can borrow either from banks or directly from open credit markets. Second, conditions for equilibrium in the credit market are derived. Using these conditions, the equilibrium levels of output and interest rates are derived.

A simple IS specification. A portion of firms (Θ^m) obtains credit only from the financial markets, while the remaining portion (Θ^b ≡ 1 − Θ^m) relies completely on bank loans. Demand by each firm for open market (L^m) or bank credit (L^b) is

(2) L_t^m = α_0 + α_1 Y_t − α_2 R_t^m and
(3) L_t^b = α_0 + α_1 Y_t − α_2 R_t^b,

where Greek letters denote positive coefficients, Y is output, R^m is the average rate on open market credit, and R^b is the average bank loan rate. The average cost of credit (R) and total private credit demand (L^p) across all firms are thus

(4) R_t = Θ^m R_t^m + Θ^b R_t^b and
(5) L_t^p = Θ^m L_t^m + Θ^b L_t^b = α_0 + α_1 Y_t − α_2 (Θ^m R^m + Θ^b R^b).

On grounds that firms and households spend less when the cost of finance rises, nominal income is assumed to depend negatively on average credit costs:

(6) Y_t = φ_0 − φ_1 R_t = φ_0 − φ_1 Θ^m R_t^m − φ_1 (1 − Θ^m) R_t^b.

Decisions about modeling how the costs of bank and open market credit are determined have been made to be consistent with several key stylized facts. First, firms that rely on open market credit generally are perceived as posing little default risk. Second, bank credit has an advantage over open market paper in that the deposit insurance system bears some of the default risk of bank loans. Third, open market credit has an advantage over bank credit in avoiding certain regulatory costs imposed on the banking system.
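Equations 2 through 5 can be illustrated with a tiny numerical sketch; the coefficient values, shares, and rates below are invented for illustration:

```python
# Equations 2-5 in miniature: both firm types share the coefficients
# (a0, a1, a2) but face different rates; weighting by the two shares
# gives the average cost of credit and total private credit demand.

a0, a1, a2 = 10.0, 0.5, 2.0          # illustrative positive coefficients
Y = 100.0                            # output
theta_m, theta_b = 0.5, 0.5          # shares of open-market and bank borrowers
Rm, Rb = 5.5, 6.5                    # open market and bank loan rates

Lm = a0 + a1 * Y - a2 * Rm           # equation 2
Lb = a0 + a1 * Y - a2 * Rb           # equation 3
R = theta_m * Rm + theta_b * Rb      # equation 4: average cost of credit
Lp = theta_m * Lm + theta_b * Lb     # equation 5, first form
print(Lp == a0 + a1 * Y - a2 * R)    # -> True: matches eq. 5's second form
```

Because both firm types share the same demand coefficients, aggregating demands and aggregating rates give the same total, which is what lets equation 5 collapse into a single expression in the average rate.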
Interest rates on bank and open market credit are, respectively:

(7) R_t^b = c^b + r_t + (1 − d) D_t^b + t^bf and
(8) R_t^m = c^m + r_t + D_t^m,

where r is the riskless market interest rate, d is the implicit default risk subsidy on bank loans from deposit insurance, D^b is the average fair market risk premium on bank loans, D^m is the average fair market risk premium on open market paper, t^bf reflects any regulatory burdens on banks that effectively can be treated as a constant, c^b is a constant reflecting the per-dollar costs of providing bank loans not associated with interest costs or default risk (primarily information and transactions costs), and c^m is a constant reflecting the per-dollar information and transactions costs associated with issuing open market paper.

To capture differences in default risk across firms in a tractable way, the assumption has been made that the fair market default risk premium (D^i) across firm types (i) has a uniform distribution over the interval:4

(9) D^i ~ U[0, 1].

Setting equations 7 and 8 equal yields the critical level of default risk (D^c) at which firms are indifferent between bank and open market credit:

(10) D_t^c = [t^bf + (c^b − c^m)]/d = Θ^m,

which is increasing in regulatory taxes imposed on all bank loans (t^bf), decreasing in the extent to which bank loans have lower information and transactions costs than open market paper (c^b − c^m), and decreasing in the implicit risk-taking subsidy (d) provided by deposit insurance. Since D^i has a uniform distribution over [0,1], Θ^m = D^c and Θ^b = 1 − D^c. In this model, banks lend to higher risk firms, while lower risk firms issue open market paper. The reason is that the cost disadvantage of bank regulatory taxes is roughly fixed across borrowers, while the implicit benefit of deposit insurance is increasing in default risk because indirectly taxpayers bear some of the risk.5 For example, the implicit benefit of deposit insurance is low on a bank loan to a firm that has low default risk, while the regulatory burden of such a loan may be very high. In this case, bank loans are a more costly source of credit than open market paper. Thus, the model is consistent with the stylized fact that only very low default risk firms issue commercial paper. This qualitative result can be obtained in this model if one assumes that the information costs of issuing open market debt are lower for firms having low default risk because their creditworthiness is generally more transparent to investors.6

4 For ease of exposition, any rationing of credit is suppressed.

5 As stressed by Keeley (1990), the value of this implicit subsidy depends on how well-capitalized a bank is. This implicit subsidy declines as a bank's capital increases because when a bank fails, the capital invested by bank equity and subordinated debt holders are the first funds used to cover any losses from liquidating the bank. For ease of exposition, all banks are treated as being equally capitalized in the model.

6 Diamond (1991) develops a theoretical model that more rigorously and formally demonstrates this result.

Since default risk is distributed uniformly, the average default risk premium on open market paper is (D^c/2) and that on bank loans is ([1 + D^c]/2). Using these average risk premia along with equations 7, 8, and 10 yields the following expression for the average cost of credit:

(11) R_t = Θ^m R_t^m + Θ^b R_t^b
         = r_t + c^b[1 − c^b/(2d)] + t^bf[1 − t^bf/(2d)] + (1 − d)/2 − (c^m)²/(2d) − t^bf c^b/d + t^bf c^m/d + c^b c^m/d.

Differentiating equation 11 and substituting from equation 10 implies

(12) ∂R/∂c^b = 1 − Θ^m > 0,
     ∂R/∂t^bf = 1 − Θ^m > 0,
     ∂R/∂c^m = Θ^m > 0,
     ∂R/∂d = −(1/2)[1 − (Θ^m)²] < 0, and
     ∂R/∂r = 1.
Equation 12 implies that the average cost of credit rises when either bank regulatory taxes (t^bf) or the information and transactions costs of bank loans (c^b) increase. This result is obtained because the rise in credit costs to those firms that remain bank borrowers will outweigh the effect of some firms' switching toward less expensive open market paper. Thus, a rise in regulatory taxes might help induce a recession, cause a decline in the importance of banks in credit markets, and, as will be discussed in the next section, trigger an episode of missing money.

As equation 12 also indicates, a rise in the information and transactions costs of open market paper (c^m) will cause the average cost of credit to increase. This increase occurs even though some firms shift away from open market paper when those costs rise, because the effects of this shift on average credit costs are outweighed by the impact of higher costs on those that remain nonbank borrowers. Thus, a rise in c^m will, by raising the cost of open market paper, cause banks to gain credit market share and reduce the demand for nominal output.

A rise in the deposit insurance subsidy decreases the average cost of credit by lowering the cost of bank loans. While the sign of the effect is theoretically ambiguous, a higher subsidy will lower the cost of finance as long as there are some firms that rely on bank loans (that is, Θ^m < 1).

Substituting equation 11 into equation 6 yields the following IS curve:

(13) Y_t = φ_0 − φ_1 r_t − φ_1 {c^b[1 − c^b/(2d)] + t^bf[1 − t^bf/(2d)] + (1 − d)/2 − (c^m)²/(2d) − t^bf c^b/d + t^bf c^m/d + c^b c^m/d},

which implies a negative relationship between combinations of output and the short-term interest rate that clear the goods market. Thus, the IS curve has the normal downward-sloping shape (Figure 1).

Figure 1: Effect of a Rise in Bank Regulatory Burden on Goods Market Equilibrium (the IS curve shifts inward in (y, r) space).

As will be shown later, equation 13 implies that a rise in regulatory taxes on banks reduces output at each combination of the riskless market interest rate (r) and goods demand (Y).
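As a rough numerical sketch of equations 7 through 11 (the parameter values are invented and `credit_market` is a hypothetical helper, not from the article), one can verify that a higher regulatory tax both pushes firms toward open market paper and raises the average cost of credit, as equations 10 and 12 assert:

```python
# Numerical sketch of equations 7-11. Firms with fair default premia
# below Dc issue open market paper; the rest borrow from banks.

def credit_market(r, d, cb, cm, tbf):
    """Return (theta_m, R): the open-market share and average credit cost."""
    Dc = (tbf + cb - cm) / d                           # equation 10
    assert 0.0 <= Dc <= 1.0, "parameters must keep Dc in [0, 1]"
    Rm_avg = cm + r + Dc / 2.0                         # eq. 8 with average premium Dc/2
    Rb_avg = cb + r + (1 - d) * (1 + Dc) / 2.0 + tbf   # eq. 7 with average premium (1+Dc)/2
    return Dc, Dc * Rm_avg + (1 - Dc) * Rb_avg         # equation 11

theta0, R0 = credit_market(r=5.0, d=0.5, cb=0.3, cm=0.1, tbf=0.1)
theta1, R1 = credit_market(r=5.0, d=0.5, cb=0.3, cm=0.1, tbf=0.2)
print(theta1 > theta0)  # higher t_bf: more firms bypass banks (eq. 10)
print(R1 > R0)          # and the average cost of credit rises (eq. 12)
```

Both prints are True under these parameters: the bank share of credit shrinks even as overall credit gets costlier, which is the pattern the missing-money episodes exhibit.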
Thus, the IS curve has the normal downward-sloping shape (Figure 1 ). As will be shown later, equation 13 implies that a rise in regulatory taxes on banks reduces output at each combination of the riskless market interest rate (r ) and goods demand (Y ). Federal Reserve Bank of Dallas Thus, such an increase in bank regulatory burden can be depicted as an inward shift of the IS curve from IS0 to IS1. Of course, a rise in regulatory taxes may indirectly affect rt when the conditions for goods market equilibrium (IS curve) and credit market equilibrium are solved together. Credit market equilibrium conditions. Traditional Keynesian models depict interest rates as determined by the supply and demand for money. This approach may have been plausible for the 1930s and 1940s because few firms could issue open market paper following the collapse of the bond and commercial paper markets during the Great Depression. Today, it is more accurate to model short-term interest rates as determined by the total supply and demand for short-term credit, since commercial paper and Treasury bills have each grown to roughly the size of commercial and industrial (C&I) loans at banks.7 The demand for short-term credit is mainly comprised of the demand for bank loans, commercial paper, and Treasury bills. Although it is likely that the demand for bank loans and commercial paper is interest-sensitive, it can be argued that the demand by the U.S. government for Treasury bills has generally been highly insensitive to shortterm rates. By implication, government demand for short-term credit can be approximated by a constant (L g), and total credit demand (Lt ) equals private plus government demands: (14 ) Lt = Ltp + L tg = α 0 + α1Yt − α 2R t + L g = α 0 + α1Yt + L g − α 2rt ⎧ ⎡ ⎛ cb ⎞ ⎤ ⎡ ⎛ t bf ⎞ ⎤⎫ ⎪c b ⎢1 − ⎜ ⎟ ⎥ + t bf ⎢1 − ⎜ ⎟ ⎥⎪ ⎪ ⎢⎣ ⎝ 2d ⎠ ⎥⎦ ⎢⎣ ⎝ 2d ⎠ ⎥⎦⎪ ⎪ ⎪ m ⎪ ⎡ (1 − d ) ⎤ ⎡ (c )2 ⎤ t bf c b ⎪ −α 2 ⎨+ ⎢ ⎥−⎢ ⎥− ⎬. 
d ⎪ ⎪ ⎢⎣ 2 ⎥⎦ ⎢⎣ 2d ⎥⎦ ⎪ t bf c m c bc m ⎪ + ⎪+ ⎪ d ⎪⎩ d ⎪⎭ In principle, the supply of short-term credit can be depicted as the sum of credit supplies from different sources. However, the supply of shortterm credit in the United States can be depicted in a simple fashion because the Federal Reserve has typically implemented monetary policy by altering Economic Review — Second Quarter 1993 Figure 2 The Supply of and Demand for Short-Term Credit r,re Ls re Ld y,L a targeted level of the federal funds rate to stabilize nominal aggregate activity. Federal funds are reserves that banks trade with one another to meet reserve requirements on bank deposits. By purchasing or selling reserves in exchange for Treasury bills, the Federal Reserve tries to target a chosen level for the federal funds rate. Under these conditions, banks would borrow from the Federal Reserve to purchase T-bills if T-bill rates were above the average expected level of the federal funds rate over the remaining maturity of the Treasury bills. Banks will continue to buy Treasury bills until T-bill rates fall in line with expectations of the federal funds rate, consistent with the empirical findings of Cook and Hahn (1989). This arbitrage implies that the T-bill rate equals the average federal funds rate target (r e) that the market expects the Federal Reserve to use over the life of a particular maturity T-bill. By implication, the Federal Reserve can generally target shortterm interest rates, and the supply curve of total short-term credit (L s) is horizontal (Figure 2 ) and 7 For example, the outstanding commercial paper of nonfinancial and financial firms is roughly four-fifths the size of C&I loans at banks ( Federal Reserve Bulletin 1993), while the stock of U.S. Treasury bills is roughly as large as C&I loans at banks. 5 depends on the expected path of the federal funds rate target.8 As output rises at a given level of interest rates, the credit demand curve shifts to the right. 
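The arbitrage argument above amounts to the T-bill rate equaling the average federal funds rate expected over the bill's life. A minimal sketch, with an invented expected funds-rate path (the function name and numbers are illustrative assumptions):

```python
# Banks buy T-bills whenever the T-bill rate exceeds the average expected
# funds rate over the bill's maturity, so in equilibrium the two are equal.

def equilibrium_tbill_rate(expected_funds_path):
    """T-bill rate = average expected funds rate target over the maturity."""
    return sum(expected_funds_path) / len(expected_funds_path)

# Suppose the market expects the funds rate target to ease from 3.5 percent
# to 3.0 percent over the three months a bill is outstanding:
path = [3.5, 3.25, 3.0]
print(equilibrium_tbill_rate(path))   # -> 3.25
```

This is why the short-term credit supply curve is horizontal at r^e: the achievable T-bill rate is pinned down by expectations of the target path, not by the quantity of credit.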
However, if the Federal Reserve maintains its funds rate target, the credit supply curve remains horizontal and the riskless short-term interest rate does not change. As a result, the equilibrium combinations of interest rates and output at which short-term credit demand equals short-term credit supply can be depicted as the horizontal CC curve in Figure 3. This curve shifts in line with expectations about the federal funds rate target. The result that the CC curve is horizontal under an interest rate target parallels the flat LM curve obtained in Poole's (1970) model.9

Figure 3: Credit and Goods Market Equilibrium (horizontal CC curve; an inward shift of the IS curve from IS0 to IS1 lowers output from y0 to y1).

8 By contrast, the Federal Reserve has much less effect on long-term interest rates, which it indirectly affects by influencing expectations about future inflation and the size of the inflation risk premium in long-term interest rates. This premium reimburses investors for the risk that inflation may be higher than they expect. If inflation were higher than expected, expectations of future inflation would likely rise, and then interest rates would rise so that inflation-adjusted yields are maintained. These conditions imply that bond prices would likely fall.

9 A flat CC curve is a reasonable approximation in the very short run. However, any changes in the central bank's target rate that are made partly on the basis of money or credit growth imply that this curve is upward-sloping in the medium run. Thus, past changes in the federal funds rate target that have been partly or largely based on M2 growth suggest that the CC (or LM) curve has an upward slope in the short to medium run.

The effects of altering the relative cost of bank loans and open market paper. According to equation 10, three types of factors affect the relative use of bank loans and open market paper: bank regulatory taxes (t^bf), the implicit deposit insurance subsidy (d), and the differential between the information costs for bank loans and for open market paper (c^b − c^m). In light of these three factors and the forthcoming analysis of three cases of missing money, this section presents analysis of the effects of a rise in bank regulatory taxes, a decline in the implicit deposit insurance subsidy, and a decline in information costs associated with open market paper.

The effects of raising bank regulatory taxes. By increasing the cost of loans relative to market interest rates, a rise in bank regulatory taxes shifts the IS curve inward to IS1. This can be demonstrated by differentiating equation 13 and substituting a result from equation 12:

(15) ∂Y/∂t^bf = (∂Y/∂R)(∂R/∂t^bf) = −φ_1 (1 − Θ^m) < 0.

If the central bank does not perceive the IS shift and therefore does not cut interest rates, then smoothing short-term market interest rates (r^e) results in a decline in the demand for nominal output (Y) from Y0 to Y1. As a consequence, increases in bank regulatory taxes can contribute to the onset of a recession.10

10 In the short run, a rise in bank regulatory taxes causes a rise in the average cost of credit. If, however, enough innovation is induced by regulations, the average cost of credit could fall in the long run, provided the long-run cost of issuing open market paper declines enough. For example, while the regulatory burden of reserve requirements likely increased the cost of providing bank loans in the high interest rate environment of the late 1970s and early 1980s, it helped spur the development of the commercial paper market. As pointed out by Post (1992), the cost of commercial paper has generally fallen over the past decade.

Changes in bank regulatory taxes thus create disturbances to the IS curve, which create problems in using a federal funds rate target. This result accords with the insights in Poole (1970). Poole's
Poole’s Federal Reserve Bank of Dallas model showed that a central bank policy of targeting a monetary aggregate will be superior to that of targeting a short-term interest rate if IS shocks are large relative to money demand shocks. It is thus tempting to conclude that targeting a monetary aggregate would yield superior results under these circumstances. However, as shown in the next section, the change in banks’ regulatory burden also causes a fall in the demand for money. In the Poole framework, as the importance of money demand disturbances rises, targeting a monetary aggregate becomes less attractive relative to targeting a shortterm interest rate. Thus, changes in banks’ regulatory burden create problems for both types of operating procedures. The effects of reducing the implicit deposit insurance subsidy. The effects of reducing the implicit deposit insurance subsidy are qualitatively similar to those of increasing the regulatory tax on banks. Such a decline increases the cost to banks of providing loans (equation 7), which makes bank loans relatively more expensive than open market paper for more firms. One result of this change in relative costs is that some firms shift away from bank loans and switch to issuing open market paper (Θ m increases in equation 10). In addition, much like a rise in bank regulatory taxes, a reduction in this subsidy (d ) leads to an increase in the average cost of finance (R in equation 12). This increase in cost, in turn, shifts the IS curve inward, thereby reducing the nominal demand for goods (Y ). This result can be seen by differentiation: (16 ) ∂Y ⎛ ∂Y ⎞ ⎛ ∂R ⎞ = ∂d ⎜⎝ ∂R ⎟⎠ ⎜⎝ ∂d ⎟⎠ ⎛ ∂R ⎞ = −φ1 ⎜ ⎟ > 0. ⎝ ∂d ⎠ This expression is positive because ( R/ d ) < 0 and implies that reducing d will shift the IS curve inward. For two reasons, one should avoid inferring from this example that reductions in the implicit deposit insurance subsidy are not necessarily beneficial. 
First, any effect on aggregate demand can be offset by a monetary easing action, which would push down short-term market interest rates. Second, because deposit insurance implicitly shifts much of the default risk on high-risk loans from stockholders in banks and thrifts to taxpayers, deposit insurance results in excessively risky lending and thus creates inefficiencies. The high cost of the savings and loan association bailout provides a justification for implementing risk-based capital standards. The point of this example is to illustrate that monetary policy should take into account any macroeconomic impact of curtailing the risk-taking incentives of deposit insurance.

The effects of a decline in the information costs of open market paper. A decline in the information costs of open market paper can stem from reductions in costs associated with investors' learning about firms, improved computer technology that reduces the transactions costs of buying and selling open market paper (such as bonds and commercial paper), and the deepening or increased liquidity of open paper markets. Like a decline in the deposit insurance subsidy to banks and a rise in bank regulatory taxes, a fall in the costs of providing open market paper (c^m) will reduce the cost of open market paper (R^m) relative to that of bank loans (R^b) and thereby induce a rise in the share of open market paper (Θ^m in equation 10). The effects on aggregate demand, however, differ greatly because a decline in the cost of open market paper lowers the average cost of finance (equation 11). As a result, the gap between the average cost of finance to firms and the riskless open market rate paid by the U.S. Treasury narrows at each level of income. Thus, a decline in c^m causes the IS schedule to shift to the right, thereby driving up aggregate nominal demand.
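The comparative statics of this and the preceding subsections can be checked numerically by pushing the average credit cost through the IS relation of equation 13. The sketch below uses a compact algebraic rearrangement of equation 11 and invented parameter values (`output`, `phi0`, `phi1` are illustrative, not the article's calibration):

```python
# Finite-parameter check of the comparative statics: at a fixed riskless
# rate, output falls with the bank regulatory tax, rises with the deposit
# insurance subsidy, and falls with open market paper costs.

def output(d, cm, tbf, r=5.0, cb=0.3, phi0=100.0, phi1=4.0):
    s = tbf + cb - cm                                  # equals d * Dc from equation 10
    R = r + cb + tbf + (1 - d) / 2 - s * s / (2 * d)   # compact form of equation 11
    return phi0 - phi1 * R                             # the IS relation (equation 13)

y = output(d=0.5, cm=0.1, tbf=0.1)
print(output(d=0.5, cm=0.1, tbf=0.12) < y)   # higher bank tax lowers output
print(output(d=0.52, cm=0.1, tbf=0.1) > y)   # higher insurance subsidy raises output
print(output(d=0.5, cm=0.12, tbf=0.1) < y)   # costlier open market paper lowers output
```

All three prints are True under these parameters, matching the signs the text derives: the first two shocks call for easing, while a fall in c^m calls for tightening.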
This effect can be demonstrated by differentiating equation 13 and substituting a result from equation 12:

(17) ∂Y/∂cm = (∂Y/∂R)(∂R/∂cm) = −φ1Θm < 0.

The stabilization of aggregate demand thus calls for increasing the federal funds rate, in contrast to the prescription of cutting the federal funds rate if bank regulatory taxes rise or if the deposit insurance subsidy is lowered. Thus, the monetary policy implications of these examples are not the same. The analysis above implies that to discern among these types of shocks, policymakers should check not only for a decline in bank credit market share but also for whether such a decline is accompanied by a rising regulatory burden on banks, a reduction in the implicit deposit insurance subsidy to banks, and a weakening in income growth or, alternatively, by a reduction in the information and transactions costs of using open market paper and a strengthening of nominal income growth. The next section discusses how an unpredictable decline in the measured money supply may also accompany such shocks.

Episodes of money demand instability

Figure 4: The Supply of and Demand for Money (an upward-sloping money supply curve Ms and a downward-sloping money demand curve Md, plotted in opportunity cost–money space, with a horizontal line at the opportunity cost implied by the federal funds rate target)

This section begins by describing how, in a simple framework, reduced bank competitiveness can lead to a missing-money phenomenon. Then, in terms of this framework, three episodes of missing money are discussed: the missing M1A in the mid-1970s, the missing M1A and surge in money market mutual funds (MMMFs) during the late 1970s and early 1980s, and the current case of the missing M2. Finally, the implications of these shocks for M2 targeting are discussed.
How reduced bank competitiveness can create a missing-money phenomenon. This section outlines a simple supply and demand model of money that can be used to analyze cases of missing money. In Figure 4, the money demand (Md) curve is drawn with a downward slope, reflecting that households and firms reduce their holdings of money as the opportunity cost of money (OC) rises (if all else remains the same). The opportunity cost of money is the extra yield that investors forgo by holding money, which provides convenience and transactions services over other assets that have higher pecuniary yields. In practice, opportunity costs are typically measured as the difference between a riskless short-term market interest rate and the average yield earned on monetary assets. The money demand curve is drawn for a given level of income. This curve shifts to the right as income rises because the transactions demand for money will rise at each level of the opportunity cost of money.

11 This assumption is consistent with the Federal Reserve’s use of a target range for M2. This policy allows M2 balances to move within a range for three reasons. First, it implicitly recognizes that moderate M2 growth will, barring money demand shocks, need to accompany moderate growth in nominal income. Second, this policy recognizes that changes in interest rates will affect the velocity of M2 and thereby alter the pace of M2 growth needed for moderate growth in nominal income. Third, the policy also recognizes that shifts in money demand may occur and that the money supply curve may shift owing to bank behavior. In the latter case, the opportunity cost of money associated with a particular federal funds rate could vary, depending on how actively banks bid for M2 deposits. This practice, in turn, alters the velocity of M2 and the growth rate of M2 that is needed to stabilize aggregate demand. The case of a nonrange money target is discussed later in this article.
The money supply curve (Ms) has an upward slope to reflect that banks would be willing to supply more deposits as the spread between market interest rates and deposit rates increases because banks can earn more profit by supplying deposits when the yields on securities (or loans) rise relative to deposit rates. Here it is assumed that the Federal Reserve does not rigidly target money balances (otherwise, the money supply schedule would be vertical).11 The money supply schedule is partly a derived demand for funding loans because banks will bid up deposit rates if loan demand rises and loan interest rates rise as a result. Thus, a monetary easing action that boosts loan demand through interest rate or wealth effects will shift this curve to the right, while an exogenous decline in demand for bank loans will shift the curve to the left.
Money demand models, such as the Federal Reserve Board’s M2 model (henceforth, the FRB model),12 tend to estimate M2 growth well as long as changes in the level of income are the only source of shifts in the money demand schedule. The following examples illustrate this point. First, consider what happens if the Federal Reserve eases monetary policy. The easing shifts the money supply schedule to the right in the short run, causing a decline in M2’s equilibrium opportunity cost. This movement along the money demand curve is picked up by the opportunity cost measures in the FRB model. Then, as the decline in short-term interest rates causes aggregate demand to pick up, nominal income will rise, causing the money demand schedule to shift to the right, which induces a measured rise in M2’s opportunity costs. This shift of the money demand curve is picked up by the income and consumption spending variables in the FRB model. In the past, M2’s growth rate primarily reflected movements of the money supply curve along the money demand curve and shifts of the money demand curve owing to income changes.
As a result, the coefficient estimates of the FRB model reflect a positive effect of income on M2 growth and a negative correlation of M2’s opportunity costs with M2 growth. By implication, the coefficient estimates of the FRB model will yield good predictions of M2 growth as long as income and money supply shocks are the only sources of change affecting M2.
An example of a money demand shock. Now consider an example in which some changes in the economic environment not reflected in M2’s measured opportunity cost simultaneously cause firms to shift from C&I loans to bonds and induce households to shift out of M2 deposits into bond mutual funds. Income is held constant in this example to reduce the number of curve shifts for ease of exposition. This case is shown in Figure 5.

Figure 5: A Bypassing of the Banking System (both curves shift inward: the money supply curve from Ms0 to Ms1 (shift 1) and the money demand curve from Md0 to Md1 (shift 2); M2 falls from M20 to M2″, while point A, where Ms1 crosses Md0, implies the higher level M2′)

One result is that the demand for bank credit falls, and with it, bank demand for issuing M2 deposits. By implication, the M2 supply curve shifts inward (shift 1), and the FRB model should pick up this decline through its opportunity cost measures. However, because the demand for M2 deposits also falls at every combination of OC and Y, the money demand curve shifts inward (shift 2). Since coefficient estimates for most M2 models are based on a past negative correlation between M2 and its opportunity cost, the money demand model implicitly assumes that a shift in the money supply rather than the money demand curve has occurred. In terms of Figure 5, money demand models assume that the demand curve is unchanged and that the supply curve intersects the demand curve at point A. As a result, money demand models would estimate, once interest rate and income data are available, that M2 moved to a level like M2′ rather than having declined all the way to M2″.13 In this instance, a case of missing money would occur.
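The overprediction mechanism just described can be put in miniature. The sketch below solves a linear version of the Figure 5 diagram; the intercepts and slopes are made-up numbers chosen only to make the shifts visible, not estimates from any money demand model:

```python
# A minimal linear sketch of Figure 5, with hypothetical parameters:
# money demand m = a_d - b_d*oc and money supply m = a_s + b_s*oc,
# where oc is the opportunity cost of money.

def equilibrium(a_d, b_d, a_s, b_s):
    """Solve a_d - b_d*oc = a_s + b_s*oc for (oc, m)."""
    oc = (a_d - a_s) / (b_d + b_s)
    return oc, a_d - b_d * oc

# Initial equilibrium (the level M2_0 in Figure 5).
oc0, m0 = equilibrium(a_d=100.0, b_d=2.0, a_s=40.0, b_s=2.0)

# Borrowers and depositors bypass banks: supply shifts inward
# (shift 1, a_s falls) AND demand shifts inward (shift 2, a_d falls).
oc1, m_true = equilibrium(a_d=85.0, b_d=2.0, a_s=30.0, b_s=2.0)

# A money demand model that assumes the demand curve never moved puts
# the economy at point A, where the new supply curve crosses the OLD
# demand curve, and so predicts a level like M2'.
oc_a, m_pred = equilibrium(a_d=100.0, b_d=2.0, a_s=30.0, b_s=2.0)

print(f"initial M2: {m0:.1f}")
print(f"actual M2 (M2'' in Figure 5): {m_true:.1f}")
print(f"model-implied M2 (M2' in Figure 5): {m_pred:.1f}")
print(f"missing money: {m_pred - m_true:.1f}")
```

With these illustrative numbers the model places M2 at 65.0 while M2 has actually fallen to 57.5, so 7.5 of money is "missing" even though nothing in the observed opportunity cost flags the demand shift.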
Alternatively, if the demand curve shift is large enough relative to the supply curve shift, then OC could be lower in equilibrium and the money demand model would predict a rise in M2 even though M2 would actually fall (since both the Md and Ms curves shift to the left). In either case, the money demand model overpredicts the equilibrium level of M2 because it fails to pick up the money demand curve shift by assuming that only changes in nominal income shift the money demand curve.14

12 For a discussion of this model, see Moore, Porter, and Small (1990) and Small and Porter (1989).
13 Note that income is being held constant in this example. While changes in income will shift the Md curve, the inclusion of income variables in money demand curves controls for Md shifts stemming from changes in income.

Figure 6: Transfers of Funds (exchanges a–d among the firm, the bond mutual fund, the household, and the bank)

Figure 7: Balance Sheet Changes (firms: –100 C&I loans, +100 bonds issued; bond mutual funds: +100 bonds held, +100 mutual fund shares; households: –100 small time deposits, +100 bond mutual fund shares; banks: –100 C&I loans, –100 small time deposits)

Fundamentally, the money demand shift occurs because both borrowers and depositors substitute away from banks for credit and deposit services. To illustrate this point, Figure 6 depicts the transfers of funds in the movement to a lower equilibrium level of M2. Suppose a firm issues a $100 bond purchased with $100 by a bond mutual fund, which in turn gives $100 in mutual fund shares to a household in exchange for a $100 check drawn on a bank account.
Suppose also that the household moved the $100 from a nonreservable small time deposit to a checking account to make the transaction.15 The firm takes the $100 raised by issuing bonds (exchange a) to pay off $100 in C&I loans to the bank (exchange d). The bond mutual fund pays the firm with the $100 it raised from selling mutual fund shares to households (exchange b). The household, in turn, obtains the $100 used to purchase bond fund shares by withdrawing $100 from its bank checking account (exchange c). In essence, the $100 that the household shifts into bond funds eventually goes back to the bank when the firm issuing bonds pays off its C&I loan. Another way of showing this equilibrium is to review each party’s balance sheet, as depicted in Figure 7. On the firm’s balance sheet, total liabilities are unchanged as the $100 increase in bonds issued matches the $100 decline in C&I loans. For the household, the $100 increase in bond fund holdings matches the $100 decline in M2 balances. Notice that total assets and total liabilities are unchanged for the firm and household.

14 From a Marshallian point of view, money demand changes predicted by typical econometric models can be interpreted as movements along short-run money demand curves that may shift with income levels, whereas money demand shocks can instead be interpreted as movements along a long-term money demand curve insofar as these “shocks” represent the endogenous response of firms and households to changes in the opportunity cost of using money.
15 This assumption enables us to avoid changing bank reserves and is sensible, given that bond funds seem to be more substitutable for small time accounts than for other M2 balances that are less useful as savings vehicles.
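The double-entry consistency of these transfers can be checked mechanically. The tabulation below is a hypothetical restatement of the four $100 balance sheet changes shown in Figure 7, not data from the article:

```python
# Double-entry check of the $100 transfers in Figures 6 and 7: for
# every party, the change in total assets must equal the change in
# total liabilities (no one's net worth moves in this reshuffling).

parties = {
    "firm": {
        "assets": {},                                   # unchanged
        "liabilities": {"C&I loans": -100, "bonds issued": +100},
    },
    "bond fund": {
        "assets": {"bonds held": +100},
        "liabilities": {"mutual fund shares": +100},
    },
    "household": {
        "assets": {"small time deposits": -100, "bond fund shares": +100},
        "liabilities": {},                              # unchanged
    },
    "bank": {
        "assets": {"C&I loans": -100},
        "liabilities": {"small time deposits": -100},
    },
}

for name, sheet in parties.items():
    d_assets = sum(sheet["assets"].values())
    d_liabs = sum(sheet["liabilities"].values())
    assert d_assets == d_liabs, name   # double-entry consistency
    print(f"{name:10s} change in assets = {d_assets:+5d}, "
          f"change in liabilities = {d_liabs:+5d}")

# M2 falls by the household's $100 of small time deposits, the
# "missing" money, while no wealth has been created or destroyed.
m2_change = parties["bank"]["liabilities"]["small time deposits"]
print("change in M2:", m2_change)
```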
The bond fund, however, experiences a $100 increase in both assets (the $100 rise in bonds) and liabilities (the $100 increase in mutual fund shares). By contrast, the banking industry is hit with a $100 decline in C&I loans on the asset side that is matched by a $100 decline in M2 deposits on the liability side. If the only source of inflows into bond funds came from M2 balances, then one way to solve this case of missing money would be to add bond funds to M2, much like adding MMMFs to M2, provided that one or more variables could be found to consistently measure the desire to hold bond funds. This example illustrates how a case of missing money can arise when M2’s demand curve shifts, and the shift is not the result of a change in income. Thus, missing money likely can be accounted for by unusual events that cause either possible changes in the elasticity of money demand with respect to income or opportunity costs, or permanent declines in money demand for given levels of income and opportunity cost measures.
The mid-1970s case of missing money. Until the early 1980s, the monetary aggregate most closely monitored by the Federal Reserve was M1A, which was defined as currency plus demand deposits.16 Based on its prior relationship to income and interest rates, M1A was unusually weak in the mid-1970s, leading one monetary economist to call this episode “The Case of the Missing Money” (Goldfeld 1976). The mid-1970s were characterized by shocks to both sides of bank balance sheets and by a severe recession. On the asset side, there was unusual weakness in C&I loans. Many large firms shifted from C&I loans to commercial paper and finance company loans for three reasons.
First, because market interest rates rose above deposit rate ceilings set under Regulation Q, depositors shifted funds away from banks and thrifts toward investments bearing market interest rates (see the box entitled “Regulation Q and the Competitiveness of Banks and Thrifts”). Owing to a shortage of loanable funds, banks and thrifts rationed credit with nonprice terms, which drove larger firms to credit sources that were unaffected by Regulation Q (Figure 8).17 (Mortgage borrowers had fewer alternatives, and as a result, there was a sharp decline in housing construction [Jaffee and Rosen 1979 and Hendershott 1980].) Second, partly to free up funds for other borrowers, some banks provided lines of credit to back up commercial paper issuance to encourage their largest borrowers to make this shift.18 Third, the rise in short-term rates increased the reserve requirement tax on banks, and banks passed this extra cost on to borrowers by raising the prime rate relative to market interest rates. This increase in the cost of C&I loans relative to commercial paper encouraged many large firms to shift to paper as a source of finance.19 This shift was permanent for many firms because once they incurred the fixed costs of becoming a paper issuer, it was cheaper to bypass bank loans whose cost was inflated by reserve requirements.

Figure 8: The Nonbank Share of Short-Term Credit, Seasonally Adjusted (percent, 1960–90)

On the liability side of bank balance sheets, there was unusual weakness in business holdings of demand deposits at a time when the interest prohibition on demand deposits was at a then-record binding level.
These conditions were accompanied by firms’ entering repurchase agreements (RPs) and purchasing overnight Eurodollars (Tinsley, Garrett, and Friar 1981), likely reductions in compensating balances owing to firms’ borrowing less from banks and shifting to commercial paper (Duca 1992b), and firms’ incurring fixed costs to initiate cash management techniques that reduced their need for demand deposits (Porter, Simpson, and Mauskopf 1979). Thus, this missing-money episode occurred at a time when both bank assets and bank liabilities were unusually weak. Moreover, the missing money of the mid-1970s can be interpreted as having stemmed from a decline in the competitiveness of the banking system that resulted from an interaction between high interest rates and bank regulations (such as Regulation Q and reserve requirements). In terms of the model in the first section, replace M2 with M1A, whose opportunity cost is some short-term T-bill rate. In reference to Figure 5, the C&I loan shock to bank balance sheets reduced the need of banks to issue deposits, thereby shifting the supply of deposits curve leftward (shift 1), and the shift away from compensating balances and demand deposits toward RPs, Eurodollars, and cash management can be represented by an inward shift in the demand curve for M1A (shift 2). As a result, the level of M1A is lower than suggested by its opportunity cost and by nominal income.

16 Demand deposits are noninterest-bearing deposits that are checkable.
17 Finance companies raise funds mainly by issuing commercial paper.
18 By providing liquidity to firms in a pinch, such backup lines reduce the risk to investors holding commercial paper.
19 Most commercial paper is not subject to reserve requirements because it mostly is held directly by firms and households or is held by money funds.

The missing M1A and growth of money funds in the late 1970s.
In the late 1970s and early 1980s, another case of missing M1A coincided with large inflows into nonbank types of deposits, namely MMMFs, overnight repurchase agreements, and overnight Eurodollars. During the late 1970s, these new instruments grew rapidly, while demand deposits were unusually weak (Wenninger, Radecki, and Hammond 1981 and Dotsey, Englander, and Partlan 1981). Owing to high nominal rates, a high reserve requirement penalty was in effect, and Regulation Q ceilings were binding on many smaller banks and thrifts that were not well established enough to issue large time deposits that were not subject to interest rate ceilings. On the asset side of depository balance sheets, many firms shifted toward commercial paper. In addition, the advent of market-rate-based money market and small-saver certificates reduced the funding cost advantage of banks over nonbanks. As a result, bank auto loan rates rose toward finance company rates, and banks lost market share to finance companies. Once again, unusual weakness in demand deposits coincided with declines in depository assets and liabilities that can be traced to regulatory effects. The decline in the competitiveness of depositories was also accompanied by a surge in MMMFs so large that removing MMMFs from M2 before 1980 would have reduced M2 growth in the late 1970s by 1 to 3 percentage points (Figure 9).

Figure 9: The Growth Rates of M2 and M2 Minus MMMFs (percent, 1960–80)

In terms of Figure 5, the reduced demand for C&I loans (shift 1) and for demand deposits by firms (shift 2) during this episode is similar to the mid-1970s case of the missing money, as is the combination of reduced demand for loans (shift 1) and M2 deposits (shift 2) by households. In response to this episode of missing money, the Federal Reserve redefined the monetary aggregates and expanded the definition of M2 to include the new innovations.
Before 1980, there was no published monetary aggregate that resembled the current definition of M2. Instead, the Federal Reserve published several aggregates that reflected separations of banks from thrifts and some aggregations of small and large time deposits. In no case were MMMFs, RPs, and Eurodollars included in aggregates published by the Federal Reserve until the official redefinition of M2 in early 1980 (Simpson 1980).
The current case of missing money. The current episode of missing M2 has been accompanied by
1. rapid inflows into bond and equity funds that are not consistently linked to spreads between short- and long-term interest rates,
2. heavy corporate bond and equity issuance,
3. edit-check reports to the Federal Reserve Board staff of large paydowns of C&I loans by corporations issuing bonds,
4. Resolution Trust Corporation (RTC) activity, which has reduced the demand for M2 in unusual ways and which has reduced both the asset and liability sides of depository balance sheets, and
5. the institution of new risk-based capital standards that may have widened the spread between the prime and short-term interest rates.
These phenomena have arguably reduced or been a reflection of a decline in the competitiveness of depositories as financial intermediaries. On the asset side of depositories, the wide spread of prime over short-term market rates has encouraged many firms to shift away from C&I loans. Bond issuance has surged the past two years, and although commercial paper has grown less robustly, it has grown faster than bank loans. The discrepancy between bond and paper issuance partly reflects that corporations are refinancing long-term debt. In addition, a reduction in the competitiveness of prime rate financing could affect more firms that could issue bonds than firms that could issue commercial paper.
The reason is that only a subset of firms that can issue bonds are well-known enough to issue commercial paper, especially since the Securities and Exchange Commission (SEC) restricted the extent to which money market mutual funds could purchase commercial paper with ratings below A1/P1 (Crabbe and Post 1992). Nevertheless, there has been some shift away from bank loans to commercial paper that may reflect the wider spread between the prime rate and high-grade commercial paper that has persisted since year-end 1990 (Figure 10). That widening coincided with the implementation of new and tougher risk-based capital standards on banks that increased the cost of C&I loans (see the box titled “The Impact of Risk-Based Capital Standards and the FDIC Insurance Premium Hike”). Aside from commercial borrowing, a wide spread between consumer loan rates and M2 deposit rates is encouraging households to withdraw M2 funds to pay off consumer loans (Feinman and Porter 1992). Nevertheless, for firms without access to the bond markets and for households that cannot self-finance purchases, the increased regulatory burden on banks has likely lowered investment and consumption spending by increasing the cost of bank financing.

Figure 10: The Spread Between the Prime and One- and Two-Month Commercial Paper Rates (percentage points, 1971–91, with the adoption of the new capital standards marked at year-end 1990)

On the liability side of depository balance sheets, several unusual factors are affecting the demand for M2. First, RTC activity has created a prepayment risk on M2 deposits that is not measured by spreads between market and deposit interest rates. As a result, these measures of M2’s opportunity cost are understating M2’s true opportunity cost, thereby leading money demand models to overpredict M2 growth (Duca, forthcoming).
Second, RTC resolution activity also has accelerated the adjustment of deposits to a lower interest rate environment by prematurely ending small time deposit contracts. Third, RTC effects and the recent large spread between short-term and long-term interest rates may have induced the public to gather information about long-term, non-M2 assets such as bond and equity funds (Feinman and Porter 1992). Fourth, the same factors apparently led mutual funds to increase advertising of their products and induced several large banks to begin marketing bond and equity funds to their depositors (Cope 1992). These actions may have led to a discontinuous portfolio reallocation by households from M2 toward bond and equity mutual funds, thereby causing unusual weakness in M2 as it is currently defined.
In terms of Figure 5, the shift by corporations from C&I loans to bonds and equity finance is similar to the shift toward commercial paper in the mid-1970s, and substitution by households from M2 into bond and equity mutual funds is similar to the shift toward MMMFs in the late 1970s. Incentives for households to reduce their assets and liabilities also can induce similar supply and demand curve shifts. As discussed in Feinman and Porter (1992), wide spreads between loan and deposit rates offered to households encourage them to reduce their demand for M2 at given levels of income and opportunity cost measures (shift 2), and to reduce their demand for consumer loans, which causes an inward shift of the bank supply of deposits curve (shift 1). Similar shifts can also plausibly arise from RTC resolution activity. RTC resolutions effectively swap Treasury debt and thrift assets for thrift deposits, thereby shifting inward the deposit supply curve (shift 1).
What missing money implies for monetary policy. As stressed by Poole (1970), unusual changes in the demand for money reduce the ability of a monetary aggregate target to stabilize aggregate demand. This can be shown by deriving the conditions under which the supply and demand for money are in equilibrium (the LM curve) and then solving for nominal output by combining the LM curve with the IS curve from the first section of this article. In terms of the money demand and money supply curves in Figure 5, a rise in income will shift the money demand curve (Md) out and to the right (Figure 11a). This change implies that the combination of opportunity cost terms and income levels at which the demand for money equals its supply can be depicted by the upward-sloping line in Figure 11b. Figure 11a assumes that the opportunity cost of money increases with the short-term market interest rate.

Figure 11a and Figure 11b: Money Market Equilibrium (Figure 11a plots Ms and Md against the opportunity cost of money; Figure 11b plots the implied upward-sloping LM curve in interest rate–income space)

An assumption implicit in the upward-sloping money supply curve in Figure 11a is that the central bank will allow a rise in income to boost money balances. If, however, the central bank prevents the money balances from changing, then the money supply schedule is essentially made vertical. This effect can be shown to make the LM curve steeper. Figure 12a indicates that an increase in income from Y0 to Y1 will shift the money demand curve to the right. If the money supply curve is vertical, the opportunity cost of money would rise to OC2, whereas if the money supply curve had a nonvertical upward slope, the opportunity cost of money would rise to only OC1. Since the opportunity cost of money increases when the market interest rate is higher, when income is Y1, the equilibrium market interest rate that clears the money market under a fixed-money policy is higher (r2) than when the money supply curve slopes upward (r1). Thus, fixing money balances makes the LM curve steeper.

Figure 12a and Figure 12b: A Money Target Reduces the Impact of IS Shocks (a vertical money supply curve in Figure 12a yields the steeper LM curve in Figure 12b, along which an IS shift from IS0 to IS1 moves output only from YA to YC rather than to YB)

A steeper LM curve has important policy implications. If we combine this steeper LM curve with the IS curve from the first section of this article, we can see that the impact of a given IS shock on nominal output is smaller. In Figure 12b, the economy is initially at point A with nominal output at YA. The initial equilibrium is at point A because point A is the only combination of nominal output (Y) and the market interest rate (r) at which both goods market (IS) equilibrium and money market (LM) equilibrium occur. If the IS curve shifts rightward from IS0 to IS1, then the new equilibrium under a fixed money rule is at point C, whereas the new equilibrium is at point B under the flexible money supply policy. Notice how output is less affected by an IS shock when the LM curve is steeper (YC is closer than YB to YA). But if some change in the economic environment also affects the money demand curve, then stabilizing the level of the money supply may further destabilize nominal output. Consider the increased popularity of bond funds, which causes the public to demand less money at each combination of income and opportunity cost of money. Then, as shown in Figure 13a, the money demand curve shifts inward. As a result, the level of nominal income must be higher for the same initial level of money to be held in equilibrium for a given level of opportunity cost.

Figure 13a and Figure 13b: A Money Target Increases the Impact of Money Demand Shocks (the inward shift of the money demand curve in Figure 13a moves output farther, to YC rather than YB, when money balances are fixed)

Figure 14: How a Bypassing of the Banking System Is Problematic for Interest Rate and Money Targeting (the IS curve shifts inward from IS0 to IS1 while the LM curve shifts rightward from LM0 to LM1)
This implies that both LM curves in Figure 13b shift to the right by the same horizontal distance and that the level of income rises. However, if money balances are stabilized, the money demand shock pushes the economy to point C rather than point B. As a result, nominal output is more affected by a money demand shock when the LM curve is steeper (YC is further away from YA than is YB). Because a steeper LM curve implies that money demand shocks have a greater destabilizing effect on output, money targeting becomes less useful when the demand for money is unstable. Now consider the impact of a rise in bank regulatory burden on both the IS and LM curves, assuming that the Federal Reserve stabilizes money balances (in other words, that the LM curve is steep). As demonstrated in the first section of this article, the resulting increased cost of credit to borrowers causes the IS curve to shift inward to the left. In addition, as discussed earlier in this section, the reduced competitiveness of the banking system will likely be accompanied by an unusual decline in the demand for money (that is, a leftward shift of the money demand curve) that causes a rightward shift in the LM curve. In Figure 14, the economy is initially at point A. If the IS shift is large enough to outweigh the LM shift, then increased regulatory burden on banks will result in weakness in nominal income and a decline in short-term interest rates (point B) that accompanies both a case of missing money and a decline in the share of credit supplied by banks. These results, which are broadly consistent with recent events, are also obtained if the LM curve is less steep.

20 This qualitative result is also obtained if the central bank does not rigidly target money balances, thereby making the LM curve less steep. Quantitatively, however, the problem of inference using a monetary targeting perspective is smaller when the LM curve is less steep.
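The contrast drawn in Figures 12b and 13b can be reproduced numerically. The sketch below uses a stylized linear IS-LM system with made-up coefficients (the values of b, k, h, and m0 are illustrative assumptions, not the article's model); a rigid money target corresponds to a money supply slope f of zero, which steepens the LM curve:

```python
# Stylized linear IS-LM sketch of Figures 12b and 13b, with
# hypothetical parameters.
# IS: y = A - b*r.  Money demand: m = k*y - h*r + u (u = demand shock).
# Money supply: m = m0 + f*r; f = 0 means a rigid money target
# (a vertical money supply curve, hence a steeper LM curve).

def output(A, u, f, b=1.0, k=0.5, h=0.5, m0=50.0):
    """Nominal output where the goods and money markets both clear."""
    r = (k * A + u - m0) / (h + f + k * b)
    return A - b * r

base = dict(A=120.0, u=0.0)

# An IS shock (A rises by 10) moves output LESS under a money target...
dy_is_flex  = output(130.0, 0.0, f=1.0) - output(**base, f=1.0)
dy_is_fixed = output(130.0, 0.0, f=0.0) - output(**base, f=0.0)

# ...but a money demand shock (u falls by 10, as when bond funds grow
# more popular) moves output MORE under a money target.
dy_md_flex  = output(120.0, -10.0, f=1.0) - output(**base, f=1.0)
dy_md_fixed = output(120.0, -10.0, f=0.0) - output(**base, f=0.0)

print(f"IS shock:           dy = {dy_is_flex:.2f} (flexible) "
      f"vs {dy_is_fixed:.2f} (money target)")
print(f"money demand shock: dy = {dy_md_flex:.2f} (flexible) "
      f"vs {dy_md_fixed:.2f} (money target)")
```

With these numbers an IS shock moves output by 5.00 under the money target versus 7.50 under the flexible money supply, while the money demand shock moves output by 10.00 versus 5.00, which is exactly the Poole (1970) trade-off the text describes.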
Unusual shifts in either the IS or LM curves complicate policy-making aimed at stabilizing aggregate demand because policymakers readily observe interest rates, whereas estimates of nominal GDP are available after a lag and are subject to substantial revision. In the context of Figure 14, notice how the new level of the short-term market interest rate understates economic weakness if it is assumed that the IS curve did not shift. In this case, consider what would happen if an analyst mistakenly assumed that a money demand shock (an unusual decline in money demand) caused the LM curve to shift rightward so that the new LM curve intersected the IS curve at point C. Based on this assumption, one would mistakenly infer from the new interest rate level (rC) that the economy was at point C rather than point B and that nominal output was YC rather than YB. At the same time, the unusual weakness in money balances held overstates economic weakness if one assumes that a money demand shock did not occur but that the IS curve shifted. In this case, one would infer from the original LM curve and a market interest rate of rC that the IS curve shifted left to put nominal output at point D and that income has fallen to YD, which is less than YC.20 Thus, because changes in bank regulations can, in principle, shift both the IS and LM curves, they can create problems for the use of either an interest rate or money target. Furthermore, the dichotomy of overestimating GDP growth from an interest rate targeting perspective and underestimating GDP growth from a money targeting perspective in this example may have relevance for recent events.
In particular, this dichotomy parallels the tendency of most major economic forecasters to have overpredicted GDP growth during 1991–92 (in other words, they assumed that the economy was at a point like C in Figure 14), while M2 growth suggested that GDP growth should have been weaker than it actually was (point D).

Conclusion

Evidence from three cases of missing money indicates that factors reducing the competitiveness of banks accompanied each episode. In the two earlier cases (1974–75, 1979–80), the interaction between controls on deposit rates and high market interest rates spawned innovations and reactions that reduced M1 growth. The subsequent policy responses of deregulating deposit rates and of preventing inflation from accelerating have prevented these types of factors from spawning further innovations that destabilize money demand. The most recent case of missing money also reflects how regulatory and nonregulatory factors have encouraged firms and households to bypass banks and thrifts. On the asset side of bank balance sheets, risk-based capital standards have raised the cost of bank loans, which in turn has encouraged firms to shift away from bank finance and some households to pay down consumer debt by drawing down their bank deposits. On the liability side of bank balance sheets, the steepening of the yield curve, the depressing effects of higher FDIC insurance premiums on deposit rates, and RTC resolution activity have encouraged households to shift away from bank deposits to bond (and perhaps equity) mutual funds (Duca 1993). As a result of these factors, both sides of bank and thrift balance sheets have declined in unusual ways. This combination of influences is suggestive of leftward shifts in the supply and demand for money, and thus may account for why money demand models are overpredicting M2 growth.
The appropriate regulatory response to the recent case of missing money is less clear-cut than in earlier episodes. While capital standards and increases in risk-based deposit insurance premiums have ostensibly induced banks to widen spreads between loan and deposit rates, they also have the desirable effect of shifting the downside risk of lending away from taxpayers to bank equity holders. Determining whether risk-based capital standards and deposit insurance premiums are appropriate is beyond the scope of this article. Nevertheless, this study has several implications for monetary policy. First, changes in the competitiveness of the banking system can alter the information content of monetary aggregates. Second, the demand for money can be altered by factors affecting long-term market interest rates. As argued by Feinman and Porter (1992), both considerations suggest that the Federal Reserve does not have as much direct control over M2 as previously thought, implying that the monetary aggregates need to be interpreted in more complicated ways. Third, by causing IS (goods market) and LM (money market) disturbances, changes in the regulatory burden on banks have created problems for both interest rate and monetary aggregate targeting in three recent recessions. By implication, conducting a sound monetary policy is not as easy as either hindsight or ex post monetary indicators suggest. As a result, achieving broad economic goals requires that a central bank exercise a good deal of judgment and discretion in conducting its operating procedures. Finally, the previous and current episodes of missing money imply that the Federal Reserve should take an active role in policy actions that affect the competitiveness of the banking system and ensure that the consequences of such actions for the implementation of monetary policy are taken into account when formulating these policies.
For this reason, the Federal Reserve must have significant input into the regulation of banks if it is to fulfill its mission as a central bank.

Regulation Q and the Competitiveness of Banks and Thrifts

The degree to which Regulation Q put banks and thrifts at a competitive disadvantage in raising loanable funds can be gauged by measuring the extent to which market interest rates rose above deposit rate ceilings. The measurement of Regulation Q effects raises three issues:

1. which retail deposit rate to use,
2. whether rate ceilings for thrifts or banks should be used, and
3. how to handle the introduction of market-rate-based deposit instruments prior to the lifting of all rate ceilings on nontransactions deposits in 1983.

With respect to issue 1, the Regulation Q variable presented here reflects regulations affecting small time deposits for two reasons. First, because small time deposits serve more as savings than as transactions instruments, they are more sensitive to their opportunity cost than are other types of household M2 deposits. Second, most market-based deposit instruments introduced in the late 1970s were, by design, substitutes for small time deposits. In handling issue 2, rate ceilings on thrifts were used because regulations tended to favor thrifts: rate ceilings on thrift accounts were as high as, if not higher than, those on bank deposits. In addressing issue 3, two basic types of partially regulated, deposit-type instruments were introduced by law before 1983: small-saver certificates and money market certificates.
Small-saver certificate regulations were used in constructing a Regulation Q variable because minimum balance requirements on small-saver certificates ($500 to $1,000) were much more similar to those on retail deposits than were the requirements on money market certificates ($10,000) over most of the late 1970s and early 1980s.

Figure A: The Bindingness of Regulation Q Ceilings on Deposit Rates (percentage points, 1959–92)

Given these considerations in dealing with issues 1, 2, and 3, the Regulation Q measure here is defined using spreads between market interest rates and rate ceilings on small time deposits and/or small-saver certificates. Between 1960 and the second quarter of 1978, this measure (REGQ) equals the quarterly average spread between the three-year Treasury rate and the rate ceiling on three-year small time deposits when the ceiling was binding, and zero otherwise. Starting in the third quarter of 1979, when small-saver certificates were created, REGQ equals one of the following based on quarterly averages of monthly data:

a. any ceiling spread set by legislation between market interest rates and rates on small-saver certificates,1
b. the maximum of zero and the difference between the 2-1/2-year Treasury yield (constant maturity) and any legislated cap on small-saver rates,2 or
c. zero since August 1981, when rate ceilings on small-saver certificates were removed.

For details on deposit regulations, see Mahoney, White, O’Brien, and McLaughlin (1987). As can be seen in Figure A, Regulation Q was very binding in the 1974–75 and 1979–80 periods, when missing-money problems were arising for M1.
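The REGQ construction described above can be sketched in a few lines. The function name and the sample rates below are illustrative, not actual historical data:

```python
# Sketch of the Regulation Q bindingness measure described above: REGQ equals
# the spread between a market Treasury yield and the applicable deposit-rate
# ceiling when the ceiling binds, and zero otherwise. Rate values below are
# hypothetical, not historical data.

def regq(market_rate, ceiling):
    """Spread between the market rate and the rate ceiling when binding.

    ceiling=None represents a period after ceilings were removed (e.g.,
    after August 1981 for small-saver certificates), so REGQ is zero.
    """
    if ceiling is None:
        return 0.0
    return max(0.0, market_rate - ceiling)

# Illustrative quarters (percent per year):
print(regq(9.5, 7.25))   # binding ceiling -> positive spread
print(regq(5.0, 7.25))   # ceiling above the market rate -> 0.0
print(regq(10.0, None))  # ceilings lifted -> 0.0
```

Applying this rule quarter by quarter, with the applicable instrument and ceiling for each period, reproduces the shape of the measure plotted in Figure A.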
These disintermediation effects were largely ended when rate ceilings were dropped on small-saver certificates in August 1981 and were completely eliminated with the lifting of all ceilings on nonbusiness deposit rates in 1983. The earlier episodes of binding Regulation Q ceilings (the early 1960s and again in 1967) were not accompanied by missing-money episodes, mainly because they were not accompanied by innovations (such as the creation of money substitutes) that affected the demand for money.

1 These set spreads ranged from zero to 50 basis points.
2 Between January 1980 and August 1981, ceilings on small-saver yields were based on the 2-1/2-year constant maturity Treasury yield.

The Impact of Risk-Based Capital Standards and the FDIC Insurance Premium Hike on Banks’ Costs

While U.S. commercial banks were not subject to a minimum capital rule before year-end 1990, they attempted to meet an unofficial goal of maintaining a minimum ratio of 6 percent total capital (equity plus subordinated debt) to assets. Using that ratio as a base, the capital standards that were fully implemented at year-end 1992 raised the effective minimum ratio of total capital to loans from 6 percent to 8 percent. In light of emerging loan quality problems in 1990, many large banks acted as though capital standards were fully implemented at year-end 1990 to reassure market investors that the banks could meet the final phase-in of capital standards at year-end 1992. The effect of the new capital standards on the marginal cost of lending roughly equals the additional capital banks need (0.08 – 0.06 = 0.02 per dollar of loans) multiplied by the extent to which the yield on capital (ROE) exceeds that on insured deposits (rd). Because most banks cannot issue subordinated debt, assume that capital costs roughly equal a targeted return on equity (ROE). Based on anecdotal evidence, let’s use a target ROE of 15 percent.
For the yield on deposits, let’s use 4 percent, which roughly approximates the average rate on six-month time deposits over most of 1992. Based on these figures, the new capital standards raised the cost of funding C&I loans by 0.22 percent (0.02 x 0.11). In addition, in the second half of 1990 the Federal Deposit Insurance Corporation (FDIC) announced that it would increase the insurance premium levied on insured deposits by 0.075 percent, from 12 to 19-1/2 cents per $100 of deposits. To remain profitable, banks eventually would need to pass on the extra costs of tougher capital standards and higher insurance premiums (0.295 percent) to their customers in the form of a wider spread between loan and deposit rates. How does 0.295 percent compare with the pricing of the prime lending rate? Banks typically set the prime rate equal to the cost of borrowing overnight funds in the federal funds market plus some spread to compensate themselves for administrative costs, default risk, and some target return to equity holders. Because default risk varies with the business cycle, the spread between interest rates on bank loans and a competing source of credit can be used as an indicator of how competitive banks are in providing C&I loans, provided that the spread moves with default risk. One increasingly popular interest rate spread is that between the prime rate and commercial paper (Friedman and Kuttner 1992). Compared with the calculated impact of both capital standards and deposit insurance changes (0.295 percent), the spread between the prime rate and the one- to two-month prime commercial paper rate rose by a somewhat higher 0.50 percent near year-end 1990, as indicated earlier in Figure 10.
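The box's back-of-the-envelope arithmetic can be reproduced directly. The 15 percent ROE and 4 percent deposit rate are the assumed values stated in the text, not measured data:

```python
# Back-of-the-envelope arithmetic from the discussion above (figures taken
# from the text; the 15 percent target ROE and 4 percent deposit rate are
# the article's assumed values).

extra_capital = 0.08 - 0.06      # 2 cents more capital per dollar of loans
funding_spread = 0.15 - 0.04     # target ROE minus deposit rate = 0.11

# Cost of tougher capital standards, in decimal form (0.22 percent):
capital_cost = extra_capital * funding_spread

# FDIC premium hike: 12 to 19.5 cents per $100 of deposits (0.075 percent):
premium_cost = (0.195 - 0.12) / 100.0

# Total extra cost banks would need to pass on through wider spreads:
total_pct = 100.0 * (capital_cost + premium_cost)
print(round(total_pct, 3))  # 0.295 percent
```

This 0.295 percent is the benchmark against which the text compares the roughly 0.50 percent rise in the prime-commercial paper spread.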
The somewhat greater rise in this spread partly reflects the requirement that many banks have risk-based capital ratios greater than 8 percent, based on regulator assessments of the bank’s soundness. For this reason, the calculated effect of the new bank capital standards presented here likely understates their average effect on banks’ cost of funding loans. The remainder of the increase in the spread may also reflect a slight increase in the default risk on bank loans compared with commercial paper. While both rates rise with a pervasive increase in default risk, this spread tends to widen temporarily during recessions, perhaps because the issuers of commercial paper generally are more established firms and, during tough times, are less prone to default on loans. Nevertheless, this spread has not narrowed to prerecessionary levels. This evidence and the fact that the spread widened during the phase-in of the new capital standards strongly suggest that they have raised the cost of prime-based bank loans.

References

Cook, Timothy, and Thomas Hahn (1989), “The Effect of Changes in the Federal Funds Rate on Market Interest Rates in the 1970s,” Journal of Monetary Economics 24 (November): 331–51.

Cope, Debra (1992), “Banks Making Inroads on Mutual Funds,” American Banker (December 9): 1, 12.

Crabbe, Leland, and Mitchell A. Post (1992), “The Effect of SEC Amendments to Rule 2A–7 on the Commercial Paper Market,” FEDS Working Paper no. 199, Board of Governors of the Federal Reserve System (May).

Diamond, Douglas W. (1991), “Monitoring and Reputation: The Choice between Bank Loans and Directly Placed Debt,” Journal of Political Economy 99 (August): 689–721.

Dotsey, Michael, Steve Englander, and John C. Partlan (1981), “Money Market Mutual Funds and Monetary Control,” Federal Reserve Bank of New York Quarterly Economic Review (Winter): 9–13.

Duca, John V.
(forthcoming), “RTC Activity and the ‘Missing M2’,” Economics Letters.

——— (1993), “Should Bond Funds Be Added to M2?” unpublished manuscript, Federal Reserve Bank of Dallas.

——— (1992a), “The Case of the Missing M2,” Federal Reserve Bank of Dallas Economic Review (Second Quarter): 1–24.

——— (1992b), “U.S. Business Credit Sources, Demand Deposits, and the ‘Missing Money’,” Journal of Banking and Finance 16 (July): 567–83.

Federal Reserve Bulletin (1993), (Washington, D.C.: Federal Reserve System), January.

Feinman, Joshua, and Richard D. Porter (1992), “The Continued Weakness in M2,” FEDS Working Paper no. 209, Board of Governors of the Federal Reserve System (September).

Friedman, Benjamin, and Kenneth Kuttner (1992), “Money, Income, Prices, and Interest Rates,” American Economic Review 82 (June): 472–92.

Goldfeld, Stephen M. (1976), “The Case of the Missing Money,” Brookings Papers on Economic Activity 7(3): 683–730.

Hallman, Jeffrey J., Richard D. Porter, and David H. Small (1991), “Is the Price Level Tied to the M2 Monetary Aggregate in the Long Run?” American Economic Review 81 (September): 841–58.

Hendershott, Patric (1980), “Real User Costs and the Demand for Single-Family Housing,” Brookings Papers on Economic Activity vol. 2: 401–44.

Jaffee, Dwight M., and Kenneth Rosen (1979), “Mortgage Credit Availability and Residential Construction,” Brookings Papers on Economic Activity vol. 2: 333–76.

Keeley, Michael C. (1990), “Deposit Insurance, Risk, and Market Power in Banking,” American Economic Review 80 (December): 1183–1200.

Mahoney, Patrick I., Alice P. White, Paul F. O’Brien, and Mary M. McLaughlin (1987), “Responses to Deregulation: Retail Deposit Pricing From 1983 through 1985,” Staff Study no. 151, Board of Governors of the Federal Reserve System (January).

Miller, Merton H., and Daniel Orr (1966), “A Model of the Demand for Money by Firms,” Quarterly Journal of Economics 80 (August): 413–35.

Moore, George R., Richard D. Porter, and David H.
Small (1990), “Modeling the Disaggregated Demands for M1 and M2 in the 1980’s: the U.S. Experience,” in Financial Sectors in Open Economies: Empirical Analysis and Policy Issues, eds. P. Hooper, K. H. Johnson, D. L. Kohn, D. E. Lindsey, R. D. Porter, and R. Tryon (Washington, D.C.: Board of Governors of the Federal Reserve System): 21–105.

Poole, William (1970), “Optimal Choice of Monetary Policy Instruments in a Simple Stochastic Macro Model,” Quarterly Journal of Economics 84 (May): 197–216.

Porter, Richard D., Thomas D. Simpson, and Eileen Mauskopf (1979), “Financial Innovation and the Monetary Aggregates,” Brookings Papers on Economic Activity 10 (1): 213–29.

Post, Mitchell A. (1992), “The Evolution of the U.S. Commercial Paper Market since 1980,” Federal Reserve Bulletin (December): 879–91.

Simpson, Thomas D. (1980), “The Redefined Monetary Aggregates,” Federal Reserve Bulletin (February): 97–114.

Small, David H., and Richard D. Porter (1989), “Understanding the Behavior of M2 and V2,” Federal Reserve Bulletin (April): 244–54.

Tinsley, P. A., B. Garrett, and M. E. Friar (1981), “An Exposé of Disguised Deposits,” Journal of Econometrics 15 (January): 117–37.

Wenninger, John, Lawrence Radecki, and Elizabeth Hammond (1981), “Recent Instability in the Demand for Money,” Federal Reserve Bank of New York Quarterly Economic Review (Summer): 1–9.

What Determines Economic Growth?

David M. Gould
Economist, Federal Reserve Bank of Dallas

Roy J. Ruffin
Professor of Economics, University of Houston

Since 1973, per capita income growth in the United States and other advanced countries has slowed to 2.2 percent a year, or almost half the 3.9-percent annual rate of the preceding quarter century.
If the United States had maintained the level of growth experienced in the 1950s and 1960s, real per capita income today would be about 11 percent ($2,200 in 1987 dollars) greater than it actually is. In contrast, it has been estimated that eliminating the variability in U.S. consumption since World War II would be equivalent to boosting current real consumption by only about 4.8 percent ($420 in 1987 dollars).1 If the choice is between long-term growth policies and further short-term stabilization policies, long-term growth policies clearly have the potential for vastly higher benefits. Perhaps the reason economists have neglected long-run economic growth is that, for a long time, the profession relied on a theory that offered little scope for policy to influence important sources of growth. According to traditional growth theory, the main determinants of long-run economic growth are not influenced by economic incentives. Recently, however, the study of economic growth has been reinvigorated by new developments in theory and empirical findings that suggest growth is within the sphere of policy. This new literature, referred to as endogenous growth theory, helps to explain movements in long-term growth and why some countries grow faster than others. Because long-term economic growth is the fundamental determinant of whether our grandchildren will have better lives than ours and whether the poor nations will catch up with or fall further behind the rich nations, this article summarizes what economists have learned about economic growth and applies recent empirical findings to these issues. The first section examines the long-term growth record, focusing on the extent of growth variations across countries and across decades. The second section presents the traditional growth model and recently developed endogenous growth models.
The next section discusses whether poor countries are catching up with richer nations or whether the rich are getting relatively richer. The fourth section examines factors that have been found to influence long-run economic growth, and the last section presents lessons for the future.

A historical perspective on economic growth

Despite the recent slowdown in economic growth, long-run growth not only has persisted since the early nineteenth century but has accelerated. Maddison (1991) has documented the persistence and acceleration of economic growth for 14 advanced capitalist countries (Table 1). The annual growth rate of the 14 countries averaged only 0.9 percent from 1820 to 1870 but rose to 1.6 percent between 1870 and 1989. For the forty years from 1950 through 1989, growth in these countries was at an even higher rate (3.2 percent), despite the slowdown since 1973.

John V. Duca offered many helpful comments as the reviewer for this article. We also benefited from the discussions and comments of W. Michael Cox, Dani Ben-David, Ping Wang, and Mark A. Wynne. All remaining errors are solely our responsibility.

1 Lucas (1987, 27). Lucas estimates that eliminating the variability in U.S. consumption would be equivalent to increasing average real consumption about 0.1 percent per year. However, if current income volatility has effects on future productivity, as allowed for by Ramey and Ramey (1991), the long-run costs of volatility may be higher.

Table 1
Growth Rates of Per Capita Real GDP, by Country (Annual Averages, Percent)

Country           1820–1870   1870–1989   1950–1973   1973–1989
Australia             1.9         1.2         2.4         1.7
Austria                .6         1.8         4.9         2.4
Belgium               1.4         1.5         3.5         2.0
Denmark                .9         1.8         3.1         1.6
Finland                .8         2.3         4.3         2.7
France                 .8         1.8         4.0         1.8
Germany                .7         2.0         4.9         2.1
Italy                  .4         2.0         5.0         2.6
Japan                  .1         2.7         8.0         3.1
Netherlands            .9         1.5         3.4         1.4
Norway                 .7         2.2         3.2         3.6
Sweden                 .7         2.1         3.3         1.8
United Kingdom        1.2         1.4         2.5         1.8
United States         1.5         1.9         2.2         1.6
Average                .9         1.6         3.9         2.2

SOURCE: Maddison (1991).
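The rates in Table 1 are average annual compound growth rates. A short sketch shows how such a rate is computed and how it compounds over a period (the doubling example is illustrative):

```python
# How average annual compound growth rates, like those in Table 1, translate
# into cumulative income changes (illustrative arithmetic only).

def compound_rate(y0, yT, years):
    # Average annual compound growth rate between two income levels.
    return (yT / y0) ** (1.0 / years) - 1.0

def cumulative_multiple(rate, years):
    # Factor by which income grows over `years` at a constant annual rate.
    return (1.0 + rate) ** years

# At the 14-country average of 3.9 percent a year over 1950-73 (23 years),
# per capita income roughly multiplied by:
print(round(cumulative_multiple(0.039, 23), 2))  # 2.41

# Recovering a compound rate: income that doubles over 50 years grew at
print(round(100 * compound_rate(1.0, 2.0, 50), 2), "percent a year")  # 1.4
```

Small differences in annual rates compound into large differences in levels over decades, which is why the growth slowdown after 1973 matters so much.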
Maddison (1991) and Romer (1986) have shown that the leading technological country, defined in terms of productivity per worker hour, has experienced increasing rates of growth since 1700. Intuitively, this pattern may be a consequence of the creation of new technology in the leading country. According to Maddison, there have been only three technological leaders in the past four centuries: the Netherlands, the United Kingdom, and the United States. Maddison’s research shows that the growth rate of the leading country in three successive periods increased relative to that of the leader in the preceding period (Table 2). Using annual data spanning up to 130 years across 16 countries, Ben-David and Papell (1993) find evidence of increasing per capita growth rates of gross domestic product (GDP). They find that in the more recent periods, trend GDP per capita growth rates were, on average, 2.5 times higher than the growth rates in the earlier periods. Just by examining the GDP growth record in the United States, we can see that growth over the long run has tended to accelerate. Table 3 shows the per capita growth of the United States in five successive periods. The annual growth rate has risen from 0.58 percent for 1800–1840 to 1.82 percent for 1960–91. Romer (1986) also found a similar upward drift in the growth rates of per capita GDP for all countries in the Maddison (1991) sample for which data were available.

Table 2
GDP Growth Rates of Leading Technological Countries

Period        Leading country at beginning of period   Average annual compound growth rate (Percent)
1700–1820     Netherlands                               –.05
1820–1890     United Kingdom                            1.2
1890–1989     United States                             2.2

SOURCE: Maddison (1991).

How reliable are the Maddison–Romer conclusions? Two factors—changes in household production and upward biases in measures of the rate of inflation—would seem to strengthen their conclusion that growth has been persistent and accelerating.
First, GDP does not cover household production and leisure. Because hours of work have steadily decreased, it would seem that nonmarket output should have increased relative to measured GDP. Hours of work seemed to be roughly constant at about 3,000 worker hours per year until about 1870, when they began to drop (Maddison 1991, 270–71, 276). Today, annual worker hours are about 1,600 in most of the advanced industrial countries, which suggests a substantial increase in leisure time. Leisure time, however, has probably not increased as much as worker hours have fallen because labor force participation rates have increased. Nonetheless, had the measures of GDP included estimates of the value of household production and leisure time, it is likely that the growth rates would have been higher in the period since 1870.

Table 3
Per Capita Real GDP Growth in the United States

Period       Average annual compound growth rate (Percent)
1800–1840     .58
1840–1880    1.44
1880–1920    1.78
1920–1960    1.68
1960–1991    1.82

SOURCES: Economic Report of the President (1992); Romer (1986).

Another factor understating the pickup in living standards in this century is the overestimation of inflation. The rate of growth in real GDP is the rate of growth in nominal GDP less the rate of inflation as measured by some price index. But price indexes tend to overstate changes in the cost of living (Gordon 1992). They are biased upward partly because they do not incorporate new products in a timely fashion; for example, the consumer price index (CPI) did not include automobiles until 1940, several decades after production of the Model T. Price indexes also do not completely incorporate quality changes in existing products, account for the substitution of cheaper goods for more expensive goods over time, or take into consideration the availability of discount outlets.
If the CPI is off by, say, 0.5 percent per year, then official measures can understate the real rate of GDP growth by the same magnitude. The substitution bias has been estimated to be about 0.18 percent per year in the United States, and quality changes in U.S. consumer durables have biased the rate of price increase in these products by about 1.5 percent per year (Gordon 1992). Thus, a bias of 0.5 percent or even 1 percent per year would not be too farfetched. What can explain accelerating growth over nearly two centuries? Is the recent slowdown in per capita growth a new trend or a temporary setback? The following section addresses these questions in discussing the two main theories of economic growth.

Theories of economic growth

Growth is a complicated process, but the main theories of economic growth are conceptually simple. There are basically two categories of economic growth theories—those based on the traditional Solow (1956) growth model and those based on the concept of endogenous growth. The Solow model emphasizes capital accumulation and exogenous rates of change in population and technological progress. This model predicts that all market-based economies will eventually reach the same constant growth rate if they have the same rate of technological progress and population growth. Moreover, the model assumes that the long-run rate of growth is out of the reach of policymakers. The recent proliferation of endogenous growth models began with the work of Paul Romer. Romer (1986) observed that traditional theory failed to reconcile its predictions with the empirical observations that, over the long run, countries appear to have accelerating growth rates and, among countries, growth rates differ substantially. Endogenous growth theories are based on the idea that long-run growth is determined by economic incentives. The most popular models of this type maintain that inventions are intentional and generate technological spillovers that lower the cost of future innovations. Naturally, in these models an educated work force plays a special role in determining the rate of technological innovation and long-run growth. The following subsections discuss the structure of the two models.

The Solow growth model. The traditional growth model advanced by Robert Solow (1956), a Nobel Prize winner, is perhaps the most famous one.2 The key idea of this framework is that growth is caused by capital accumulation and autonomous technological change. Solow views the world as one in which output, Y, is generated by the production function

(1) Y = F(K, L),

where K is the capital stock and L is the labor force. Solow postulated that the production function displays constant returns to scale, so that doubling all inputs would double output. However, holding one input constant—say, labor—and doubling capital will yield less than double the amount of output. Referred to as the law of diminishing marginal returns, this is one of the distinguishing elements of the Solow model. The Solow model is driven by savings and variations in the ratio of capital to labor. Suppose that k = K/L is the capital–labor ratio. It is convenient to begin with the observation that the percentage change in k equals the percentage change in K less the percentage change in L; that is,

(2) ∆k/k = ∆K/K – ∆L/L.

The change in the capital stock equals investment, and investment equals the output that is saved rather than consumed. Thus,

(3) ∆K = sY,

where s is the savings rate. Solow assumed that both the savings rate, s, and the growth rate of population, ∆L/L, are constant. Substituting (3) into (2) and multiplying by k yields

(4) ∆k = sY/L – k(∆L/L).

Equation 4 has a simple interpretation.

2 For a recent exposition, see Wynne (1992).
The term sY/L is the amount of investment per unit of the labor force. The term k(∆L/L) is the amount of investment per worker that is necessary to maintain the capital–labor ratio, k = K/L. For example, suppose K equals $5 million and L equals 100, so k equals $50,000. Then, a population growth rate of 2 percent requires $1,000 of new investment per worker ($100,000 in total) to keep k equal to $50,000. The Solow model is depicted in Figure 1, which measures investment per worker on the vertical axis and the capital–labor ratio on the horizontal axis. The amount of investment per worker, sY/L, increases at a decreasing rate because, by the law of diminishing returns, additions to per capita output, Y/L, shrink as k rises. On the other hand, investment per worker required to keep capital intact, k(∆L/L), rises steadily because it is just proportional to k. Therefore, the two curves are likely to intersect at some equilibrium capital–labor ratio, k*. When the capital–labor ratio is less than k*, actual investment per worker exceeds that required to keep k constant, so k rises. When the capital–labor ratio is more than k*, investment per worker falls short of that required to keep k constant, so k falls. Thus, the economy gravitates toward k*. This is called a steady state because the economy can persist forever at this point. The capital stock and the level of output are rising at the same rate as the growth in the population, so per capita income, Y/L, does not change. In the Solow growth model, where technological progress is exogenous, income will rise with the level of physical or human capital (accumulated human knowledge), but the rise will not generate ever-increasing growth rates. Skilled workers increase the level of income, just like any other productive factor, but they do not increase growth in the long run because technological progress does not depend on the presence of a skilled work force.
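The dynamics of equation (4) can be sketched under an assumed Cobb-Douglas technology, Y/L = k**alpha. The article works with a general production function, and all parameter values here are purely illustrative:

```python
# Sketch of the dynamics in equation (4) under an assumed Cobb-Douglas
# technology, Y/L = k**alpha (the article uses a general production
# function F; parameter values are illustrative only).

alpha = 0.3   # capital's share of output
s = 0.20      # savings rate
n = 0.02      # population growth rate

def delta_k(k):
    # Equation (4): change in the capital-labor ratio.
    return s * k**alpha - n * k

# Closed-form steady state: s*k**alpha = n*k  =>  k_bar = (s/n)**(1/(1-alpha))
k_bar = (s / n) ** (1.0 / (1.0 - alpha))

# Starting from either side of k_bar, the economy gravitates toward it.
for k0 in (1.0, 100.0):
    k = k0
    for _ in range(2000):
        k += delta_k(k)
    print(round(k, 2))  # both paths settle at roughly k_bar
```

Both simulated paths converge to the same steady-state ratio, the intersection of the two curves in Figure 1, where investment per worker just offsets population growth.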
The basic conclusion of the model is that the rate of growth of the economy in the long run simply equals the rate of growth in the labor force plus the rate of exogenously determined technological progress. It is important to note that the rate of savings affects only the level of GDP, not the long-run rate of growth. A larger rate of savings will cause the rate of growth to increase temporarily because greater capital accumulation increases the productivity of labor and the level of GDP. But in the long run, the rate of growth will settle down to the rate of change in the labor force plus the rate of technological progress.

Figure 1: The Solow Growth Model (investment per worker, sY/L, and required investment, k(∆L/L), plotted against the capital–labor ratio, k)

The Solow model implies that if rates of growth differ among countries, it is only because the countries are at different stages of movement toward the steady state. Rich countries should grow at a slower pace than poor ones; accordingly, over time, the per capita incomes of the rich and poor countries should converge.

Endogenous growth models with innovation. The Solow model suffers from its assumption that technological progress is not explained by economic forces. However, while the Solow model is silent on the mechanism of technological progress, some recently developed endogenous growth models have attempted to articulate the economic process behind technological development. Joseph Schumpeter (1950) and Jacob Schmookler (1966) have argued forcefully that technological progress takes place because innovators find it profitable to discover new ways of doing things. Technological progress does not just happen as a result of disinterested scientists operating outside the profit sector. Schmookler reviewed the record of important inventions in petroleum refining, papermaking, railroading, and farming and found “not a single, unambiguous instance in which either discoveries or inventions” were solely the result of pure intellectual inquiry (p. 199).
Rather, the incentive was to make a profit. The implication is that productivity growth might be related to the structure and policies followed by the economy, rather than to the exogenous forces of nature and luck. If growth is endogenous, we would expect to find a wide variation in the rates of growth of different nations, with no apparent correlation with their levels of per capita income. Research and development are carried out to make a profit on a new product. But every new product adds to the stock of human knowledge, so the cost of innovation falls as knowledge accumulates. To use an old metaphor, we stand on the shoulders of those who precede us. Obviously, the car required the prior invention of the wheel and the gasoline engine. Panati (1987) gives some interesting examples. The potato chip followed french fried potatoes; detergents followed soap (by 3,000 years); the hair dryer was suggested by the vacuum cleaner; and athletic shoes required vulcanized rubber. These examples suggest that the rate of growth of the economy will vary directly with the rate of introduction of new products: think of the automobile, the airplane, the personal computer, or the television set. Some recently developed endogenous growth models have tried to capture the process behind the introduction of new products.3 In these models, technological progress is faster, the larger is the level of accumulated human knowledge. The explanation is that the cost of innovation falls as the level of human knowledge increases.4 As opposed to the Solow model, there are no diminishing returns to capital when other factors are held constant; so, raising the level of capital can lead to ever-increasing growth rates. Therefore, income growth will tend always to be faster among countries that have a relatively large stock of capital, a large educated population, or an economic environment that is favorable to the accumulation of human knowledge.
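A toy simulation (not the formal model presented in the Appendix; all names and values below are made up) captures the spillover mechanism: when the cost of an innovation falls as the knowledge stock grows, a fixed research effort sustains growth that depends on incentives and never diminishes away:

```python
# Toy illustration of the spillover idea above: the cost of innovation falls
# as the knowledge stock A grows, so a fixed research effort buys ever more
# new knowledge and growth never peters out. Purely hypothetical parameters.

def simulate(research_effort, periods=50, a0=1.0, c0=10.0):
    """Knowledge-stock path when innovation cost is c0 / A (assumed form)."""
    a = a0
    path = [a]
    for _ in range(periods):
        innovation_cost = c0 / a                 # spillovers: cheaper as A rises
        a += research_effort / innovation_cost   # dA = (R / c0) * A
        path.append(a)
    return path

low = simulate(research_effort=0.5)   # grows 5 percent per period
high = simulate(research_effort=1.0)  # grows 10 percent per period

# Unlike the Solow model, the proportional growth rate depends on the
# incentive to do research and does not diminish as knowledge accumulates.
print(round(low[-1] / low[-2] - 1, 3), round(high[-1] / high[-2] - 1, 3))
```

Two economies with different research incentives grow at permanently different rates, so their income levels diverge rather than converge, which is the contrast with the Solow model drawn in the next section.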
The convergence hypothesis

A stark prediction of the Solow model is that countries with similar preferences and access to the same pool of technology should eventually reach the same per capita income level. Consequently, poor nations will tend to grow faster than richer nations until their income levels catch up with, or converge to, the income levels of rich countries. In contrast to the Solow model, the endogenous growth model makes no such predictions. The model allows for the possibility that countries that start off richer and have more resources, such as human or physical capital, may always be ahead of less developed countries.

3. See, for example, Lucas (1988), Romer (1990), and Grossman and Helpman (1991).

4. See the Appendix for a formal presentation of this model.

Figure 2: Real GDP Growth Per Capita and 1960 Real GDP Per Capita (scatter plot; vertical axis: average annual per capita GDP growth rate, 1960–85, in percent; horizontal axis: 1960 GDP per capita, in thousands of dollars. SOURCE OF PRIMARY DATA: Summers and Heston 1991.)

What is happening? Are the poor countries catching up with the richer nations, or are the rich getting relatively richer? Comparing their incomes is difficult because nations use different currencies and may have large variations in costs of living. If one uses market exchange rates to convert official GDP statistics into a common currency, the poorest 60 percent of the world’s nations received only about 5 percent of the world’s income in 1988, down from about 10 percent in 1960. It appears the poor countries are losing out. But the cost of living in poor countries is lower than in rich countries.
To correct for this difference, it is necessary to use a measure of purchasing power parity—that is, the exchange rate that would make the costs of living of countries comparable. Robert Summers and Alan Heston (1991), therefore, recalculated the incomes of nations, using estimates of purchasing power parities. On this basis, the Solow model apparently has the correct predictions because income convergence appears to be taking place. From 1960 to 1988, the share of the poorest 60 percent of the world’s population rose from about 17 percent of world income to almost 21 percent, while the share of the richest tier of countries fell from 68 percent of world income to about 60 percent.

This analysis may be misleading, however, because changes in income shares are highly sensitive to how one defines income classes. If one compares the richest 10 percent of countries with the poorest 10 percent of countries, convergence does not appear to be taking place. Generally, the middle-income countries, which are sometimes grouped with the very poor countries, are experiencing convergence with the rich nations.

Another way of determining the degree to which incomes are converging across countries is to observe the relationship between growth rates and levels of income. If income levels of countries tend to converge, poor countries should grow faster than richer countries as they catch up to reach the higher level of income. Figure 2 shows the relationship between income in 1960 and growth rates between 1960 and 1985 for 98 countries of the Summers–Heston data set. There does not appear to be any strong negative relationship between growth rates and the level of income, which may indicate that convergence is not taking place. If it were, the diagram should show a negative, or downward-sloping, relationship between the level of income and growth rates, rather than the relationship pictured.
Some have argued that a problem with Figure 2 is that it does not hold constant other factors that determine growth. If we examine the relationship of income levels in 1960 and economic growth between 1960 and 1985, holding constant human capital, we find that the poor countries appear to be catching up with the rich countries (Figure 3). Although income convergence conditional on human capital and other variables has been used as evidence against endogenous growth theory (Mankiw, Romer, and Weil 1992), it is not necessarily inconsistent.5

Figure 3: Real GDP Growth Per Capita and 1960 Real GDP Per Capita, Human Capital Held Constant (scatter plot; vertical axis: average annual per capita GDP growth rate, 1960–85, in percent; horizontal axis: 1960 GDP per capita, in thousands of dollars. SOURCES OF PRIMARY DATA: Summers and Heston 1991; United Nations 1971.)

We pointed out earlier that endogenous growth theory suggests that countries with higher levels of education (human capital) might provide greater incentives for invention and, therefore, much higher rates of growth. But holding human capital constant, endogenous growth theory may also predict convergence. Endogenous growth theory merely says that countries may diverge if they have different levels of human capital, all other factors constant. There is evidence suggesting divergence because countries do have different levels of human capital, and human capital tends to be positively correlated with economic growth.

5. Furthermore, the methodology of regressing average growth rates against initial income levels does not necessarily provide statistical evidence of convergence. For a description of this problem, see Danny Quah (1990).

The determinants of economic growth

The Solow and endogenous growth models have different implications for what is, or is not, important in determining the rate of growth. Using these models as guides, economists have tried to estimate the role of various factors suspected of determining the rate of economic growth. Does the real world behave like the endogenous growth model, in which technological progress and long-term growth are influenced by economic factors, or like the Solow growth model, in which the determinants of technological progress and growth are exogenous? The question is important because the answer can tell us how countries may influence their growth rates. It is a difficult question to answer because technological progress is a long-run phenomenon and may take centuries to observe. Furthermore, over the relatively short period for which data are available, factors that apparently influence growth rates may, in reality, be changing only income levels. Hence, an observed increase in growth may be just a short-run transition to a higher income level and not a permanent increase. For example, suppose Indonesia decides to subsidize college education by providing free tuition.

Figure 4: Solow Model: Increase in Income Due to Educational Subsidy (income per capita over years, with and without the educational subsidy; the paths separate during a transition period after the subsidy begins)

Figure 5: Endogenous Growth Model: Increase in Growth Due to Educational Subsidy (income per capita over years, with and without the educational subsidy; the paths diverge permanently after the subsidy begins)

6. One-third is typically found to be capital’s share of output across countries. This assumes a Cobb–Douglas production technology of the form Y = K^α L^(1–α), where α is capital’s share of output.

If the Solow model accurately reflects reality, Indonesia will experience faster growth in the transition to a higher level of income (because of more investment in the accumulation of human knowledge), but income growth in Indonesia will not permanently increase (Figure 4).
If the endogenous growth model is a better reflection of reality, the greater accumulation of human knowledge will result in not only higher income but also a permanently higher growth rate (Figure 5). The problem is distinguishing between the models in the short run. Both models make the same short-run prediction that free college tuition increases Indonesia’s growth. It is only in the long run, when growth either speeds up or does not change, that distinguishing between these two models becomes possible.

Plosser (1992) points out that the Solow growth model, even in the transition to a higher income level, cannot satisfactorily describe the changes in growth rates across countries. Imagine, for example, that Indonesia increases its rate of investment by 50 percent. As discussed above, the model predicts that the growth rate would immediately increase but would gradually decline over time until the new higher income and level of capital were reached. Assuming that the share of total capital in output is one-third, the Solow model predicts that income per capita would rise only about 22 percent.6 If the country completed the transition to the new higher level of income in thirty years, then the increase in the average annual growth rate would be about 0.7 percent per year.7 Consequently, in the standard Solow growth model, even large increases in investment rates do little to explain the observed large differences in growth rates across countries.

Studies have stressed different reasons why economic growth varies across countries. Because of the current popularity of endogenous growth models, most recent studies have focused on the role of human capital accumulation. However, human capital as an input to production is also important in the Solow model. In addition to human capital, a country’s economic environment can play an important role in influencing economic growth.
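Plosser’s transition arithmetic above follows directly from footnote 6: with Cobb–Douglas technology, steady-state income per capita is proportional to s^(α/(1−α)), so a 50-percent rise in the investment rate with α = 1/3 raises income by 1.5^(1/2) − 1, and spreading that level gain over a thirty-year transition gives the added annual growth. A quick check of the numbers:

```python
alpha = 1 / 3                                    # capital's share of output (footnote 6)
level_gain = 1.5 ** (alpha / (1 - alpha)) - 1    # steady-state income rises with s^(a/(1-a))
annual = (1 + level_gain) ** (1 / 30) - 1        # spread over a thirty-year transition
print(f"{level_gain:.1%}")   # 22.5% -- the "about 22 percent" in the text
print(f"{annual:.2%}")       # 0.68% -- the "about 0.7 percent per year"
```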
For example, internal competitive structure, a country’s openness to trade, its political stability, and the efficiency of its government can influence innovative activity and economic growth, as discussed below.

Human knowledge. Does educating a work force increase a country’s growth? Barro (1991), Mankiw, Romer, and Weil (1992), Levine and Renelt (1992), and Gould and Ruffin (1993), among others, have found evidence suggesting an educated populace is a key to economic growth. A larger educated work force may increase growth either because of faster technological progress, as individuals build on the ideas of others, or by simply adding to the productive capacity of a country. For example, in 1960, only 7 percent of Guatemala’s children of secondary school age actually attended secondary school. Barro (1991) estimates that had the Guatemalans invested in education to increase attendance to a relatively modest 50 percent in 1960, the country’s growth rate per capita from 1960 to 1985 might have increased an amazing 1.3 percentage points per year.

Figure 6 depicts the empirical relationship between GDP growth rates and secondary school enrollment as a proportion of the working-age population for 98 countries between 1960 and 1985.8 On the vertical axis are average annual per capita growth rates for 1960–85, and on the horizontal axis is the log of secondary school enrollment rates as a proportion of the working-age population in 1960. Held constant are income levels in 1960, as well as capital savings rates. The slope of the fitted line in Figure 6 implies that increasing the secondary school enrollment rate a modest 2 percentage points, from 8 percent to 10 percent, raises the average growth rate an estimated 0.5 percentage point per year, holding other factors constant.
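Because the horizontal axis of Figure 6 is the log of the enrollment rate, the fitted line converts enrollment changes into growth effects through log differences. The arithmetic below backs out the slope implied by the 8-to-10-percent example in the text; the second calculation then applies that slope to a purely hypothetical enrollment change of our own choosing:

```python
import math

# Slope implied by the example in the text: moving enrollment from 8% to 10%
# (a log change of ln(10/8) ≈ 0.223) raises growth about 0.5 point per year.
slope = 0.5 / math.log(10 / 8)
print(round(slope, 2))                       # ≈ 2.24 growth points per log point

# The same fitted line applied to a hypothetical doubling from 10% to 20%:
print(round(slope * math.log(20 / 10), 2))   # ≈ 1.55 points per year
```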
The figure suggests that one important way for poor countries to advance is through greater investment in education.

Figure 6: Partial Association Between Real GDP Growth Per Capita and School Enrollment Rate (scatter plot with fitted line; vertical axis: average annual per capita GDP growth rate due to schooling, 1960–85, in percent; horizontal axis: log of secondary school enrollment as a percent of the working-age population, 1960. SOURCE OF PRIMARY DATA: Mankiw, Romer, and Weil 1992.)

7. Plosser (1992) notes that there is considerable controversy over how fast an economy moves to its new level of income. Barro and Sala-i-Martin (1992) and Mankiw, Romer, and Weil (1992) estimate that it takes between 25 years and 110 years for one-half of the transition to be completed, depending on the sample and other characteristics considered. King, Plosser, and Rebelo (1988) compute the half-life of the transition as ranging from 5 years to 10 years under their parameter assumptions.

8. School enrollment rates are often used as a proxy for accumulated knowledge because of the lack of data on the size of the educated population. A problem with using school enrollment rates, however, is that they measure the increase in the size of the educated population rather than the actual stock of educated people. Using literacy rates across countries may be more attractive because they are a measure of the stock of educated people. They, too, present problems, however, as literacy rates are sometimes measured differently across countries.

Figure 7: Partial Association Between Real GDP Growth Per Capita and Government Consumption in GDP (scatter plot; vertical axis: average annual per capita GDP growth rate, 1960–85, in percent; horizontal axis: percent of government consumption in GDP, 1960. SOURCE: Barro 1991, 431.)

Political and governmental factors. How much does the political environment help or hinder economic growth? One can imagine that an extremely unstable government (one that is susceptible to rapid policy reversals) would create insecurity about the future and decrease the incentives to invest in future development. For example, oil exploration would disappear if private investors believe that the government will expropriate oil wells. Likewise, people will not invest in developing new products if they cannot reap the rewards of their ideas. This is why copyright and patent protection can be an important factor in economic growth. Although it is difficult to measure property rights in a country, a safe assumption is that they suffer when a country experiences a large number of revolutions and political assassinations. Holding constant the levels of education, income, and government consumption, Barro (1991) estimates that political instability, as measured by the number of revolutions and political assassinations in a country, decreases GDP growth per capita. For example, with greater political stability, South Korea’s growth rate would have been 6.25 percent per year, rather than 5.25 percent, from 1960 to 1985.

9. As measured by assassinations and revolutions, political instability can also be associated with the direct destruction of a country’s capital stock. This, by itself, can certainly reduce a country’s growth rate.
More dramatically, El Salvador may have lost almost 7 percentage points per year in per capita growth because of its extreme political instability.9

If political instability has a negative influence on growth, how does the size of government affect economic growth? Barro (1989, 1990, 1991) finds that the larger the share of government spending (excluding defense and education) in total GDP, the lower are growth and investment. Barro also finds that government investment has no statistically significant effect on economic growth. A government may attempt to increase private productivity through government spending, but the evidence suggests it has no such effect and may even decrease growth. Growth appears to fall with higher government spending because of lower private savings and because of the distortionary effects of taxation and government expenditure programs. Figure 7 shows the negative relationship between per capita growth and the share of government consumption in GDP. Held constant are income levels in 1960, as well as the level of education and indicators of political stability. As Figure 7 shows, increasing the share of government consumption in GDP from 10 percent to 15 percent would decrease economic growth about 0.6 percentage point per year.

International trade. Do countries that are open to international trade grow faster than closed economies? Evidence suggests that the answer is yes. From 1960 to 1985, economies that pursued outward-oriented, pro-trade policies—such as the four so-called Asian Tigers (Singapore, Hong Kong, South Korea, and Taiwan)—experienced growth rates between 8 percent and 10 percent a year. In contrast, the relatively closed economies of Africa and Latin America experienced growth rates rarely exceeding 5 percent a year. Ben-David (1991) finds that when countries in Europe joined the European Community and dropped their trade barriers, incomes increased and approached those of the wealthier nations.
A country open to international trade may experience faster technological progress and increased economic growth because the cost of developing new technology falls as more high-tech goods become available. In other words, trade increases growth because it makes a greater variety of products and technologies available. De Long and Summers (1991) find that relatively closed countries with high effective rates of protection have productivity growth rates that, on average, are 1.1 percentage points below those of other countries. Roubini and Sala-i-Martin (1991) find that a country that moves from a strongly outward-oriented trade regime to a strongly inward-oriented trade regime would experience a 2.5-percentage-point decrease in its annual growth rate. Gould and Ruffin (1993) attempt to distinguish between human capital as an input to production and human capital as the source of long-term growth in open and closed trading regimes. They find that when human capital, as measured by literacy rates, is relatively high, open economies experience growth rates 1 to 2 percentage points higher than the growth rates of closed economies.

Equipment investment. De Long and Summers (1991) have argued that equipment investment has potentially large effects on economic growth. They explain that new technologies have tended to be embodied in new types of machines. For example, at the end of the eighteenth century, steam engines were necessary for steam power, and automatic textile manufacture required power looms and spinning machines. In the early twentieth century, assembly-line production was unthinkable without heavy investments in the new generations of high-precision metal-shaping machines that made parts interchangeable and assembly lines possible.
In examining a cross-sectional distribution of growth rates in the post–World War II period, De Long and Summers find evidence suggesting that investments in machinery and equipment are a strategic factor in growth and possibly carry large positive benefits in generating further technological progress. Holding constant such factors as relative labor productivity, labor force growth, school enrollment rates, and investment other than in machinery and equipment, De Long and Summers find that each extra percentage point of total output devoted to investment in machinery and equipment is associated with an increase of 0.26 percentage point per year in economic growth. Other investment also has a positive impact on growth, but the effect is only one-fourth as large as that for machinery investment.10

Overview of factors behind growth. Table 4 summarizes factors that have been shown to influence growth rates. As the table indicates, factors that are associated with increasing human or physical capital investment tend to enhance technological progress and economic growth. On the other hand, factors that reduce incentives to invest, or interfere with well-functioning markets, tend to reduce growth.

Conclusion

For centuries, economists have been trying to answer questions about what determines economic growth and to make predictions about the future. Malthus, an economist who wrote in the late eighteenth century, predicted that expanding population growth combined with limited resources and declining productivity would result in only a subsistence income. Certainly, in the slowly growing agrarian era in which Malthus lived, it would have seemed impossible for the land to provide for everyone with unbounded plenitude. However, with the technological advances of the past century, it is difficult to be pessimistic. New products appear to beget other products, so technology seems to be advancing at ever-increasing rates.
Endogenous growth literature arose out of the desire to explain why, over long periods, economic growth appears to be accelerating and why some countries grow faster than others. The traditional Solow model left unanswered too many questions about growth differentials across countries and the mechanism of technological progress.

10. Whether equipment investment generates positive externalities that influence technological progress is subject to some debate. Auerbach, Hassett, and Oliner (1992) argue that economic growth due to equipment investment is completely consistent with the basic Solow model.

Table 4: Determinants of Economic Growth

Growth enhancing:
  Schooling, education investment (1, 2)
  Capital savings, investment (2)
  Equipment investment (3)
  Level of human capital (1, 6)

Growth reducing:
  Government consumption spending (1)
  Political, social instability (1)
  Trade barriers (3, 4, 5, 6)
  Socialism (1)

Sources: (1) Barro (1991); (2) Mankiw, Romer, and Weil (1992); (3) De Long and Summers (1991); (4) Ben-David (1991); (5) Roubini and Sala-i-Martin (1991); (6) Gould and Ruffin (1993).

Technological progress, however, is what ultimately determines growth, and growth determines whether our grandchildren will have better lives than ours. We are just beginning to understand theoretically and empirically the mechanisms of economic growth, and much work has yet to be done. But so far, there appears to be a strong relationship between investment, particularly human capital investment, and growth. Other factors also are positively related to investment and growth, such as political stability, well-defined property rights, equipment investment, low trade barriers, and low government consumption expenditures. These findings are consistent with the long-run growth predictions of endogenous growth models but are also consistent, in the short run, with Solow models. It may be several decades before we have enough detailed long-run data to distinguish clearly between these theories.
In the meantime, maintaining policies consistent with long-run growth can have significant benefits.

Appendix: Endogenous Growth Model with Innovation

This Appendix explains the basic endogenous growth model found in Grossman and Helpman (1991). Suppose there are n products. To simplify, each product sells for the same price and has the same cost of production. Each product is the property of a single firm. We assume that each unit requires only one unit of labor so that the marginal cost of production is simply the wage rate, w. Every firm sets a price, p, that is the same markup over costs,

(A.1) p = w(1/α),

where 1/α is the markup. The parameter α is between 0 and 1. In effect, α is the cost of production per dollar’s worth of the product. Accordingly, the profit on $1 worth of sales will be (1 – α). If the economy sells $E worth of products, then the total profit of all n firms is

(A.2) Π = E(1 – α).

Research and development take place in the form of new products. Firms invent new products in an effort to capture some fraction of the profits given in (A.2). It takes a units of labor to invent a new product. Accordingly, firms will enter the market as long as the present value of future profits, called v, exceeds innovation costs, wa (that is, v ≥ wa). Firms will enter the market until w rises or v falls up to the point that

(A.3) wa = v.

The rate of growth of new products is g = ∆n/n. Each firm’s profit is E(1 – α)/n; accordingly, the profit of any existing firm falls as new firms develop new products. The stock market valuation of any set of n firms must be reduced by the growth rate, g. If g is zero, the aggregate value of all n firms would be vn = E(1 – α)/r, where r is the rate of interest. But if g is greater than zero, then the aggregate value of the stock market is

(A.4) vn = E(1 – α)/(r + g).

Combining (A.1) through (A.4), we have

(A.5) p = v/(aα) = E(1 – α)/[aαn(r + g)].
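The derivation continues below and arrives at the closed-form growth rate (A.10), g = L(1 − α)/c − αr. As a numerical sketch, the snippet below evaluates that equation for one set of illustrative parameter values (our assumptions, not the article’s) and checks the answer against the labor-market condition:

```python
# Numerical sketch of the Appendix model; all parameter values are assumptions.
alpha = 0.8    # production cost per dollar of sales, so the markup is 1/alpha (A.1)
r = 0.05       # interest rate
L = 50.0       # total labor force
c = 100.0      # innovation-cost constant: a = c/n labor units per new product

# Final growth equation (A.10): g = L(1 - alpha)/c - alpha*r.
g = L * (1 - alpha) / c - alpha * r
print(f"g = {g:.2%}")                          # 6.00% growth in the number of products

# Check: research labor (ang = c*g) plus production labor (E/p, from A.7)
# must exhaust L, using a*n = c.
research = c * g
production = c * alpha * (r + g) / (1 - alpha)
assert abs(research + production - L) < 1e-9   # labor market clears
```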
To determine the rate of growth of new products, we need to know how much labor is devoted to their production and how much is devoted to research and development. The amount of labor devoted to production is the total output of products (because one unit of output requires one unit of labor). In turn, the total output of products is the physical sales volume, E/p. The amount of labor devoted to research and development is the number of new products, ng, multiplied by the labor required per new product, a, or ang. Thus, the growth rate, g, is determined by the equation

(A.6) ang + E/p = L,

where L is the total amount of labor available. Substituting (A.5) into (A.6) yields

(A.7) ang + aαn(r + g)/(1 – α) = L.

Dividing by an and multiplying by (1 – α), we get

(A.8) g(1 – α) + α(r + g) = L(1 – α)/an,

which equals

(A.9) g = L(1 – α)/an – αr.

The key assumption in the theory of endogenous growth is that there are technological spillovers. A simple way of capturing this idea is to let the cost of invention fall as human knowledge accumulates. In this model, human knowledge can be regarded as the number of products, n. Accordingly, it is assumed that a = c/n, where c is a positive constant. Substituting this into (A.9) yields the final growth equation:

(A.10) g = L[(1 – α)/c] – αr.

An important implication is that the larger the stock of people capable of carrying out research and development, L, the larger the rate of growth. Because human capital is growing, the endogenous growth model implies accelerating growth rates.

References

Auerbach, Alan J., Kevin A. Hassett, and Stephen D. Oliner (1992), “Reassessing the Social Returns to Equipment Investment,” Working Paper Series, no. 129 (Board of Governors of the Federal Reserve System, Division of Research and Statistics, Economic Activity Section, December).

Barro, Robert J.
(1991), “Economic Growth in a Cross Section of Countries,” Quarterly Journal of Economics 106 (May): 407–43.

——— (1990), “Government Spending in a Simple Model of Endogenous Growth,” Journal of Political Economy 98 (October, pt. 2): S103–25.

——— (1989), “A Cross-Country Study of Growth, Saving, and Government,” NBER Working Paper Series, no. 2855 (Cambridge, Mass.: National Bureau of Economic Research, February).

———, and Xavier Sala-i-Martin (1992), “Convergence,” Journal of Political Economy 100 (April): 223–51.

Ben-David, Dan (1991), “Equalizing Exchange: A Study on the Link Between Trade Liberalization and Income Convergence” (University of Houston, October, Photocopy).

———, and David H. Papell (1993), “The Great Wars, the Great Crash, and the Unit Root Hypothesis” (University of Houston, January, Photocopy).

De Long, J. Bradford, and Lawrence H. Summers (1991), “Equipment Investment and Economic Growth,” Quarterly Journal of Economics 106 (May): 445–502.

Economic Report of the President (1992) (Washington, D.C.: U.S. Government Printing Office, February).

Gordon, Robert J. (1992), “Measuring the Aggregate Price Level: Implications for Economic Performance and Policy,” NBER Working Paper Series, no. 3969 (Cambridge, Mass.: National Bureau of Economic Research, January).

Gould, David, and Roy J. Ruffin (1993), “Human Capital Externalities, Trade, and Economic Growth,” Federal Reserve Bank of Dallas Research Paper no. 9301 (Dallas, January).

Grossman, Gene M., and Elhanan Helpman (1991), Innovation and Growth in the Global Economy (Cambridge: MIT Press).

King, Robert G., Charles I. Plosser, and Sergio T. Rebelo (1988), “Production, Growth and Business Cycles: I. The Basic Neoclassical Model,” Journal of Monetary Economics 21 (March/May): 195–232.

Levine, Ross, and David Renelt (1992), “A Sensitivity Analysis of Cross-Country Growth Regressions,” American Economic Review 82 (September): 942–63.

Lucas, Robert E., Jr.
(1988), “On the Mechanics of Economic Development,” Journal of Monetary Economics 22 (July): 3–42.

——— (1987), Models of Business Cycles (Oxford and New York: Basil Blackwell).

Maddison, Angus (1991), Dynamic Forces in Capitalist Development: A Long-Run Comparative View (Oxford and New York: Oxford University Press).

Mankiw, N. Gregory, David Romer, and David N. Weil (1992), “A Contribution to the Empirics of Economic Growth,” Quarterly Journal of Economics 107 (May): 407–37.

Panati, Charles (1987), Panati’s Extraordinary Origins of Everyday Things (New York: Harper and Row, Perennial Library).

Plosser, Charles I. (1992), “The Search for Growth” (University of Rochester, August, Photocopy).

Quah, Danny (1990), “Galton’s Fallacy and Tests of the Convergence Hypothesis” (Massachusetts Institute of Technology, May 10, Photocopy).

Ramey, Garey, and Valerie A. Ramey (1991), “Technology Commitment and the Cost of Economic Fluctuations,” NBER Working Paper Series, no. 3755 (Cambridge, Mass.: National Bureau of Economic Research, June).

Romer, Paul M. (1990), “Endogenous Technological Change,” Journal of Political Economy 98 (October, pt. 2): S71–S102.

——— (1986), “Increasing Returns and Long-Run Growth,” Journal of Political Economy 94 (October): 1002–37.

Roubini, Nouriel, and Xavier Sala-i-Martin (1991), “Financial Development, the Trade Regime, and Economic Growth,” NBER Working Paper Series, no. 3876 (Cambridge, Mass.: National Bureau of Economic Research, October).

Schmookler, Jacob (1966), Invention and Economic Growth (Cambridge: Harvard University Press).

Schumpeter, Joseph A. (1950), Capitalism, Socialism, and Democracy, 3d ed. (New York: Harper and Brothers).

Solow, Robert M. (1956), “A Contribution to the Theory of Economic Growth,” Quarterly Journal of Economics 70 (February): 65–94.
Summers, Robert, and Alan Heston (1991), “The Penn World Table (Mark 5): An Expanded Set of International Comparisons, 1950–1988,” Quarterly Journal of Economics 106 (May): 327–68.

United Nations, Department of Economic and Social Affairs (1971), World Economic Survey, 1969–1970, E/4942 ST/ECA/141.

Wynne, Mark A. (1992), “The Comparative Growth Performance of the U.S. Economy in the Postwar Period,” Federal Reserve Bank of Dallas Economic Review, First Quarter, 1–16.

The Pricing of Natural Gas in U.S. Markets

Stephen P. A. Brown, Assistant Vice President and Senior Economist, Federal Reserve Bank of Dallas
Mine K. Yücel, Senior Economist and Policy Advisor, Federal Reserve Bank of Dallas

Recent patterns in natural gas prices have raised public concern about the pricing of natural gas in U.S. markets (Johnson 1992). In recent years, prices paid by industrial and electrical end users have fallen more than wellhead prices, while residential and commercial prices have fallen less than wellhead prices (Yücel 1991).1 The lack of uniform changes in natural gas prices may arise from differences in end users and the market institutions that serve them. Industrial and electrical users of natural gas can switch easily between oil products and natural gas, while residential and commercial users cannot. Industrial and electrical users generally bypass local distribution companies (LDCs), relying heavily on spot supplies in a competitive market served by brokers and pipeline companies. In contrast, residential and commercial users typically purchase their gas from LDCs, which earn a regulated rate of return and obtain their supplies under long-term contracts.

These observations about U.S. natural gas markets lead us to ask two questions. Do differing characteristics in end users and the market institutions serving them lead to differences in pricing behavior?
Are changes in natural gas prices uneven in the long run, or is popular concern about natural gas prices unwarranted? To answer these questions, we examined econometrically how price shocks are transmitted across various markets for natural gas.

Natural Gas Markets and Prices

As natural gas journeys downstream from the wellhead, it travels through collection systems, pipelines, and local distribution systems before it reaches its consumers. Natural gas prices are observed in six separate markets. Prices in these markets include wellhead, city gate, and four end-use prices. End-use prices are identified by the characteristics of the final customer—that is, as residential, commercial, industrial, and electrical prices for natural gas.

Natural gas is first sold at the wellhead, where it is produced. Both pipeline companies and brokers use the collection and pipeline systems to transport gas from the field to their customers, with brokers using the pipelines as contract carriers. Pipeline companies and brokers sell their natural gas directly to some end users and to LDCs, which pay the city gate price. In turn, the LDCs distribute gas throughout localities and sell it to additional end users.

When comparing end-use markets for natural gas, several differences stand out. Industrial and electrical users of natural gas generally can switch easily between fuels to seek the lowest cost energy source. As a consequence, most of these end users rely heavily on spot supplies purchased directly from pipeline companies and brokers. For the most part, these suppliers seem to behave competitively in serving this market (Brown and Yücel 1993). In contrast, most residential and commercial consumers are tied to a single fuel. These end users purchase their natural gas from LDCs, which earn a regulated rate of return and obtain much of their gas under long-term contract. (For a discussion of why electrical and industrial end users rely more heavily on spot markets for natural gas than commercial and residential customers, see the box titled “Development of a Spot Market for Natural Gas.”)

To some extent, differing reliance on spot and contract supplies may account for the long-term difference in the way price shocks are transmitted through the market for natural gas. When average wellhead prices change, spot prices generally change more than those specified in long-term contracts. Thus, when average wellhead prices fall, as has been the trend since 1985, spot prices generally fall more than those specified in long-term contracts. The extent to which the supply in a market comprises spot gas determines how responsive its prices are to changes in the average wellhead price. With a greater than average reliance on spot supplies, electrical and industrial customers stand to see a change in their gas prices that is greater than the market average. With a less than average reliance on spot supplies, commercial and residential customers stand to see a change in their gas prices that is less than the market average.

Economic Review — Second Quarter 1993

The authors thank Nathan Balke, Phil Drake, Tom Fomby, Bill Gruben, Shengyi Guo, Joe Haslag, Tim Smith, Lori Taylor, and Mark Wynne for helpful comments, but do not implicate them in the conclusions. Shengyi Guo provided able econometric and programming assistance. The views expressed are those of the authors and should not be attributed to the Federal Reserve Bank of Dallas or to the Federal Reserve System.

1 Electrical utilities use natural gas to generate electricity.

2 During the estimation period, there were extensive changes in federal regulation of the natural gas pipeline industry.
These changes raise concern about estimating stable relationships between the wellhead price and the electrical, industrial, and city gate prices. Because we do find cointegrating relationships in all cases, we treat the regulatory changes as primarily endogenous or irrelevant to the transmission of price shocks. Monthly data for average wellhead, city gate, electrical, industrial, commercial, and residential prices were obtained from the U.S. Department of Energy. The price series were deflated with a monthly GNP deflator series and then seasonally adjusted with the X–11 procedure in SAS. The monthly GNP deflator was obtained by using the Chow–Lin procedure on quarterly data. The consumer price and producer price indexes were used as monthly reference series in the Chow–Lin procedure.

3 See Balke (1991), Sims, Stock, and Watson (1990), and Stock and Watson (1988).

Empirical Analysis of Natural Gas Pricing

The institutional arrangements in natural gas markets suggest that seven pairs of natural gas prices have an upstream–downstream relationship. These are the wellhead price with the electrical, industrial, city gate, commercial, and residential prices, and the city gate price with the commercial and residential prices. For each pair of upstream and downstream prices, we conceptualize the long-run relationship through which price shocks are transmitted as a simple markup model of natural gas prices:

(1)  PD_t = a + βPU_t,

where PD is a downstream price for natural gas, and PU is an upstream price for natural gas. To examine how changes in price are transmitted across the markets for natural gas, we utilize time-series methods. In the absence of a specific theory to be tested, we use the statistical tests, together with identifying assumptions, to assess in which markets shocks to natural gas prices originate and how they are transmitted across natural gas markets. Our econometric work involves a number of steps.
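The markup model in equation 1, and the cointegration idea behind it, can be illustrated with a small simulation. This is a hedged sketch in Python with made-up parameters (a = 0.3, β = 1.2), not the authors' code or data; the article itself uses the augmented Dickey–Fuller, Phillips–Perron, and Johansen procedures rather than this toy OLS approach.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 600

# Upstream price: a random walk, so shocks to it are permanent (integrated).
pu = 2.0 + np.cumsum(rng.normal(0.0, 0.1, n))

# Downstream price follows the markup model of equation 1 plus stationary
# noise, so the two series share a stochastic trend (they are cointegrated).
pd_price = 0.3 + 1.2 * pu + rng.normal(0.0, 0.05, n)

# OLS estimate of (a, beta) in PD_t = a + beta * PU_t.
X = np.column_stack([np.ones(n), pu])
a_hat, beta_hat = np.linalg.lstsq(X, pd_price, rcond=None)[0]

# The residual is the deviation from the long-run relationship; if the pair
# is cointegrated, it stays bounded instead of wandering like the prices.
resid = pd_price - (a_hat + beta_hat * pu)
```

With β recovered near 1.2, a one-unit long-run change in the upstream price is met by a β change in the downstream price, which is how the article reads its cointegrating coefficients.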
We check whether the price series are stationary and find that all of them have stochastic trends (or are integrated). For each of the seven pairs of prices, we then test for cointegration and use a series of reduced-form vector-error-correction models to test for causality and adjustment to equilibrium error. We then identify the sources of long-run price shocks and calculate their persistence. Estimation and testing uses monthly data from January 1984 through March 1992.2

Development of a Spot Market for Natural Gas

As consumers of large quantities of energy, industry and electrical utilities have found it attractive to invest in the ability to switch between residual fuel oil and natural gas. This ability may have contributed to the development of spot markets for natural gas. Without the ability to switch fuels, an electrical or industrial user would find it undesirable to rely on spot supplies because the pipeline company or LDC providing connections to the natural gas transportation and distribution system could appropriate the end user’s capital investment.

Energy consumption involves relatively high capital costs. After an end user makes a capital investment that is specific to natural gas consumption, the pipeline company or LDC providing the connection could exploit monopoly power over the end user’s capital investment. Government regulation and long-term supply contracts negotiated before the investment is made are two ways to protect a capital investment that has a specific use. Competition among suppliers also protects the energy user’s capital investment and allows the user to rely on spot supplies (Ellig and High 1992). A spot market for natural gas may not provide enough competition to protect the capital investment of its end users.1 End users must still rely on a specific LDC or pipeline company for a hookup to the natural gas transportation and distribution system. The ability to switch fuels provides end users with competitive energy supplies, allowing them to rely on spot supplies of natural gas.

In contrast, most commercial and residential customers find it too expensive to invest in the ability to switch fuels, given their relatively small consumption of energy. Their low levels of consumption combined with their inability to switch fuels reduces the attractiveness of relying heavily on spot supplies of natural gas. In that sense, LDCs protect their customers from potential upstream monopolies by obtaining a greater share of their natural gas supplies under long-term contract. In turn, state governments regulate LDCs, giving them a regulated rate of return and preventing them from exercising monopoly power over their customers. Combined with the regulators’ concern for security of supply, however, this regulated rate of return may induce LDCs to overcommit to long-term contracts (Lyon 1990).

1 The spot market for gas creates a competitive demand for the transportation and distribution of natural gas, protecting the capital investments of the pipeline and LDCs involved in contract carriage.

Integration. As an initial step in our econometric work, we check whether our price series are integrated or stationary. A time series that is integrated is said to have a stochastic trend (or unit root). Identifying a series as an integrated, nonstationary series means that any shock to the series will have permanent effects on it. Unlike a stationary series, which reverts to its mean after a shock, an integrated time series does not revert to its preshock level. Applying conventional econometric techniques to an integrated time series can give rise to misleading results.3 Therefore, we use both augmented Dickey–Fuller and Phillips–Perron tests to test for stochastic trends. We find that all price series are integrated of order one—that is, the first differences of all series are stationary.4

Cointegration. After determining that each price series is integrated of order one, we test each of the seven pairs of natural gas prices described above for cointegration. Two integrated time series are cointegrated if they move together in the long run. Cointegration implies a stationary long-run relationship between the two series. As such, the cointegrating term provides information about the long-run relationship. If cointegration is not accounted for, any model involving the two cointegrated variables could be misspecified, and/or the parameter estimates could be inefficiently estimated.5 Therefore, we employ the Johansen procedure to estimate the cointegrating relationship between pairs of upstream and downstream prices.6

4 In other words, shocks to first differences of all series are not permanent.

5 See Engle and Yoo (1987).

Table 1
Cointegration of Upstream and Downstream Prices

Price pairs                      Cointegrating       Significance
Upstream      Downstream         relationship (β)    (β = 1)
Wellhead      Electrical             1.401              .002
Wellhead      Industrial             1.453              .000
Wellhead      City Gate              1.186              .000
Wellhead      Commercial             1.294              .003
Wellhead      Residential            1.148              .120
City Gate     Commercial             1.079              .282
City Gate     Residential             .938              .447

We find a linear cointegrating relationship (of the form PD = βPU ) between all seven pairs of prices considered. Because each estimated cointegrating relationship is stationary, the cointegrating terms provide an efficient estimate of the long-run relationships between upstream and downstream prices. If a one-unit change in the upstream price occurs over the long run, it will be met by a β change in the downstream price over the long run.
Conversely, if a one-unit change occurs in the downstream price over the long run, it will be met by a 1/β change in the upstream price. As Table 1 shows, the estimated βs are unequal, signifying that a one-unit change in an upstream price affects some downstream prices more than others. It is not possible, however, to directly test whether the differences between the βs are statistically significant. To assess whether the estimated βs differ significantly from each other, we use an indirect test for which we estimate cointegrating relationships between all possible price pairs. With these additional estimates, we judge which βs are or are not equal to each other. For example, to assess whether the β in the wellhead–electrical price pair (βWE ) is different from the β in the wellhead–residential price pair (βWR ), we estimate a cointegrating relationship between the electrical–residential price pair (βER ) and check to see whether it has a value equal to one. A value of one for βER would imply βWE = βWR. A nonunitary value would imply βWE ≠ βWR.7

We find the βs between the wellhead price and the electrical and industrial prices are the same as each other but greater than the β between the wellhead price and the city gate price. We also find that the βs between the wellhead price and the commercial and residential prices are equal to each other but less than the βs between the wellhead price and the electrical and industrial prices.

6 The Johansen procedure is a maximum likelihood method. We chose it over several other procedures because it provides the most efficient estimates of the cointegrating relationships. In addition, it provides estimates of the number of cointegrating relationships.

7 Let PE = βWE PW, PR = βWR PW, and PR = βER PE. Then PR = (βWR /βWE )PE, and βER = βWR /βWE. It follows that a value of one for βER implies βWE = βWR. A nonunitary value for βER implies βWE ≠ βWR.
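The algebra in footnote 7 can be checked directly against the Table 1 estimates. The implied coefficient below is our back-of-the-envelope arithmetic, not a value reported in the article:

```python
# Cointegrating coefficients from Table 1 (against the wellhead price).
beta_we = 1.401   # wellhead -> electrical
beta_wr = 1.148   # wellhead -> residential

# Footnote 7: PE = beta_we * PW and PR = beta_wr * PW imply
# PR = (beta_wr / beta_we) * PE, so the electrical-residential
# cointegrating coefficient should be beta_er = beta_wr / beta_we.
beta_er_implied = beta_wr / beta_we   # about 0.82

# A beta_er of one would mean beta_we = beta_wr; a value well below one
# is the indirect evidence that the two wellhead betas differ.
```

This is why estimating the electrical–residential pair and testing its β against one serves as an indirect test of whether βWE and βWR are equal.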
At best, the estimated βs only partially support public concerns about uneven changes in natural gas prices. Differences in the estimated βs do show that uneven changes in natural gas prices are maintained in the long run. Specifically, a permanent change in the wellhead price is accompanied by more extreme changes in the electrical and industrial prices for natural gas than in the commercial and residential prices for natural gas. Nonetheless, our estimates suggest that the recent pattern, in which residential and commercial prices for natural gas have fallen less than the wellhead price, is unlikely to persist. The βs between the wellhead price and the commercial and residential prices are estimated at greater than or equal to one, indicating that residential and commercial prices for natural gas change by at least as much as the average wellhead price in the long run.

For the three pairs of prices involving the wellhead price with the electrical, industrial, and city gate prices, the βs are greater than one. These βs mean that the markups over the wellhead price taken by the pipeline companies on electrical, industrial, and city gate prices increase as natural gas prices rise and decrease as natural gas prices fall. These βs can be consistent with either normal responses to shocks in demand, or shocks to supply coupled with increasing returns to scale. Because price shocks can arise from either shocks to supply or demand, further interpretation of the βs requires additional information. For natural gas, demand shocks can originate in factors such as changing oil prices, economic activity, weather, technology, and government regulation of energy consumption. Supply shocks can originate in factors such as changing production technology, geophysical knowledge, and government policy.
Using this information for the period of analysis, we view changes in demand to be a more important source of initial shocks to natural gas markets than changes in supply (Brown and Yücel 1993). Given our view, demand shocks account for long-run movements in natural gas prices (see “Long-Run Sources of Variance,” below). As such, the estimated βs between the wellhead price and the electrical, industrial, and city gate prices are consistent with a normal response to shocks in end-use demand. As end-use demand is increased, the pipeline companies experience rising costs and/or are able to increase profits. As end-use demand is decreased, the pipeline companies experience falling costs and/or are forced to reduce profits. The likelihood of variable profitability is consistent with the fact that pipeline companies purchase more gas under long-term contract than they sell under long-term contract. Given the existing contracts, prices at which pipelines sell natural gas would vary more in the face of fluctuating demand than the prices they pay for natural gas. Data for the estimation period generally show falling natural gas prices and declining profitability for pipeline companies.8

The differences we see in the βs most likely reflect how spot and contract prices respond to changing market conditions. Spot prices adjust more readily, and industrial users rely more heavily on spot supplies than do LDCs. Differences in the βs also may reflect a lack of incentive for LDCs to pursue the cheapest sources of gas as prices are falling because their rate of return is regulated, and their customers cannot easily switch fuels. The βs between the city gate price and the commercial and residential prices for natural gas are not significantly different from unity. These estimates probably reflect a regulated rate of return for LDCs and a direct pass-through of gas price changes.

Causality.
A causal relationship between two variables implies that changes in one variable lead to changes in the other. To test for a predictive relationship between the variables, we perform Granger causality tests on each of the seven pairs of upstream and downstream prices. Because all our price series are cointegrated, we account for cointegration by specifying an error-correction model in which changes in the dependent variable are expressed as changes in both the independent variable and dependent variable, plus an error-correction term. For cointegrated variables, the error-correction term is the deviations from the long-run cointegrating relationship between the variables. The coefficient on the equilibrium error reflects the extent to which the dependent variable adjusts during a given period to deviations from the cointegrating relationship that occurred in the previous period.9

The tests involve estimating a reduced-form vector-error-correction model comprising the following set of equations for each pair of prices:

(2)  ΔPU_t = Σ_{i=1}^{n} a_i ΔPD_{t−i} + Σ_{j=1}^{n} b_j ΔPU_{t−j} + α1 CI_{t−n−1} + µ1_t,

(3)  ΔPD_t = Σ_{i=1}^{n} c_i ΔPU_{t−i} + Σ_{j=1}^{n} d_j ΔPD_{t−j} + α2 CI_{t−n−1} + µ2_t,

where PU is the upstream price, PD is the downstream price for natural gas, CI is the errors in the cointegrating relationship (PD_t − βPU_t ),10 a_i, b_j, c_i, d_j, α1, α2 are parameters to be estimated, and µ1_t and µ2_t are white noise residuals.11 The coefficients α1 and α2 represent the adjustment to equilibrium error. Causality runs from the downstream price to the upstream price if α1 and the a_i jointly are statistically different from zero. Similarly, causality runs from the upstream price to the downstream price if α2 and the c_i are jointly statistically different from zero. If both sets of coefficients are significantly different from zero, causality is bidirectional.

8 An alternative explanation for declining pipeline profitability is structural change. Since 1985, the Federal Energy Regulatory Commission has changed the role of pipelines toward one of open access and contract carriage. These changes helped foster the development of a spot market for natural gas served by brokers and the pipeline companies. This explanation is inconsistent with finding cointegration unless one can view the structural change as endogenous to market pressure brought to bear by falling demand.

9 See Engle and Granger (1987).

10 The error-correction term is included because all pairs of upstream and downstream prices are cointegrated with each other.

11 The value of n was set in the Johansen procedure to assure white noise in the residuals.

Table 2
Causality and Adjustment to Equilibrium Error
(Significance)

Price pairs                      Causality                 Adjusts to error*
Upstream     Downstream      PU to PD    PD to PU          PU        PD
Wellhead     Electrical        .025        .000           .014      .542
Wellhead     Industrial        .007        .000           .001      .285
Wellhead     City Gate         .005        .000           .000      .783
Wellhead     Commercial        .000        .166           .082      .001
Wellhead     Residential       .002        .320           .155      .002
City Gate    Commercial        .000        .758           .457      .006
City Gate    Residential       .005        .600           .570      .011

* The significance of errors in the cointegrating relationship in the respective upstream and downstream price equations indicates adjustment to the equilibrium error.

As shown in Table 2, shocks in the wellhead, electrical, industrial, and city gate prices cause shocks in all other prices with which they are paired. If we abstract from shocks that might be initiated by pipeline companies or LDCs, shocks originating in electrical and industrial prices most likely reflect changes in the prices of competing fuels. Shocks originating in the city gate price may reflect the LDCs’ response to quantity shocks in their end-use markets.
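A one-lag version of equation 3 can be sketched on simulated data. This hedged toy illustration uses made-up parameters (β = 1.2, α2 = −0.3, and β treated as known); the article instead estimates the lag length and β with the Johansen procedure. In the simulation only the downstream price corrects the equilibrium error, the pattern Table 2 reports for the commercial and residential pairs:

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta = 800, 1.2

# Simulate: PU is a random walk; PD corrects 30 percent of last period's
# equilibrium error CI = PD - beta * PU (so alpha2 = -0.3, alpha1 = 0).
pu = np.zeros(n)
pd_price = np.zeros(n)
for t in range(1, n):
    pu[t] = pu[t - 1] + rng.normal(0.0, 0.1)
    ci = pd_price[t - 1] - beta * pu[t - 1]
    pd_price[t] = pd_price[t - 1] - 0.3 * ci + rng.normal(0.0, 0.05)

# One-lag version of equation 3:
# dPD_t = c1*dPU_{t-1} + d1*dPD_{t-1} + alpha2*CI_{t-1} + u_t
dpu, dpd = np.diff(pu), np.diff(pd_price)
ci_lag = pd_price[1:-1] - beta * pu[1:-1]
X = np.column_stack([dpu[:-1], dpd[:-1], ci_lag])
c1, d1, alpha2 = np.linalg.lstsq(X, dpd[1:], rcond=None)[0]
```

A significantly negative α2 is the "adjusts to error" evidence in Table 2: the downstream price moves back toward the cointegrating relationship after a deviation.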
For any particular downstream price, shocks to the wellhead price may reflect changes in drilling technology, changes in reserve estimates, contracts fixed to oil prices, and changes in demand by other downstream buyers. Price shocks in commercial and residential prices do not cause shocks in other prices. This finding most likely reflects how LDCs administer natural gas prices for commercial and residential users. In response to changing weather, fluctuating economic activity, or changing prices for competing fuels, the customers adjust their consumption of natural gas.12 Changes in consumption prompt LDCs to seek smaller supplies at lower prices or greater supplies at higher prices. Only as the city gate price paid by LDCs is changed, however, are price changes passed on to end users.

Adjustment to Equilibrium Error. If two variables are cointegrated, any movement away from the long-run cointegrating relationship will eventually be corrected, and the variables will move back into their long-run relationship. The adjustment could be through one or both of the variables. Which variable adjusts to the equilibrium error depends on many factors, including elasticities and market structure. In a cointegrated system (such as represented by equations 2 and 3), the presence of an error-correction term implies that the dependent variable adjusts to the equilibrium error. The coefficient on the equilibrium error, α, reflects the extent to which a given price variable reacts in the short run to deviations from its long-run relationship with another price variable. In the equations where α is significant, the dependent variable adjusts to deviations from the cointegrating relationship. In equations where α is not significant, the dependent variable does not adjust to deviations from the cointegrating relationship.
As shown in Table 2, the electrical and industrial prices do not adjust to errors in their equilibrium relationship with the wellhead price. Instead, the wellhead price adjusts. These findings are consistent with electrical and industrial demand being more elastic in the short run than is the supply of natural gas. A high short-run elasticity of demand reflects these end users’ ability to switch fuels. Similarly, the wellhead price adjusts to errors in its equilibrium relationship with the city gate price, but the city gate price does not adjust. The resistance of the city gate price may indicate some ability of commercial and residential customers to switch fuels, the ability of LDCs to foster competition between suppliers, or an inelastic supply of gas at the wellhead. From our institutional knowledge of the natural gas market, we can make a compelling case for a very inelastic supply of natural gas in the short run. Producers hesitate to vary production because doing so could disturb pressure in the well, which could reduce its eventual output or make production from it more costly. The adjustment of commercial and residential prices to errors in their equilibrium relationships with upstream prices is consistent with administered prices.

Impulse Response. To examine the dynamic properties of shocks to the price variables, we calculate impulse response functions. The impulse response function traces the effects and persistence of a shock on both the upstream and downstream prices. The persistence of a shock tells us how fast the system adjusts back to equilibrium. The faster a shock dampens, the quicker the adjustment.
We use the Choleski decomposition to calculate impulse response functions for each of the seven reduced-form vector-error-correction models.13 For each model, we analyze the effects of a one-time, standard deviation shock to the first difference of each price in the pair.14 We trace the effects of this impulse on the equilibrium error and the upstream and downstream prices. We find that the maximum impact generally occurs within one to three months of the shock.

12 Econometric evidence suggests that residential and commercial natural gas consumption do respond to changes in the prices of competing fuels (Bohi 1981). The response is not through fuel switching in the existing energy-using capital stock, however. The response comes through long-term changes in the energy-using capital stock.

13 The Choleski decomposition decomposes the residuals µ1_t and µ2_t into two sets of impulses that are orthogonal to each other. Orthogonalization allows one to take covariance between the residuals into account. The Choleski decomposition imposes a recursive structure on the system in which the ordering of the dependent variables is specified. If the covariance between the residuals is sufficiently high, the ordering can affect the results. We experimented using both changes in the upstream price and changes in the downstream prices first in the ordering.

14 This implies a permanent shock to the price.

Table 3
Impulse Responses in Upstream and Downstream Prices
(Based on Choleski decomposition)

Month in which shock dampens to 5 percent of maximum

Price pairs                       ∆PD first             ∆PU first
Upstream     Downstream      shock to  shock to    shock to  shock to
                               ∆PD       ∆PU         ∆PD       ∆PU
Wellhead     Electrical          8        19          16        20
Wellhead     Industrial         19        16          17        17
Wellhead     City Gate           *         *           *         *
Wellhead     Commercial         11        11          11         9
Wellhead     Residential        10        10          10        10
City Gate    Commercial         24        26          24        27
City Gate    Residential        14        16          15        16

* Shocks do not dampen.
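The impulse response calculation can be sketched for a bivariate VAR(1) in price changes. The coefficient matrix and residual covariance below are made-up illustrative numbers (the article's models also carry error-correction terms, omitted here); the dampening check mirrors the 5-percent criterion used in Table 3.

```python
import numpy as np

# Illustrative stable VAR(1) in price changes: x_t = A @ x_{t-1} + u_t,
# with a made-up coefficient matrix and residual covariance.
A = np.array([[0.5, 0.1],
              [0.2, 0.4]])
sigma = np.array([[1.0, 0.4],
                  [0.4, 1.0]])

# Choleski decomposition: sigma = P @ P.T turns correlated residuals into
# orthogonal impulses; ordering the first variable first lets its shock
# move both prices contemporaneously.
P = np.linalg.cholesky(sigma)

# Responses of both prices to a one-standard-deviation orthogonalized
# shock in the first variable, traced over 36 months.
shock = P[:, 0]
responses = np.array([np.linalg.matrix_power(A, h) @ shock for h in range(36)])

# First month in which both responses fall below 5 percent of their peak,
# the persistence criterion reported in Table 3.
peak = np.abs(responses).max(axis=0)
damped = np.where((np.abs(responses) < 0.05 * peak).all(axis=1))[0][0]
```

In this toy system the shock dies out within a year; Table 3's estimated models take eight to twenty-seven months, except for the wellhead–city gate pair, whose shocks do not dampen.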
In six cases, deviations from the long-run equilibrium relationship between the prices dampen below 5 percent of their peak value within eight to twenty-seven months after a shock, as shown in Table 3.15 In four cases, shocks dampen in about one and a half to two years. Deviations from the equilibrium relationships between the wellhead price and the commercial and residential prices appear to dampen in less than a year. This quick dampening appears somewhat anomalous given the slower adjustment to equilibrium in the relationships between the city gate price and the commercial and residential prices. Nonetheless, the quick adjustment is reasonable because in the long run, wellhead, commercial, and residential prices adjust to shocks originating in city gate prices.16

15 In one case, the wellhead price–city gate price relationship, the shocks did not dampen.

16 See “Long-Run Sources of Variance,” below.

17 See footnote 13.

Long-Run Sources of Variance. To find out which price shocks are the most likely sources of variance, we use the Choleski decomposition to calculate the variance decomposition. For given time horizons, the variance decomposition apportions the stochastic variability in a given price to shocks in itself and the price with which it is paired. Given our impulse response analysis, we use sixty months to represent the long run. As shown in Table 4, the calculated source of variance is generally invariant to the ordering of the variables in the Choleski decomposition.17 The two models in which the wellhead price is matched with electrical and industrial prices are an exception. If the change in wellhead price is placed first, both shocks to the wellhead price and the end-use prices are equal sources of variability. If either the change in electrical or industrial prices is placed first, shocks to the end-use price account for more than 90 percent of the variance. In these cases, we prefer the ordering in which innovations in the end-use price are placed first.
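The variance decomposition can be sketched the same way: accumulate the squared orthogonalized moving-average coefficients out to the sixty-month horizon the article uses for the long run. The matrices are again made-up numbers for illustration, with the first-ordered variable's shock absorbing the contemporaneous covariance:

```python
import numpy as np

# Illustrative VAR(1) x_t = A @ x_{t-1} + u_t with residual covariance sigma.
A = np.array([[0.5, 0.1],
              [0.2, 0.4]])
sigma = np.array([[1.0, 0.4],
                  [0.4, 1.0]])
P = np.linalg.cholesky(sigma)   # orthogonalizes the shocks (ordering matters)

# Forecast-error variance decomposition at a sixty-month horizon:
# accumulate squared orthogonalized MA coefficients Psi_h = A^h @ P.
H = 60
contrib = np.zeros((2, 2))      # contrib[i, j]: variance of var i due to shock j
for h in range(H):
    psi = np.linalg.matrix_power(A, h) @ P
    contrib += psi ** 2

# Share of each variable's long-run forecast-error variance attributable
# to each orthogonalized shock; rows sum to one.
shares = contrib / contrib.sum(axis=1, keepdims=True)
```

Reordering the variables changes P, and hence the shares, which is why the article reports both orderings in Table 4 and states a preferred one.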
This ordering presumes that shocks are most likely to originate in the end-use prices, which fits our view that changes in demand were a more important source of shocks than changes in supply during our period of analysis (Brown and Yücel 1993). Shocks to demand may come from fluctuations in economic activity, changing oil prices, or other factors.

Table 4
Long-Run Sources of Variance in Upstream and Downstream Prices
(Percent; based on Choleski decomposition)

                                           ∆PU ordered first       ∆PD ordered first
Price pair                 Variance of    ∆PU shock  ∆PD shock    ∆PU shock  ∆PD shock
Wellhead–Electrical        ∆PD                50         50            3         97
                           ∆PU                46         54            5         95
Wellhead–Industrial        ∆PD                49         51            8         92
                           ∆PU                47         53            5         95
Wellhead–City Gate         ∆PD                27         73           25         75
                           ∆PU                25         75            4         96
Wellhead–Commercial        ∆PD                85         15           69         31
                           ∆PU                79         21           62         38
Wellhead–Residential       ∆PD                76         24           78         22
                           ∆PU                66         34           67         33
City Gate–Commercial       ∆PD                94          6           99          1
                           ∆PU                92          8           94          6
City Gate–Residential      ∆PD                96          4          100          0
                           ∆PU                87         13           88         12

With our preferred ordering, we find shocks to electrical and industrial prices to be the primary sources of variance over the long run in their pairings with wellhead prices. Our findings are consistent with the long-run supply of gas at the wellhead being fairly inelastic, and the long-run demand for natural gas by industrial and electrical users being fairly elastic. Shocks to commercial and residential prices are not the primary sources of variance in their pairings with the city gate price. We do find, however, that shocks to the city gate price are the primary source of variance over the long run in its pairing with the wellhead price. This finding suggests that LDCs drive city gate and wellhead prices in the face of fluctuating sales to end users and that the long-run supply of gas at the wellhead is fairly inelastic. Our evidence is consistent with administered prices in the commercial and residential markets for natural gas.
As consumers lower (increase) their consumption of natural gas, the LDCs pursue fewer (greater) natural gas supplies at lower (higher) prices. Only after the shock in the quantity of natural gas demanded is transmitted to the city gate price do commercial and residential prices for natural gas adjust to changes in the city gate price.

Summary and Conclusion

Our econometric evidence indicates that changes in natural gas prices are unequal in the long run. Nonetheless, all downstream prices change by at least as much as the average wellhead price. Statistically, residential and commercial prices change as much as the city gate price. In the face of persistent shocks, however, market institutions and market dynamics can lead to lengthy periods in which the residential and commercial prices of natural gas adjust less than the wellhead or city gate prices.

Electrical and industrial users of natural gas rely heavily on spot supplies and can switch fuels easily. Their ability to switch fuels may be related to the development of a spot market to serve them. Reliance on the spot market may explain why these end users have seen a greater reduction in natural gas prices than have the LDCs over the past seven years. The ability to switch fuels may account for electrical and industrial prices being the source of shocks in their relationships with the wellhead price. It also may explain why prices in these end-use markets are quick to adjust.

Commercial and residential customers cannot switch fuels easily and rely heavily on LDCs for their natural gas. The inability of these end users to switch fuels probably contributes to the reluctance of LDCs to purchase spot supplies of gas. Reliance on contract supplies may explain why the city gate price has not declined as much as electrical and industrial prices of natural gas over the past seven years. Furthermore, the LDCs administer prices in the commercial and residential markets under state regulation.
The administration of prices in these markets leads to slower adjustment in commercial and residential prices. Only after city gate prices can be reduced are commercial and residential prices for natural gas reduced.

Pipeline companies have been an integral part of the uneven change in natural gas prices. As natural gas prices have declined over the past seven years, electrical, industrial, and city gate prices have fallen more than the wellhead price, while pipeline profitability has been reduced. Our analysis suggests that the decline in pipeline profitability is associated with reduced demand for natural gas brought about by lower oil prices.

To summarize, public concern about recent movements in natural gas prices, in which residential and commercial prices have fallen less than wellhead prices, may be somewhat misplaced. Although uneven changes in natural gas prices are maintained in the long run, all downstream prices change by at least as much as the wellhead price. Uneven changes reflect differences in the end users and the market institutions that serve them. Our analysis indicates that if energy prices were to rise, pipeline profitability would rise, and commercial and residential end users would see smaller increases in natural gas prices than electrical and industrial end users. Compared with other natural gas prices, commercial and residential prices would be slow to rise.18

18 Data limitations prevent us from testing for asymmetric relationships.

References

Balke, Nathan S. (1991), “Modeling Trends in Macroeconomic Time Series,” Federal Reserve Bank of Dallas Economic Review, May: 19–33.

Barcella, Mary L. (1992), “Natural Gas Distribution Costs and the Economics of Bypass,” Annual North American Conference, International Association for Energy Economics, New Orleans, October 25–28.

Engle, Robert F., and Byung Sam Yoo (1987), “Forecasting and Testing in Co-integrated Systems,” Journal of Econometrics, 143–59.
Farmer, Richard D., and Phillip Tseng (1989), “Higher Old Gas Prices and the Staged Deregulation of the U.S. Gas Industry,” Energy Policy (December): 567–76. Bohi, Douglas R. (1981), Analyzing Demand Behavior: A Study of Energy Elasticities (Baltimore: Johns Hopkins University Press for Resources for the Future). Brown, S. P. A. (1984), “Natural Gas Pipelines: Rent Revealed,” in The Energy Industries in Transition: 1984 –2000, John Weyant and Dorothy Sheffield, eds. (Washington, D.C.: International Association of Energy Economists). ——— and Mine K. Yücel (1993), “Market Structure and the Pricing of Natural Gas,” Federal Reserve Bank of Dallas, work in progress. Canes, Michael E., and Donald E. Norman (1984), “Long-Term Contracts and Market Forces in the Natural Gas Market,” Journal of Energy and Development 10 (Autumn): 73–96. Ellig, Jerome, and Jack High, (1992), “Social Contracts and Pipe Dreams,” Contemporary Policy Issues 10 (January): 39–51. Engle, Robert F., and C.W.J. Granger (1987), “Cointegration and Error Correction: Representation, Estimation, and Testing,” Econometrica, (March): 251–76. Economic Review — Second Quarter 1993 Johnson, Robert (1992), “Fueling Anger: Natural Gas Prices Irk Both Producers, Users; Are Utilities at Fault?” Wall Street Journal April 2: A1. Lyon, Thomas P. (1990), “Spot and Forward Markets for Natural Gas,” Journal of Regulatory Economics (September): 299–316. Norman, Donald A., and F. Elizabeth Sowell (1988), “Risk and Returns in the Interstate Natural Gas Pipeline Industry.” American Petroleum Institute Research Study #044 (Washington, D.C.: A.P.I., July). Sims, Christopher A., James H. Stock, and Mark W. Watson (1990), “Inference in Linear Time Series Models with Some Unit Roots,” Econometrica, ( January): 113–44. Stock, James H., and Mark W. Watson (1988), “Variable Trends in Econometric Time Series,” Journal of Economic Perspectives 2 (Summer): 147–74. Yücel, Mine K. 
(1991), “A Closer Look at Natural Gas Prices,” Federal Reserve Bank of Dallas Southwest Economy, November/December: 7–8. 51 52 Federal Reserve Bank of Dallas Fiona D. Sigalla Beverly Fox Associate Economist Federal Reserve Bank of Dallas Assistant Economist Federal Reserve Bank of Dallas Investing for Growth: Thriving in the World Marketplace A Summary of the 1992 Southwest Conference L ong-term economic growth depends on investment today. That message reverberated throughout the Federal Reserve Bank of Dallas’ fifth annual conference on the Southwest economy. “The reality of free trade underscores potential weaknesses in the U.S. economy....As businesses in the United States become even more interwoven with businesses around the world, our success will be critically affected by the way we manage problems that reduce competitiveness,” cautioned Dallas Fed President Robert D. McTeer, Jr. Conference speakers expressed optimism about opportunities for growth when and if reforms under the North American Free Trade Agreement (NAFTA) and General Agreement on Tariffs and Trade (GATT) redefine the international marketplace. “The Southwest economy is in a position to benefit disproportionately from free trade,” said Gerald P. O’Driscoll, Jr. The health of the domestic economy, however, will determine the nation’s future. To capitalize on the opportunities presented by a larger market, the United States must overcome domestic impediments that affect our growth potential. The future, Admiral Bobby Inman explained, depends on what we do to prepare this economy to compete more effectively. Speakers identified several areas of the economy that would pay future dividends from investments in restructuring now. Ensuring free enterprise and improving human capital were the two most prominent examples. “For financial capital and entrepreneurship to be productive, there has to be investment in human capital, there has to be investment in...physical capital, infrastructure. 
There also has to be investment in...social capital, [in] developing the kind of society that enables an enterprise system to emerge,” Ernesto Cortes said.

The authors thank Rhonda Harris, Jerry O’Driscoll, and Harvey Rosenblum for their helpful comments and suggestions.

Investing in free enterprise

To maximize long-run U.S. economic growth, U.S. firms must thrive, which means they must take risks, innovate, and create. Rules and regulations, some speakers complained, have become a burden that inhibits innovation and risk-taking by distorting market incentives and encouraging rent-seeking behavior. “We are one of the few countries in the world not moving toward freer enterprise these days,” McTeer said. He cited Mexico, on the other hand, as a nation that sparked a dramatic increase in economic growth by restoring free enterprise (see the box titled “Mexico’s Investment in Free Enterprise”). To move back to free enterprise in the United States, policymakers must rethink economic incentives and government regulation. Conference participants proposed several strategies for reform: let domestic markets work, redefine government, make a commitment to long-term growth, establish credible policies, and allow free trade to let international markets work.

Mexico’s Investment in Free Enterprise

Mexico is reaping the rewards of its commitment to free enterprise and its investment in long-run economic growth. “At a time when our own economic policies leave much to be desired, Mexico’s policymakers are setting an example for the world,” said Dallas Fed President Robert D. McTeer, Jr. In 1982, Mexico was plagued by large fiscal deficits, unsustainable levels of external borrowing, a highly protected economy, rampant inflation, and heavy government regulation, explained Ariel Buira of Banco de Mexico, Mexico’s central bank. Mexico has rebounded after restructuring its economy through such market-based reforms as fiscal and monetary discipline, privatization, deregulation, and trade liberalization.

Monetary Discipline

Mexico’s success in reducing inflation expectations and long-term interest rates required a commitment to fiscal restraint. McTeer explained how Mexico benefited from international interdependence through the use of “external exchange rate discipline to reinforce...sound but temporarily painful domestic policies.” Buira agreed that Mexico was attempting to import some price stability by pegging its currency to the dollar, which has helped establish a credible monetary policy. The peso, Buira said, “now moves in a fairly narrow band and within that band can fluctuate freely.”

Fiscal Discipline

Mexico has successfully reduced its public debt and curbed its public spending. “The control of public finances has been the keystone of economic stabilization,” said Buira. At the beginning of its debt crisis, Mexico’s budget deficit was 17 percent of GDP. “Sustained efforts to correct a huge deficit led to an unprecedented fiscal adjustment of more than 15 percentage points of GDP in the period between 1982 and 1991.” According to Buira, Mexico’s fiscal adjustment is equivalent to five times that considered under the Gramm–Rudman–Hollings Act.

Privatization and Deregulation

Mexico’s far-reaching privatization program shrank the public sector and helped reduce the country’s public debt burden. “The process of divestiture of state enterprises has been intensified since 1989, with the privatization of the two major airlines, major mining companies, sugar mills, fisheries, the telephone company, all the commercial banks and steel companies, and a number of others,” Buira said. The country also undertook major regulatory reforms. “Deregulation has advanced on a number of fronts,” said Buira, citing the elimination of restrictions to entry and licensing requirements for telecommunications and land transportation and the simplifications of regulations in industries such as air transportation, mining, petrochemicals, and automobiles.

Trade Liberalization

Mexico’s far-reaching program of trade liberalization demonstrates the benefits of letting international markets work. In 1985, Mexico joined GATT, opened its economy to foreign investment, and reformed the regulatory framework for economic activity. “This was a major change in philosophy. The idea was that we would not be able to export unless we were able to import freely,” Buira explained. “Liberalization has stimulated both imports and exports. In fact, the growth of total imports is explained to a very considerable extent by imports of capital goods and intermediate inputs for export-oriented industries. Exports of manufactured goods have replaced oil as the country’s main foreign exchange earner,” he said. Buira was optimistic about the avenues that will be opened with NAFTA. “Since the liberalization of trade policy, trade between Mexico and the U.S. has more than doubled, to our mutual benefit. This will be further enhanced by the establishment of a free trade area in North America,” he said. “With implementation of this program of stabilization, structural adjustment, and liberalization, the economy has resumed growth after a decade of stagnation. These reforms have created the internal conditions for sustainable economic growth.”

Let domestic markets work. Government regulations can be beneficial when they organize society or force firms and individuals to be socially responsible. The highway code, for example, organizes traffic to safeguard the public. Environmental regulations force firms to be accountable for pollution they may cause. Some regulations, however, inhibit decision-making, distort investment, and transfer resources from one group to another. In many cases, firms and individuals are able to profit by subverting market forces and lobbying the government, in effect, to regulate competitors out of business. Such rent-seeking behavior differs from productive investment because it does not create wealth. “At best,” Harvey Rosenblum explained, “rent-seeking behavior redistributes wealth; at worst, it destroys wealth.” Rosenblum advocated restructuring regulations to institute market-based incentives. In banking, he said, regulation has often limited choice, increased costs, stifled innovation, and distorted investment. “The banking system has already felt the repercussions of regional shocks, yet geographic and product-line restrictions have limited banks’ ability to diversify their practices.” Bankers risking their own capital will make better financial decisions than examiners, regulators, legislators, and other government bureaucrats, he said. “The increased level of regulation that we have is not particularly conducive to better serving our customers in all cases,” said Linnet Deily. Regulations add to the cost of the services people buy, she explained, adding, “I think that we have gone overboard in response to consumerist activity without keeping the broader range of public interest in place.”

Redefining government. Regulations may not operate effectively because government is not working properly. “We have to get government to work,” said Donald Shuffstall. Tom Luce explained that government has yet to undergo the dramatic restructuring that has kept many firms in business. “If my organization and your organization had not adapted, we would not be here today.
The one institution...in our country that has not gone through that dramatic restructuring is government,” he said. Rosenblum noted that government often fails to focus on long-term economic growth: “Our country waits until problems are out of control before beginning to discuss solutions, then responds by applying Band-Aids to deal with symptoms rather than causes.” And, Luce added, “Too often, compromise has led to changes on the margins, which basically means that nothing fundamentally has been restructured and changed.” Special interests may be distracting government from decisions that would benefit long-run economic growth, Luce and Rosenblum suggested. Politicians are hesitant to adopt long-run strategies that require short-run sacrifice, Rosenblum said. “The reason we have changes on the margin,” Luce explained, “is because every special interest is represented, but very seldom is the common good represented in terms of a voice that says, ‘Hey, what about the kids?’ ” He challenged people and business to get involved in the political process and to learn how to bring about change.

Government credibility. Governments that set long-term policies and stick to them establish credibility that benefits their economies. Credible policies encourage firms and individuals to make long-run plans. One important policy that must be credible is a government’s stance on inflation. Said Rosenblum: “Inflation distorts prices, worries investors, and slows capital spending. Sound, stable, and credible macroeconomic rules that allow economic agents to take a long-term view support economic growth. Low-inflation countries with sound monetary policies tend to grow faster than countries with high inflation.” (See the box titled “Exporting Credibility.”) The exchange value of a nation’s currency depends on three things, Rosenblum explained: “One is the current interest rates in each country, relative to one another.
Another is expected prices—expected inflation in each country. The third factor...is the expected growth rate of national income. Foreign exchange traders are betting every day on economic growth, on whether or not governments have a commitment to credible growth policies.” The measure of Federal Reserve credibility, Rosenblum added, “is the gap between long-term interest rates and short-term interest rates. When long-term rates get well above short-term rates, the Fed may be suffering from a credibility gap....Any policy that lacks commitment and credibility, whether it be a government or corporate policy, is bound to fail.” The credibility of government policies is easily measured in the international marketplace. “In this world, central banks can run, but they can’t hide. Their policies will be evaluated instantly on the world’s computer screens and reflected instantly in currency prices,” McTeer said. “That doesn’t imply a loss of power based on fundamentals. It does imply a loss of the power to fool some of the people some of the time. The external value of a nation’s currency will depend on the relative soundness of domestic monetary policies no matter what is said or done in the foreign exchange markets.” He suggested that the information revolution will lead to, “not a gold standard or a Bretton Woods standard but to an information standard for money where currency values instantly respond to each bit of new information in a global plebiscite.”

Letting international markets work. A government’s commitment to free trade must be credible. Countries have many incentives to open their borders to foreign markets. Free trade will stimulate economic growth, increase domestic demand, and allow firms and individuals to consume a greater variety of goods and services.
In fact, free trade is “the only magic bullet on the horizon with the potential...to raise our living standards significantly,” McTeer said, explaining that soon the world will operate as a single economy. Quoting Walter Wriston in The Twilight of Sovereignty, McTeer said, “National sovereignty, including monetary sovereignty, is rapidly succumbing to the relentless pressure of computer and communications technology and to satellites and fax machines.” This technology, McTeer noted, gives individuals all over the world instantaneous access to information and makes national borders increasingly irrelevant. “An economy is more likely to grow if it has an open, competitive trade policy rather than high protectionist barriers,” added Rosenblum. “Isolation is synonymous with poverty. Closing borders can be done only at a very extreme cost,” he said, comparing nations whose principal difference is the openness of their trade policies, not their people or location. North Korea vs. South Korea, the former East Germany vs. West Germany, Hong Kong, Singapore, and Taiwan contrasted with mainland China are all examples. Donald J. Carty agreed that free trade is essential, but he said that the manner in which a country opens its borders is also important for the long-run health of an economy. Carty said the United States has been slow to open trade, often implementing policies that sacrifice long-term competitive advantage for short-term political and economic gains. “While few U.S. companies can blame all their international problems on U.S. trade negotiations, I think it’s fair to say that U.S. trade policy has contributed to our worldwide competitive difficulties in virtually every industry,” he said. 
Exporting Credibility

As national economies become more integrated into a global economy, the Federal Reserve can contribute to improved living standards worldwide by providing an anchor for nations pursuing price stability, suggested Dallas Fed President Robert D. McTeer, Jr. “Economies that are trying to pursue price stability that may not have a long tradition of internal monetary discipline may wish to import some of that discipline or borrow some credibility by pegging to a more stable currency,” he said. “As international trade becomes more important to the U.S. economy, we have to consider the impact of foreign economic conditions on domestic economic conditions....A more integrated world economy with increased capital mobility has potential implications for our conduct of policy,” he said.

A more formal exchange rate mechanism could be put into place after free trade and capital movements have spread to most of the Western Hemisphere. “After growth rates and inflation rates have converged and monetary and fiscal policies are reasonably compatible, then the Americas will need to have a look at alternative exchange rate mechanisms. Perhaps a tighter and more formal exchange rate relationship would be indicated,” he said. In the meantime, “Hopefully, the Federal Reserve System will conduct monetary policy in the next few years in a way that it may someday be regarded, as the Bundesbank has been in recent years, as an anchor of stability. If that could be the case, our contribution to world standards of living would dwarf any contribution we might make with occasional quarter-point jiggles in the fed funds rates or half-point changes in the discount rate,” McTeer said.

The United States “has often given considerably more than we’ve gotten in trade negotiations,” said Carty, observing that the United States approaches trade negotiations much differently than other countries. Most countries focus on creating a competitive advantage for their producers, while the United States has many other goals. The United States often begins negotiations with a sense of obligation to help less-fortunate countries. Then, believing in the superiority of domestic producers, the United States will support an agreement that opens trade opportunities, regardless of whether the agreement is imbalanced in favor of foreign producers. What’s more, U.S. economic interests often have been secondary to security and geopolitical objectives, he said. When entering bilateral agreements, the United States usually already has a highly developed industry that is ready for worldwide competition, “while most other nations want to limit competition to ensure their fledgling industry sufficient market share.” As a consequence, most bilateral agreements “limit rather than encourage competition,” Carty said.

Carty used the airline industry to illustrate the consequences of slow, uneven trade deregulation. The domestic airline industry, which was deregulated in 1978, has had difficulty integrating with the still heavily regulated international aviation market. Current trade agreements, according to Carty, give an advantage to foreign carriers that can partner with a domestic airline to create a global network, while U.S. airlines have much more difficulty acquiring access to international markets. Carty cited an open skies agreement recently negotiated with the Netherlands, “which gives KLM unrestricted rights within the United States in exchange for allowing the U.S. carriers to have unrestricted rights in the Netherlands.” That agreement “will strengthen KLM at the expense of U.S.
carriers,” he said. Further, U.S. carriers operating abroad must deal with a host of nontariff barriers including, in some countries, requirements that U.S. carriers hire their competitors to provide customer service. “The restrictive instincts of most of the world’s governments and the unwillingness of the U.S. government to exert its negotiating leverage have denied U.S. carriers not only dominance but even the level of leadership that should be the natural benefit of being based in the world’s largest aviation market,” said Carty. The time has come, he said, for the United States to call for a worldwide, multilateral open skies agreement: “We can turn all the world’s airlines loose to work the magic of competition in every market. But if other countries are not willing to support new opportunities for all, they must not be allowed to buy their way into U.S. markets to the detriment of U.S. producers....Most governments recognize...that workers and consumers are simply the same folks in different clothes.”

Investing in human capital

Throughout most conference sessions, participants returned to the theme of investing in human capital—the people whose skill and labor make up a nation’s work force. Inman pointed out that, while the United States is always at the forefront of creating new technologies, we have become increasingly laggard in turning that technology into commercial goods and services. Part of the reason, he suggested, is because we do not have the highly trained work force to operate sophisticated technologies. “There was a day, not too long ago,” said Paige Cassidy, “when any average high school graduate with basic mechanical aptitude could expect to find employment in industry. That day is gone. The value of unskilled labor is rapidly disappearing. In our workplace of the future, employees on the factory floor must be highly literate and computer friendly.
And if industry is to be competitive and if our national economy is to be viable, we must have a skilled, highly trained work force.” Speakers suggested several ways to maximize the potential of America’s human capital: encourage immigration, capitalize on cultural diversity, reinvigorate education, and reform health care. The need to control costs, reduce bureaucracy, and implement market-based reforms in the economy was a common theme. Our economy, they said, needs major structural reforms rather than tinkering at the margins.

Immigration. Opening our borders to both goods and people will boost our nation’s human capital and stimulate economic growth. As Julian Simon explained: “Every time our system allows in one more immigrant, on average, the economic welfare of American citizens goes up, and every time we keep out one more immigrant, on balance, our economic welfare goes down....Additional immigrants, both the legal and the illegal, raise the standard of living of U.S. natives and have little or no negative impact on occupational or income class.” Since the turn of the century, the United States has become much bigger, wealthier, and better able to assimilate growing numbers of immigrants. But, Simon explained, both legal and illegal immigration have declined significantly, both in absolute numbers and as a proportion of the total population in the United States: “We have this phrase that we are a nation of immigrants; we are not a nation of immigrants now. Indeed, there are many countries in the world that we tend to think of as homogeneous that have a much larger proportion of immigration than we do. For example, Great Britain, Switzerland, France, and even Sweden have a much larger proportion of immigrants in the population than we do.” Misperceptions about immigrants hurt the economy, he said.
“There is only one painless way of dealing with the deficit that does not mean the pain of...reducing services or increasing taxes, and that is to bring in more immigrants....Immigrants pay much more in taxes than the cost of the welfare services they use. Immigrants come when they are young, when they are strong, and when they are earning and contributing, not when they are old and are taking in.” Rather than displacing natives from jobs, immigrants create jobs through their purchases and by opening new businesses—small businesses, a main source of the nation’s jobs. “Immigrants earn and they spend, and their spending provides jobs for others. Immigrants simply expand the economy,” he said, explaining that immigration helps raise productivity, improves our competitiveness, and provides an invaluable network of communications abroad. Federal Reserve Bank of Dallas Simon suggested that immigration is often considered harmful because “it is very easy to identify the losers, much harder to see the winners....It is so commonsensical that immigrants push natives out of jobs, [but] the virtue of economics is it gives us anti-commonsensical, anti-intuitive answers to many questions.” Simon recommended a policy of phased, incremental increases in immigration: “Increase the level of immigration by a million people now, and watch what happens in the next three years. If anything unexpected happens, we could adjust for it.” If the immigrants continued to assimilate nicely into the economy, he said, the United States could raise the limit by another million for another three years and continue in a “systematic and controlled way...to allow in more immigrants and make us a better country.” Capitalizing on our diversity. Racism and naiveté about other cultures cause some people to oppose immigration. But the key issue for policymakers should be how many human beings, not which ones, Simon said. 
To benefit from the human capital open borders provide, the country must also learn to value fully all of the human capital that is already here. “Our greatest strength is our cultural diversity, but we are not using the full capability of all our work force,” cautioned Major General Hugh Robinson. Some corporations, Robinson explained, undervalue workers by applying stereotypes to women, African–Americans, Hispanics, Asians, and others. When people act as if these stereotypes are true, the country does not use its work force to its full capability. “We are talking about using the full capability of our work force and not devaluing some part of it for some unknown reason that nobody can figure out....It is a business decision to develop our work force so that all have the same opportunity,” he said. Cultural diversity is also an asset in the global marketplace because corporations must deal with the diversity that they will encounter in other nations. “Should we not put our best foot forward by utilizing the diversity we have here in this country?” Robinson asked. “It is a business decision to adjust the work force and to participate actively in this global environment.” Capitalizing on our diversity will become even more important as our population becomes more diverse. According to Robinson, “85 percent of the entrants into this country’s work force in the year 2000 will be people of color and women. Furthermore, by the year 2050, projections show that whites will slip below 50 percent of the national total. If you look at the world in which we live, people of color are already and have long been the majority.” Robinson noted that while education is the key to our success as a nation, who does the teaching and what is taught are also very important: “We must really understand diverse peoples and cultures, and we must do it in a way that promotes a valuing of diversity.
If we don’t value diversity, it won’t be a plus for us....In American education, as we move through the final decade of the twentieth century, we have the responsibility to move closer to the fundamental goal that underlies multiculturalism, pluralism and cultural diversity—the transformation of the American economy into a place where difference no longer makes a difference.”

Education. Other speakers echoed Robinson’s concern about the importance of education as a factor in America’s competitiveness and long-term growth potential. “Education,” McTeer said, “is key because the United States has a competitive advantage in producing goods and services that use our abundant human resources. Therefore, the more open we become to the world, the more serious will be the consequences of a flawed education system.” The U.S. system of higher education is competitively supplied, and it is the envy of the world. In contrast, McTeer pointed out, our primary and secondary systems are basically “local monopolies [that] not only are expensive; they don’t consistently produce a quality product.” Education is one way to obtain “not only higher incomes, but a fairer or more equal distribution of income as well—clearly a win-win situation,” suggested Rosenblum. Speakers criticized the current educational system for not adequately preparing our youth and for failing to implement successful reform despite years of talk. They called for fundamental change. “Tinkering at the margins is not going to work,” said Milton Goldberg. “If our schools are to be competitive in the twenty-first century, then nothing short of revolutionary change will have to occur. Despite almost a decade of talk about education reform, education reform has been disappointing.” Often, educators have attempted only reforms that are easily implemented, rather than working on more complex, in-depth changes, he said. “Look what happens when we don’t make the investment in education,” Robinson said.
“We end up sending people to prison and having to support them there at some ridiculous cost of $30,000 or $40,000 per year. We send them to hospitals because their health isn’t good. We send them to other places where our public funds are spent in a nonproductive fashion....If you think education is expensive, try ignorance,” said Robinson, quoting former Harvard University President Derek Bok. “If we merely put more money in the same system”, Luce said, “we will get the same results. We have to begin to change the system.” “How do we break from tradition? What is it going to take to break the mold to make a difference in terms of doing what is right for our young people?” asked Marvin Edwards. Three strategies the speakers suggested to improve education were emphasizing parental involvement, restructuring the educational system to incorporate market-based incentives, and increasing the amount of time children are at school. Parental involvement. According to Clint Bolick, “The one thing almost everyone in education will agree on is that parental participation is the single most crucial factor in terms of educational outcomes.” Cassidy agreed: “If parents are not involved, everything slows down. When the parents are involved, things speed up very quickly....[But] we know many children in this country are not coming to school ready to learn.” Many children are in an environment that does not prepare them to be good students, and some parents, Edwards said, “are not prepared to be parents and need to be trained.” Cassidy described several programs that would involve parents and the community as a team to help prepare children and facilitate the educational process. Many of these break-the-mold programs are being financed by the New American Schools Development Corporation. “I think we have to do a much better job in American education of making it very clear...that the parent is the child’s first and most influential 60 teacher,” Goldberg said. 
“I feel about parent responsibility as I do about student expectation. Our expectation for achievement in American schools generally has been very low; students have given us exactly what we have expected of them. High expectations never hurt anybody.”

Incentives. Some speakers suggested that the education system, like the rest of government, would operate more effectively if it incorporated market-based incentives and were less regulated. “Our classroom teachers are the most regulated professionals in America. We need to treat our teachers as professionals. They must be empowered to function like other professionals, with the opportunity to adapt, change, create, and modify as they best judge,” Cassidy said.

Luce concurred: “What we need to do is treat the teaching profession once again as a profession, not a guild. We have to break the cycle of across-the-board pay increases....We have to change the rules so that teachers who are not performing can be discharged.” Luce suggested that the business community use its skills to develop procedures for the difficult task of evaluation and assessment. “I understand why many teachers do not want to be evaluated by the principal who, in their mind, is a fired football coach. But let me tell you something. We do have assessment in football. If we do not win in football, you are fired.”

The current education system operates based on lobbying much like the rest of government, he said. “Do you know in Texas public schools, the last time I looked, there are 600,000 students taking agriculture vocational educational courses? Folks, there will not be 600,000 agriculture jobs in the state in the next five lifetimes available to kids. But we have 600,000 kids taking agriculture vocational education. Why?
Because the agriculture vocational education lobby is strong as horseradish....We give a school more money to teach a vocational education course than we do to teach math and English,” Luce said, adding that many schools spend more money on one-day processing of their football films than on their English department curriculum.

Restoring incentives would spur innovation in the educational system, Luce said, suggesting that, rather than pay schools for having children in class, it may be more effective to pay schools for graduating children. “Every child can learn, but every child learns in a different, unique, special way. And yet we are still conducting our classes as if they were assembly lines for a manufacturing model that disappeared many years ago.” With an incentive program, Luce said, high school classes could grow to as large as 250 students. “One year later, they go to college and they have a class that big. Wouldn’t it make more sense to do that and take that same money and spend it on a 3-year-old?...We should have prekindergarten programs so all children, of all races and financial groups, will have an equal opportunity to line up at the starting line in the first grade equally prepared and equally equipped,” he said.

Choice. One way to introduce incentives into America’s educational system is by making public schools compete with private schools. Several speakers noted that public schools are structurally different from private schools. Competition has helped keep down the size of private school administration. In large urban school systems, Bolick noted, about 50 cents of every dollar spent on education makes it to the classroom in terms of salaries and instruction; in private schools, about 95 cents of every dollar spent makes it to the classroom.
School choice, a proposal that would enable parents to apply state funds to either public or private schools, can help introduce incentives into the public education system because schools would have to compete for students. Bolick advocated a system in which public schools and private schools participate on a voluntary basis. School choice, he said, could provide the impetus for radical deregulation and decentralization of public schools: “Public schools will compete for kids they previously could take for granted, since these kids had nowhere else to go.” Bolick would like a public school system “that exists because parents choose to send their children there, not because they have no alternative....Choice transfers power over basic educational decisions from bureaucrats to parents,” he said, adding that parents have the greatest stake in their children’s success. Choice has worked at universities, Bolick said, because college students can take their financial aid anywhere. With choice, “if the schools do not attract enough students, they literally go out of business.”

But proposals for choice leave a lot of questions unanswered, according to Edwards. “We have heard a lot about choice over the years....Is choice the ability to have any child choose any school anywhere? If so, are we going to provide transportation for any choice any child makes anywhere? So then we have a major, major, financial commitment.” Bolick countered: “In most instances you could have choice plans and provide transportation for low-income kids and still not spend as much money as the public schools are currently spending.”

“I think we can probably make anything a success in a vacuum, in a small scale....Choice is not realistic unless choice is available for everybody. We have a lot of experiments in choice, but we do not have choice.
No one has shown how choice can work in an entire state or an entire county or across lines of a school district,” Edwards said. Choice “is an emotional response to a need on the part of the American people, because it sounds good, but no one is ready to define it. Until someone has the courage to really define it beyond emotion, we do not have an answer to this big dilemma.”

Despite many good suggestions for reforming the education system, problems arise in implementing effective change, speakers noted. “We need to guard against quick fixes—ideas that simply sound good and capture people’s emotion are not going to fix education,” said Edwards, who recommended working within the current system to find reforms that can be effective. Goldberg, however, restated the need for systemic reform that considers many variables if we are to improve the achievement of American children. He noted a problem in moving educational reform from pilot programs into mainstream reform. “American education is a graveyard of innovations that have worked. One of the most serious problems in American education is our inability to learn from experience.”

Time and learning. Goldberg suggested that the United States rethink the way students spend their time. American students, he noted, spend less time on school work than students in most nations, and the time that is spent in the classroom and on homework is often used ineffectively. Goldberg and Cortes recommended lengthening the school day to give teachers more time for planning and to help students be more motivated to learn. “Students will spend extra time in additional recess, additional lunchtime, and additional extracurricular activities....When the youngsters come into the classrooms, the work is intensive, direct, and heavy. The youngsters have already blown off steam,” said Goldberg, adding that teachers will be held accountable for planning and will have more time to work at their jobs.
“There are some very simple ways that time can be increased in American schools. For example, we have to reduce student absenteeism. A youngster who is not in school is not learning schoolwork. We can improve school management to make distractions in the classroom far fewer. We can improve classroom management so teachers know how to do the instructional work more efficiently. And we can do the most dramatic of all; we can restructure the school day and think about year-round schools. The research is very clear: there is a very direct correlation between time spent on homework and student achievement, particularly at the junior and senior high level,” Goldberg said.

Lengthening the school day will increase the productivity of our past investment in education, he said, explaining: “We have a capital investment of enormous proportions in the school buildings of this country. These fancy buildings that we built all over the country for hundreds of millions of dollars are being used for instruction 15 percent of the available instructional time.”

Health care. Another way to invest in human capital is to cure our nation’s sick health care system. Spiraling public and private health care costs have increased the expenses of employers and government and are limiting the availability of medical services. Taming health care costs is crucial to containing business costs and ensuring a quality work force. Speakers addressed two problems: consumers cannot obtain the information necessary to make choices about the cost and quality of various health care options, and medical care is overconsumed because consumers do not pay the marginal cost of health care services. “There are strong reasons to believe that the additional medical services patients receive are not worth the additional costs,” William A. Niskanen suggested. Speakers called for a structural reorganization of the health care system to realign incentives and reduce costs.
“We need a major overhaul instead of marginal changes...that do not lead to overall savings,” Ron Anderson said.

Costs. Rising health care costs have placed a heavy burden on our economy. In the past 30 years, “total expenditures for medical care have increased from about 5 percent of GDP [gross domestic product] to now about 14 percent of GDP....The cost of health insurance is now the most rapidly increasing component of private payrolls, and payments for public medical programs are the most rapidly increasing component of government budgets,” Niskanen said.

Two causes of rising health care costs cited by participants were consumers’ and physicians’ lack of incentive to reduce expenditures in a system subsidized by insurance, government payments, and tax deductions; and a lack of competition that stems from the difficulty consumers experience in collecting information about the cost and quality of health care providers.

Under the current system, individuals who are most able to control costs have little incentive to limit expenditures. “The share of medical costs that are borne directly by the patient has declined from about 50 percent to under 20 percent,” explained Niskanen. “Neither patients nor physicians have an adequate incentive to control the costs of medical care.” Health care is subsidized by tax deductions and insurance; consumers do not pay the true price of medical care. Frequently, consumers do not even see the true price because a third party—government or private insurance—pays the bills.

“The common phrase ‘health insurance’ is a double misnomer. The event that is insured is not some adverse change in health status but is the payment for some specific medical service. The basic concept of insurance is to reduce the variance of cost among groups with the same prior risks. Most plans, however, include people with very different prior risks in the same premium pool.
What we call health insurance in this country would be much better described as a medical prepayment plan. These plans redistribute income from people who use few medical services to people who use many medical services,” Niskanen said.

Subsidizing health care costs results in an overutilization of health services beyond the level that is optimal for society. The use of technology in medicine illustrates the breakdown between health care incentives and costs. “The technological revolution,” Douglas Werner explained, “be it therapeutics or diagnostics or combinations thereof, is the major cost driver [in medicine].” “The people who make decisions on the types of technology to use do not pay the bills. In basically all other sectors, improvements in technology have led to reduction of relative costs, not increases in relative costs,” Niskanen commented. Hospitals can afford to purchase amenities such as chandeliers or another CAT scan or another MRI because “someone else is paying for that. They are not really making the hard decisions because they pass those costs on to someone else,” Anderson said.

Consumers have no idea if they are buying chandeliers or superior medical care because they lack information about the cost and quality of health care services. Individuals needing health care rarely call and ask a hospital’s room rate or mortality rate, for example. “We have a system driven by utilization,” said Anderson, adding that there is no measure for health care quality and “people don’t know what they’re buying.” Harper agreed and noted that hospitals and doctors respond to these incentives: “The reward system has been wrong. Rewards have been based on utilization, on quantity and not on quality of care.” The lack of incentive for cost or quality control has encouraged the growth of an expensive health care bureaucracy.
According to Anderson, 23 percent of health care costs may be the result of the bureaucracy, which, he believes, is making the health care system incoherent. “It should not drive patients and doctors and nurses and everyone crazy because it is irrational,” he said.

Panelists disagreed on the most effective reform. Managed care, in which the public or private sector decides which doctors consumers should use and which illnesses will be paid for, could help control health care costs. Market-based reforms, including the elimination of subsidies to reduce demand, were also suggested. Panelists disagreed on how reforms should be implemented but generally agreed that consumers need more information about the cost and quality of health care providers, along with incentives to control costs.

Increasing information. Measuring health care quality and cost and making that information accessible to the public could help individuals make decisions about health care purchases. “I think competition will work if the purchasers of health care really know what they are buying,” said Robert Shoemaker. “The system needs information to identify high-quality, cost-efficient providers of services,” explained Dwain Harper. “If you look at the literature, [there is little] agreement on what is a measurement of quality....There is going to have to be some investment put into developing the technologies to measure these results,” he said.

The Cleveland Health Quality program, the state of Oregon, and others have attempted to improve health care productivity by using technology to bridge the information gap and help consumers measure quality. The Cleveland group abstracts information manually off the records of thirty-one hospitals and places it on a database. Insurance companies can then easily compare information about hospitals that is risk-adjusted for outcomes and patient satisfaction.
According to Harper, this program helps determine the “highest quality outcomes and the most reasonable costs.”

Providing incentives to control costs. Even if more information about the cost of health care services were available, consumers would still need an incentive to base decisions on what they know. One suggestion for creating such an incentive is to require consumers to pay the marginal cost of consuming medical care. For example, curbing tax deductions for private and public health insurance could reduce the stimulant on the demand for medical services. According to Niskanen, “The reduction of tax-subsidized medical prepayment plans is a necessary condition to reduce the growth of demand for medical care.” He advocated an income test to limit tax deductions for medical services and insurance and adding deductions for set levels of preventative care. “It is important to recognize that a substantial part of tax-subsidized health insurance accrues to higher income people. Higher income people are more likely to be privately insured, and the value of that insurance is a function of their marginal tax rate. Similarly, the people on Medicare with the highest incomes are the ones who likely live the longest, and the value of Medicare increases with the marginal tax rate. Clearly, the amount of tax-subsidized health insurance could be substantially reduced without much change in the insurance available to the poor.”

Niskanen also recommended that the current prepayment form of medical plans be restructured as indemnity insurance, similar to auto insurance. “Patients would be paid a fixed amount above some deductible per illness or accident but would bear the sole marginal cost of whatever medical services they elect,” he said.

Managed care. Managed-care programs reduce costs by providing a fixed level of care at a reduced price.
Under managed care, a company or government agency decides which types of illnesses to treat and where consumers can obtain medical care. Costs can be reduced by having a network of physicians under contract with agreements on rates, patient volume, and quality, Werner said. He noted that managed-care programs are estimated to reduce costs by 15 to 25 percent. “It is time for somebody to address the costs, even if it means capping reimbursement,” Anderson commented.

The state of Oregon and several private insurance companies are attempting to control costs through managed care. The Oregon plan, Shoemaker explained, categorizes groups of people and determines which medical procedures will be covered and at what price for each group. The program estimates the providers’ costs and adds “a reasonable level of compensation.” Shoemaker stressed that “emphasis is placed on preventive care,” but “people will get an adequate, but not an excessive, level of care.”

Anderson suggested that a managed-care program should be combined with indemnity insurance and operated with a single-reimbursement system. Such a system, he said, would be understandable, portable, and would stop cost-shifting. “I believe it can be a social insurance program with competition for quality, delivering care that is patient-centered,” he said. Werner, however, complained that a single-payer system would be expensive because it would lack competition on the administrative costs of health care.
Werner suggested managed care as a viable strategy to use in a transition period, until “we get better control of measuring quality, providing quality, and understanding the economic benefit of quality medical services.” Niskanen countered that while managed care costs less than care in which people select their own physicians, it is a different standard of service that “does not prove to be effective in reducing the rate of increase in total cost.”

Conference participants, concerned about the availability of health insurance, noted that it is cheaper to provide preventative care than emergency care for people without insurance. Shoemaker advocated that employers be required to either make health insurance available to employees or pay a tax that will give them access to an insurance pool, a system frequently referred to as play or pay. Anderson and Niskanen expressed concern that a play-or-pay system would force employers to cut back on employees or wages. “Pay or play is merely a transition to national health insurance,” Niskanen said.

Investing for Growth: Thriving in the World Marketplace
A conference sponsored by the Federal Reserve Bank of Dallas
October 29–30, 1992

The Role of the Federal Reserve in the International Economy
Opening Address
Robert D. McTeer, Jr., President, Federal Reserve Bank of Dallas, Dallas, Texas

Investing in International Trade and Growth
Chairman
Gerald P. O’Driscoll, Jr., Vice President and Economic Advisor, Federal Reserve Bank of Dallas, Dallas, Texas
Speakers
Structural Reform in Mexico
Ariel Buira, Director of International Affairs, Banco de Mexico, Mexico City, Mexico
The Positive Consequences of Immigration
Julian Simon, Professor of Business Administration, University of Maryland, College Park, Maryland
The Future of the Southwest in a Global Market
Admiral Bobby R. Inman, U.S. Navy (Retired), private investor, Austin, Texas
The Diversity Imperative: Looking to the Year 2000
Major General Hugh G. Robinson, U.S. Army (Retired), Chairman and Chief Executive Officer, The Tetra Group, Inc., Dallas, Texas

Investing in Human Capital: Revitalizing Education
Chairman
Marvin E. Edwards, General Superintendent, Dallas Independent School District, Dallas, Texas
Speakers
It’s About Time to Learn
Milton Goldberg, Executive Director, National Education Commission on Time and Learning, Washington, D.C.
The School Choice Imperative
Clint Bolick, Vice President and Director of Litigation, Institute for Justice, Washington, D.C.
New Ideas in Education
Paige S. Cassidy, Director of Communications, New American Schools Development Corporation, Arlington, Virginia

Investing in Human Capital—Reforming Health Care (A Panel Discussion)
Moderator
Michael R. Levy, Publisher, Texas Monthly, Austin, Texas
Panelists
Ron J. Anderson, President and Chief Executive Officer, Parkland Memorial Hospital, Dallas, Texas
Dwain L. Harper, Executive Director, Cleveland Health Quality Choice Coalition, Cleveland, Ohio
William A. Niskanen, Chairman, CATO Institute, Washington, D.C.
Robert C. Shoemaker, Jr., State Senator and Chairman—Health Insurance and Bioethics Committee, Portland, Oregon
Douglas C. Werner, Senior Vice President—South Central Market Area, Aetna Health Plans, Richardson, Texas

Economic Principles vs. Political Agendas
Harvey Rosenblum, Senior Vice President and Director of Research, Federal Reserve Bank of Dallas, Dallas, Texas

Investing in the Southwest Economy (A Panel Discussion)
Moderator
Michael R. Levy, Publisher, Texas Monthly, Austin, Texas
Panelists
Ernesto J. Cortes, Jr., Southwest Regional Director, Industrial Areas Foundation, Austin, Texas
Linnet F. Deily, Chairman, President and Chief Executive Officer, First Interstate Bank of Texas, N.A., Houston, Texas
Tom Luce, Partner, Hughes and Luce, Dallas, Texas
Donald C. Shuffstall, President, Don Shuffstall International, El Paso, Texas

International Aviation: Opportunities If...
Donald J. Carty, Executive Vice President—Finance and Planning, AMR Corp. and American Airlines, Inc., Dallas–Fort Worth, Texas