What Monetary Policy Can and Cannot Do

BY ANTHONY M. SANTOMERO

Based on a speech presented by President Santomero to the National Association for Business Economics, New York, September 10, 2001

When we consider monetary policy, there is some common ground on which most economists can readily agree. But there are also more contentious issues — areas with legitimate room for disagreement. In this article, President Santomero reviews both the areas of agreement and the areas open to debate and offers his perspective on them. He concludes with some thoughts about the implications for the conduct of monetary policy.

Most Fed policymakers — indeed, most professional economists today — would agree that (1) the goal of monetary policy is to help create an economic environment that fosters maximum sustainable growth, and (2) the most important contribution the Fed can make to that environment is to provide price stability.

Behind this philosophy of appropriate monetary policy goals lie some important economic principles on which, again, I think there is broad agreement. The first economic principle is that price stability is crucial to a well-functioning market economy. Prices are signals to market participants. A stable overall price level allows people to clearly recognize shifts in relative prices and adjust their decisions about spending, saving, working, and investing in welfare-enhancing ways. Inflation, by contrast, jumbles and distorts price signals and generates bad economic decisions.

The second economic principle is that price stability is a contribution to financial stability and attendant economic growth that only monetary policy can make. We know that relative prices will fluctuate in response to shifts in the supply or demand for particular products, but it takes a persistent influx of excess money and credit to sustain a general inflation. At the same time, money is neutral in the long run.
That is to say, changing the supply of money does not affect the pool of real resources available to the economy, and so, ultimately, it affects only the price level.

To these two principles I will add two empirical observations about which I hope we can also agree. The first is this: For the past 22 years, the Fed has focused on the goal of price stability and has been relatively successful in achieving it. We took the economy from the double-digit inflation of the late 1970s to a core inflation rate in the range of 2 to 3 percent — a range approaching essential price stability, that is, inflation low enough to no longer significantly influence economic decisions. Equally important, as the downward trend in market interest rates attests, we have succeeded in reducing inflation expectations. Market participants not only see stable prices today, but they also expect stable prices to persist for the foreseeable future. This is evident from a number of sources. Not surprisingly, the Philadelphia Fed’s Survey of Professional Forecasters is my personal favorite. Long-term inflation expectations, measured in our survey as the average rate of change in the CPI over the next 10 years, have held steady at 2.5 percent since early 1999. Establishing and maintaining confidence in the Fed’s goal of reaching for price stability is crucial to fostering productive saving and investment decisions.

The second empirical observation on which I think monetary economists will agree concerns the Fed’s policy strategy. We talk about monetary policy, recognize that inflation is a monetary phenomenon, and express belief in the neutrality of money. But the mechanism used to achieve our goal of price stability no longer involves setting targets for monetary aggregates.

Anthony M. Santomero, President, Federal Reserve Bank of Philadelphia (Business Review, Q1 2002)
Indeed, the entire disinflation period coincides with the abandonment of one monetary aggregate after another, as none exhibited a predictable velocity. Rather, the Fed’s policy strategy has been to move the fed funds rate in the direction it thinks necessary to achieve its inflation target and bring aggregate demand into balance with the economy’s long-run potential supply. This is the essence of the so-called Taylor rule.

The principles and observations I’ve just enumerated deliver a straightforward answer to the question of what monetary policy can do. Monetary policy can and should strive to establish a stable price environment, and the Fed has made considerable progress toward that goal by pursuing a persistent, if not particularly precise, strategy over the past 20 years.

Of course, this is where the controversy begins. Having acknowledged that monetary policy can and should provide long-term price stability, the question arises: Can monetary policy do more? Some would say monetary policy cannot do more. Advocates of this view believe that attempting to do more is unlikely to improve economic performance in the short term and, in fact, may even impair economic performance in the long term. Others would say that monetary policy can do more. It can go beyond stabilizing prices in the long term and help stabilize the real economy’s performance in the short term. That is to say, they believe monetary policy can be used to manage overall demand with sufficient precision over sufficiently short periods of time to reduce the volatility of output or employment in the face of demand or supply shocks. Is this so? Unfortunately, to my mind the answer is not a simple yes or no. It depends on the characteristics of the shocks and the state of economic science.
An analogy is helpful here. Suppose I raise this question: “Can doctors cure people?” One response might be: “Doctors can help people suffering from a variety of illnesses. In some cases, they can completely cure the patient of the illness. In other cases, they can mute the symptoms. In still others, they can do very little. I expect that over time, as medical knowledge and technique improve, doctors will be able to treat more illnesses and treat them more effectively. But even the most optimistic person doubts that we can ever conquer all of the maladies facing humanity.”

The situation is similar for monetary policy. Monetary policy can be used to eliminate or at least mute the impact of some shocks to the economy, but not all. And, over time, as economic knowledge and policy techniques improve, policymakers’ capacity to stabilize the economy should increase, and indeed it has.

I think the medical analogy is useful. But it is just an analogy. Economics is not medicine. The speed with which the two disciplines make progress and the ultimate bounds on their capacity to improve welfare are not necessarily the same. We all stand in awe of the accomplishments that medicine has achieved in the last 50 or 100 years. Furthermore, we all anticipate tremendous progress in medical science in the years ahead. The past and likely future course of monetary economics is not so clear. Monetary economics has made significant progress over the years. We are surely better at responding to demand-side shocks than we were in the 1930s. We are also better at responding to supply-side shocks than we were in the 1970s. On the other hand, how closely can we calibrate the proper monetary policy response to sudden demand or supply disturbances?
I think the answer is: not all that closely. Look, for example, at the Fed’s response to the productivity growth surge of the past few years or to the stock market correction. Not surprisingly, with the benefit of hindsight, the calibration was not perfect. Can we reasonably expect to operate at a higher level of precision in the near future? I do not believe so. As policymakers, we face considerable limitations on our capacity to assess, analyze, and shape economic conditions. We are limited in three fundamental ways.

First, our capacity to measure and benchmark the economy’s performance is limited. What is the current economic situation? How close are we to the economy’s supply potential? How robust is demand relative to that potential? These are questions we can answer only imprecisely.

As professional economists, we all know that our measurements of current economic conditions are subject to almost constant revision. The point I want to emphasize today is that these revisions can be substantial enough to change policymakers’ perception of the need for or at least the extent of policy action. This is an issue that we in Philadelphia have spent considerable effort analyzing. Currently, our Bank is in the midst of a research project called the real-time data set for macroeconomists, being led by Dean Croushore and Tom Stark of our Research Department. The project assembles macroeconomic time series as they were recorded at specific points in time and explores the implications of data revisions for economic forecasting, hypothesis testing, and policymaking.* For my purposes here, suffice it to say that examining these time series of different vintages provides an interesting perspective on monetary policymakers’ situation. For example, in early October 1992 policymakers were contemplating action to stimulate the economy because they were concerned that the recovery from the recession of 1990-91 was stalling.
To someone looking at the real GDP series we are using today, this anxiety would seem strange. The data show that real GDP grew at 3.8 percent in both the first and second quarters of 1992 and at 3.1 percent in the third quarter. But policymakers’ concerns seem much more reasonable when you look at the real GDP series they were using back in the fall of 1992. That series showed growth of just 2.9 percent in the first quarter of 1992 and 1.5 percent in the second quarter. This example shows that the data on which we rely in real time can be imprecise enough to distort the tenor of our policy deliberations and the apparent wisdom of alternative policy actions.

Aside from such basic measurement problems, there is the issue of getting good readings on the economic parameters by which monetary policymakers get their bearings: a benchmark for potential output on the supply side and for the appropriate real interest rate on the demand side.

On the supply side, consider the current discussion about the U.S. economy’s long-run capacity for growth. The remarkable gains in productivity that occurred in the latter half of the 1990s came as something of a surprise to economists. The persistence of those gains has convinced most of us that technological innovations have elevated underlying productivity growth significantly from that of the prior two decades. I personally believe that productivity growth will remain elevated as firms learn to make better use of the technology they purchase. But the truth is that the current state of economists’ knowledge about the interplay of technology, innovation, and productivity does not afford us much more than a good guess as to the pace and pattern of potential supply growth in the future.

On the demand side, policymakers face a similar knowledge gap. Since the instrument of monetary policy is the fed funds rate, the strategy of monetary policy is to set the short-term real interest rate at an appropriate level relative to its long-run equilibrium value. What is that equilibrium value? It is not a constant, of course. It is the outcome of myriad individual saving and investment decisions, themselves predicated on factors subject to numerous fluctuations, such as changes in stock market wealth, perceived business opportunities, and fiscal policy. As a practical matter, the equilibrium interest rate may turn out to be relatively constant over time or subject to relatively easily predicted shifts. But, again, the state of our knowledge is limited. To put this issue in a current context, we might all agree that the federal tax cut package has increased the equilibrium real rate for the economy, but I think we would be hard pressed to agree by how much or for how long. I could have made a similar reference to the recent wealth contraction and its effect on interest rates.

To summarize, one fundamental limitation on monetary policymakers’ capacity to stabilize the economy in the short run is their limited capacity to measure or gauge economic performance very precisely, particularly in real time.

A second fundamental limitation on monetary policymakers’ capacity for economic stabilization is much broader. It is the limited capacity of economic science to model people’s economic behavior. I believe market expectations are rational in the long run. But in the short run, the marketplace is beset by waves of optimism and pessimism that move expectations irrationally.

* See “A Summary of the Conference on Real-Time Data Analysis” later in this article. The conference was held at the Philadelphia Fed in October 2001.
We should not lose sight of the fact that market participants are human beings, subject to emotions that can cause them to overreact or underreact to events. The result can be a significant change in spending that is neither sustainable nor socially desirable. The problem is that economic science provides little guidance as to the occurrence, impact, or likely persistence of such episodes. So it is difficult for policymakers to frame a response to them. I do not think we should ignore indicators of consumer and business confidence. If a shift in confidence is likely to introduce a substantial change in overall demand, monetary policy can and should respond with the aim of restoring demand growth to a pace consistent with potential supply. But I do not think the Fed has routinely taken, or should routinely take, policy actions to boost expectations or bolster confidence.

The third and final limitation on policymakers’ capacity to stabilize the economy in the short run is a familiar one: Monetary policy is a blunt instrument with an impact subject to long and variable lags. This is hardly news. In recent months, it has become a mantra in business news broadcasts that Fed interest rate cuts can take six to nine months or more to begin boosting the economy. What I’d like to call attention to is the irony that while there seems to be broader recognition that monetary policy is a blunt instrument, there also seem to be more strident calls for the Fed to use it with surgical precision. Financial market participants seem to expect prompt and precisely calibrated monetary policy actions that yield predictably timed and measured economic results. Such expectations are just not realistic.
The danger I see in such unrealistic expectations is that not meeting them — which is inevitable — could unnecessarily traumatize financial markets and undermine broader public confidence, thereby debilitating the performance of the economy.

Let me now turn to the third, and last, topic I want to address: What are the implications of all this for the Fed’s conduct of monetary policy? First and foremost, as I said earlier, monetary policy can and should provide a stable price environment. The Fed has been making substantial progress toward this goal in the U.S. over the past several decades. Its precise methods and strategies have varied, but focus and persistence were primary ingredients in the Fed’s success.

I think monetary policy can and should also contribute significantly to the short-run stability of the real economy. However, we must admit that the state of our economic knowledge and the efficacy of our monetary policy tools are limited in some fundamental ways. We cannot eliminate the business cycle entirely. What we can do is mute the impact of large and persistent negative shocks to the economy. The way to do this is to take full advantage of the knowledge and policy leverage we have available. I think the Fed has done this relatively well in recent years and continues to do so.

I have been participating in FOMC meetings for almost two years as a Fed president. Over this period I have seen that in making monetary policy decisions, the Fed uses the organizational structure of the FOMC to its best advantage. Reserve Bank presidents are constantly collecting up-to-date intelligence on current and likely future economic and financial conditions from their Banks’ boards of directors and through the contacts they make in the everyday course of operating a Reserve Bank.
The insights from this direct contact, coupled with the information from surveys like our Bank’s Business Outlook Survey, sharpen the picture we get from the other available statistics. I believe the composite picture of national economic conditions that emerges as the presidents and governors convene around the FOMC table is as accurate and up-to-date a representation as occurs anywhere in government or the private sector. Nonetheless, not all uncertainties are resolved around that table, and I think the decisions that the FOMC makes reflect a prudent approach to dealing with the uncertainties remaining. We generally move in careful increments at a measured pace. That kind of persistent, incremental action in what we perceive to be the right direction is likely to contribute more to economic stability than aggressive attempts at fine-tuning. Implementation of a monetary policy committed to price stability and achievable real sector stabilization ultimately generates the reasonable market expectations and public confidence we seek.

Looking ahead, we will continue trying to increase our knowledge and improve our policy strategies. Whether we can, in fact, achieve essential price stability and increase our capacity to stabilize the real economy, only time will tell. Meanwhile, in the interest of maintaining public confidence, I think it is important for the Fed to establish realistic public expectations about what monetary policy can and cannot do.

A Summary of the Conference on Real-Time Data Analysis

BY TOM STARK

In October 2001, the Federal Reserve Bank of Philadelphia hosted a conference on the use of real-time data by macroeconomists. The conference focused on five topics: data revisions, forecasting, policy analysis, financial research, and macroeconomic research. Below, Tom Stark presents a summary of the conference papers.
Almost nine years ago, the Research Department of the Federal Reserve Bank of Philadelphia began a project to investigate the importance of revisions to economic data. In its early stages, the project consisted of collecting economic data as they existed at various points of time in the past. We assembled an initial data set of key macroeconomic variables — called the real-time data set for macroeconomists — and made the data available on our web site.1 As part of its research program, the department hosted a two-day conference in October 2001 on the use of real-time data in economics. Economists from the Federal Reserve System and academia presented nine papers, many of which relied on the Philadelphia Fed’s data set, illustrating the importance of data revisions in economic analysis. This article summarizes the research presented at the conference.

As anyone who follows the economy knows, economic data are revised often. In fact, many economic variables undergo a nearly continuous process of revision. And those revisions can be very large, sometimes large enough to change economists’ view of economic conditions in the past — and sometimes large enough to change the results of empirical studies.

So, what are real-time data? Simply put, real-time data are the data as they existed prior to subsequent revisions. Since the data undergo many revisions, a real-time data set is one that tracks the values of observations as those values are revised over time.

Tom Stark is a senior economic analyst in the Research Department of the Philadelphia Fed.

1 For more information on the real-time data set for macroeconomists, see the article by Dean Croushore and Tom Stark, “A Funny Thing Happened on the Way to the Data Bank: A Real-Time Data Set for Macroeconomists,” Federal Reserve Bank of Philadelphia Business Review, September/October 2000.
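The idea of a vintage-indexed data set can be made concrete with a small sketch. The structure below is a hypothetical illustration, not the actual format of the Philadelphia Fed's data set; the vintage labels are invented, while the two pairs of 1992 growth figures echo the numbers quoted in the speech above.

```python
# Sketch of a real-time data set: each key is a data "vintage" (the series
# as recorded on a given date), mapping observation quarters to annualized
# real GDP growth. Vintage labels are hypothetical; the growth figures are
# the early- and late-vintage values quoted in the speech above.
real_gdp_growth = {
    "1992-10": {"1992Q1": 2.9, "1992Q2": 1.5},  # as seen in the fall of 1992
    "2001-09": {"1992Q1": 3.8, "1992Q2": 3.8},  # after subsequent revisions
}

def revision(data, quarter, early_vintage, late_vintage):
    """Revision = later-vintage value minus earlier-vintage value."""
    return data[late_vintage][quarter] - data[early_vintage][quarter]

# 1992Q2 growth was revised up by 2.3 percentage points between vintages.
print(round(revision(real_gdp_growth, "1992Q2", "1992-10", "2001-09"), 1))
```

A conventional data set keeps only the latest column of this table; a real-time data set keeps every column, so an analysis can be rerun exactly as it would have looked on any past date.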
Research on the effect of data revisions on economic analysis has been ongoing since at least the early 1960s, but such research has never really been in the forefront of economic analysis. Indeed, as noted in the opening paragraph of Frank Denton and John Kuiper’s often cited 1965 study: “The problem of measurement error has received rather limited attention in the estimation of econometric models and the application of such models to forecasting. The customary treatment has been to ignore the problem altogether, or else refer to it and then hastily assume, for the purpose at hand, that such errors do not exist” (italics added).2

One reason for such neglect is that analyzing the effect of data revisions is not easy to do: It is time-consuming to collect all the data necessary to track how economic observations change over time. However, in recent years, researchers, such as those at the Philadelphia Fed, have begun to assemble the real-time data required for such analyses. As a consequence, economic researchers are beginning to place more emphasis on the problems associated with revising data. As we will see below, researchers are using real-time data to study the efficiency with which government statisticians construct early releases of data, to see how revisions affect forecasts, to show how economic policymakers (such as the members of the Federal Open Market Committee) make their decisions, to examine whether financial assets are priced according to economic fundamentals, and to test how well previous economic studies stand up to revisions in the data.

2 Frank T. Denton and John Kuiper, “The Effect of Measurement Errors on Parameter Estimates and Forecasts: A Case Study Based on the Canadian Preliminary National Accounts,” Review of Economics and Statistics (May 1965), pp. 198-206.
DATA REVISIONS

A logical precursor to any study of the effect of data revisions on economic analysis is to ask: What is the nature of such revisions? Are the revisions big or small? Are they predictable? And how does the data revision process compare across different countries? Jon Faust, of the Federal Reserve Board, presented a paper that shed some light on these issues. Faust and his co-authors John H. Rogers and Jonathan Wright use the Organization for Economic Cooperation and Development’s Main Economic Indicators to assemble a data set of preliminary announcements of real GDP growth in the seven largest industrial countries. They define a revision as the difference between “final” real GDP growth, as measured in 1999’s data, and the preliminary announcement. The study’s sample begins in 1965:Q1 for the United States, Canada, and the United Kingdom, 1970:Q1 for Japan, 1979:Q4 for Italy and Germany, and 1987:Q4 for France and ends in 1997:Q4.

Faust, Rogers, and Wright report that the root-mean-square error of revisions is large for all countries. Indeed, their data indicate that over the full sample “the final annualized growth rate is more than a percentage point different from the preliminary at least half the time in these data.” This is an important finding because it suggests that data revisions have the potential to change the way economists view the state of the economy, when that view is based on data that have been revised many times — a theme that some of the other conference papers expanded on.

But perhaps the most surprising finding is the degree to which these large data revisions are predictable, in some countries, on the basis of data available at the time of the preliminary announcement. In an initial analysis, the authors found that the preliminary announcement itself explained more than 40 percent of the variation in data revisions in Italy, Japan, and the United Kingdom. This result is notable because it suggests that the statistical agencies in those countries may not be using information efficiently when they construct their preliminary estimates. However, some agencies may be better than others in processing information: The study concludes that there’s some evidence of predictability of revisions in Canada, France, Germany, and the U.S., but “the measured degree of predictability is rather modest.”

In the conference’s second paper on data revisions, Karen E. Dynan, of the Federal Reserve Board, presented very detailed evidence on the behavior of data revisions in the United States. A particularly timely analysis given the recent performance of the U.S. economy, Dynan’s paper, coauthored with Douglas W. Elmendorf, uses the Philadelphia Fed’s real-time data set to study whether the provisional estimates of the Bureau of Economic Analysis (BEA) are susceptible to revision around cyclical turning points. Dynan began her talk by discussing the timing of the BEA’s data releases for the national income and product accounts, noting that early releases are based on incomplete source data and, consequently, incorporate the BEA’s “judgmental assumptions and trends.” The authors posit that the BEA’s use of extrapolations to estimate missing source data might yield provisional estimates that are too optimistic at cyclical peaks and too pessimistic at troughs.

The Fed researchers began their investigation by examining revisions to real output growth around peaks and troughs, as defined by the National Bureau of Economic Research (NBER). However, they quickly discovered that their ability to pin down precise estimates of the behavior of revisions at turning points was hindered by the small number of business cycles in the U.S. data.
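The headline statistic in the Faust, Rogers, and Wright paper, the root-mean-square error of revisions, is straightforward to compute. The sketch below uses made-up preliminary and final growth rates, not the OECD data; it only shows the two calculations behind the quoted findings (the RMSE of revisions and the share of quarters revised by more than a percentage point).

```python
import math

# Hypothetical preliminary and "final" annualized real GDP growth rates
# for one country; illustrative numbers only, not the OECD Main Economic
# Indicators data used by Faust, Rogers, and Wright.
preliminary = [2.0, 3.5, -0.5, 4.1, 1.2]
final       = [3.4, 2.1,  0.8, 4.0, 2.6]

# A revision is defined as final minus preliminary.
revisions = [f - p for p, f in zip(preliminary, final)]

# Root-mean-square error of the revisions.
rmse = math.sqrt(sum(r * r for r in revisions) / len(revisions))

# Share of quarters where final and preliminary differ by over 1 point.
share_large = sum(abs(r) > 1.0 for r in revisions) / len(revisions)
```

With these invented numbers the RMSE is about 1.2 percentage points and four of five quarters are revised by more than a point, which is the flavor of the finding the authors report for actual G-7 data.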
Noting that their basic theory also suggests provisional estimates should be particularly prone to revisions during periods of accelerating or decelerating growth, Dynan and Elmendorf investigated the relationship between revisions to provisional estimates of growth and changes in the rate of growth, the latter measured in the data available in the second quarter of 2000. Their statistical analysis indicates that the BEA’s provisional estimates do not fully capture accelerations and decelerations in growth, suggesting “some tendency to miss economic turning points.”

Discussant David DeJong, of the University of Pittsburgh, noted that there are many ways to define a data revision, depending on the vintage of data taken to represent the revised value, and questioned the emphasis both papers placed on using the most current data for that purpose. In particular, DeJong suggested that policymakers, forecasters, and other economic decision-makers might be more interested in the properties of data revisions constructed on the basis of revised values that are released at a date closer to the date of the preliminary value.

FORECASTING

Data revisions can present particularly thorny problems for econometric model builders and forecasters. Recent research suggests that failure to account for data revisions when building a model can often result in suboptimal specification decisions. And revisions to a model’s initial values can often change that model’s forecasts. Two papers at the conference discussed these issues.

Evan Koenig, of the Federal Reserve Bank of Dallas, Sheila Dolmas, and Jeremy Piger, of the Federal Reserve Bank of St. Louis, present theoretical and empirical evidence on a novel way to use the observations of a real-time data set to produce highly accurate short-run forecasts for the growth rate of U.S. real output.
Koenig first noted the theoretical implications for forecast accuracy of assuming that the revisions to a forecasting equation’s dependent and independent variables are unforecastable. In such a case, Koenig noted, forecast accuracy improves when an analyst estimates his model using preliminary observations on the dependent variable and values for the right-hand-side variables measured at the same time the dependent variable is measured. In other words, Koenig and his co-authors find that forecast accuracy is enhanced when an analyst estimates his model using as many vintages of data as there are observations in the sample. That result is striking because it stands at odds with the practice of professional forecasters, who estimate their models on the basis of the latest available observations, not the preliminary observations.

The authors test their theoretical results using the data by building a small-scale forecasting model for predicting within-quarter real output growth. The model relates the growth rate of real output to the growth rates of monthly industrial production, real retail sales, and nonfarm payroll employment. The authors find confirming evidence that their novel way of using real-time observations to estimate a model yields gains in forecast accuracy — as suggested by their theoretical results — compared with how professional forecasters estimate their models. Though some questions may remain about how well this result holds up with alternative sample periods, models, and variables, Koenig, Dolmas, and Piger’s analysis has the potential to change the way economists implement estimation and forecasting methods — and the manner in which economists collect their observations.
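A stripped-down illustration of the estimation idea: fit the same forecasting equation once on first-release data, with each observation taken from the vintage current at the time, and once on today's revised series. Everything here is hypothetical, including the data and the reduction to a single regressor; the authors' actual model uses three monthly indicators and proper econometric machinery.

```python
# Contrast of two ways to estimate a forecasting equation, in the spirit of
# Koenig, Dolmas, and Piger: fit on first-release ("real-time") observations
# rather than on today's fully revised series. All numbers are made up.

def ols_line(x, y):
    """Ordinary least squares for y = a + b*x with one regressor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Dependent variable: within-quarter output growth; regressor: a monthly
# indicator. "first_release" pairs each quarter's preliminary y with the
# regressor as published at that time; "latest" uses today's revised data.
first_release_x = [0.5, 1.1, -0.2, 0.9]
first_release_y = [1.8, 3.0, 0.4, 2.6]
latest_x = [0.7, 0.9, 0.1, 1.2]
latest_y = [2.2, 2.7, 1.0, 3.1]

a_rt, b_rt = ols_line(first_release_x, first_release_y)  # real-time fit
a_lv, b_lv = ols_line(latest_x, latest_y)                # latest-vintage fit

# To forecast the (not-yet-revised) first release of next quarter's growth,
# the paper's logic favors the coefficients estimated on first releases.
x_now = 0.8  # current-quarter indicator reading, as published today
forecast = a_rt + b_rt * x_now
```

The two fits generally yield different coefficients, which is exactly why the choice of vintages matters for out-of-sample accuracy.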
Athanasios Orphanides, of the Federal Reserve Board, and Simon van Norden, of Ecole des Hautes Etudes Commerciales, Montreal, and CIRANO, study the effect of data revisions on measures of the output gap and the reliability of inflation forecasts that are based on those measures. The study uses the Philadelphia Fed’s real-time data set to construct 12 alternative measures of the output gap, finding that (almost) all of these measures appear to be related to future rates of inflation when the analysis is conducted in-sample. That result is reassuring because many theoretical models of the economy predict such a relationship. However, when the analysis is extended to an out-of-sample setting, using real-time estimates of the output gap measures, the study finds virtually no evidence that any measure of the output gap helps to predict inflation. Orphanides and van Norden conclude that their results “bring into question the practical usefulness of output-gap-based Phillips curves for forecasting inflation and the monetary policy process.” The results also demonstrate rather nicely the pitfalls associated with any model specification process that ignores the presence of data revisions.

Sharon Kozicki, of the Federal Reserve Bank of Kansas City, discussed both forecasting papers. In commenting on the Koenig, Dolmas, and Piger paper, Kozicki questioned whether the paper’s results would hold in all forecasting situations. Regarding Orphanides and van Norden’s analysis, Kozicki wondered how closely the paper’s simulated real-time forecasts would match actual real-time forecasts. In particular, Kozicki noted that many of the paper’s specification decisions might not have been made in real time.
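The contrast between final-data and real-time forecasting can be mimicked in a few lines. Below, the same simple gap-based rule is fed final-data gap estimates and then gap estimates as they might have looked in real time; with these invented numbers the real-time forecasts come out worse, which is the flavor of Orphanides and van Norden's finding. The coefficient and every series are hypothetical, and the rule is a deliberately crude stand-in for a Phillips curve.

```python
# Schematic of the final-data vs. real-time forecasting comparison. The
# inflation series, both gap series, and the slope beta are all invented.
inflation = [2.0, 2.3, 2.1, 2.8, 3.0, 2.6]          # realized inflation
gap_final = [-0.5, 0.2, -0.1, 0.9, 1.1, 0.3]        # gap in today's data
gap_real_time = [-1.2, -0.4, -0.8, 0.1, 0.4, -0.2]  # gap as seen at the time

beta = 0.5  # assumed Phillips-curve-style slope, fixed for the sketch

def forecast_errors(gaps):
    # Predict inflation at t+1 as inflation at t plus beta times the gap.
    preds = [inflation[t] + beta * gaps[t] for t in range(len(gaps) - 1)]
    return [inflation[t + 1] - p for t, p in enumerate(preds)]

def rmse(errors):
    return (sum(e * e for e in errors) / len(errors)) ** 0.5

# Real-time gap estimates tend to differ a lot from the final ones, so the
# same rule forecasts worse when fed only real-time information.
rmse_final = rmse(forecast_errors(gap_final))
rmse_real_time = rmse(forecast_errors(gap_real_time))
```

The point of the exercise is that an in-sample fit on final data overstates what a forecaster standing at time t, with only the time-t gap estimate, could actually have achieved.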
Kozicki also noted that none of the paper’s proposed measures of the output gap were formed on the basis of the production-function measures of potential output that were sometimes used in the past, and she argued for “real-time econometric techniques — not just real-time data.”

POLICY ANALYSIS

In recent years, there has been an explosion of interest in estimating how the Fed reacts to changes in the economy — estimates such as the well-known Taylor rule, which relates the federal funds rate to the rate of inflation and the output gap — and evaluating the stabilization properties of such rules. However, much of that work assumes, either explicitly or implicitly, that real-time data issues are not very important and that Fed policy can be adequately described as depending on just a few variables. Two conference papers questioned these assumptions.

Business Review Q1 2002

Ben S. Bernanke, of Princeton University, and Jean Boivin, of Columbia University, analyze past monetary policy decisions within a statistical framework that permits policymakers to possess extremely large information sets. Boivin noted that Fed policymakers have a reputation for looking at a large set of variables in setting monetary policy — that is, Fed policymakers appear to operate within a “data-rich environment.” That stands in contrast to the approach taken in traditional empirical analyses of the Fed’s behavior, which, for statistical reasons, usually assume the Fed’s information set consists of just a few variables. Bernanke and Boivin overcome the statistical difficulties associated with large data sets by using a dynamic factor model to summarize the information contained in each of several different data sets, the largest of which contains 215 variables.
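For readers unfamiliar with it, the Taylor rule mentioned above can be written, with Taylor’s original 1993 coefficients, as: funds rate = 2 + inflation + 0.5 × (inflation − 2) + 0.5 × output gap, with inflation and the gap measured in percent. The sketch below uses those textbook coefficients; the function name and defaults are ours, and estimated rules such as those examined at the conference use different, estimated coefficients.

```python
def taylor_rate(inflation, output_gap, r_star=2.0, pi_star=2.0,
                a_pi=0.5, a_y=0.5):
    """Federal funds rate prescribed by a Taylor-type rule.

    inflation and output_gap are in percent; defaults follow Taylor's
    original 1993 parameterization (2 percent equilibrium real rate,
    2 percent inflation target, 0.5 weight on each gap).
    """
    return r_star + inflation + a_pi * (inflation - pi_star) + a_y * output_gap

# At 2 percent inflation and a closed output gap, the rule prescribes
# the 4 percent "neutral" nominal rate
print(taylor_rate(2.0, 0.0))  # 4.0
```

Because the output gap is unobserved and heavily revised, the real-time values fed into such a rule can differ sharply from the revised values used in later studies — which is precisely the issue the papers in this section take up.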
The authors find: (1) the choice between real-time data and current data is not as important for forecast accuracy as conditioning the forecasts on a large number of variables; and (2) Federal Reserve Greenbook forecasts could have been made more accurate by using factor-model methods. These results are interesting because they suggest that policymakers who make decisions on the basis of forecasts might make better decisions if those forecasts reflect the information from a very large set of variables.

In an analysis of the Fed’s monetary policy decisions, Bernanke and Boivin show how to use factor-model methods to obtain estimates of policy feedback parameters when the policymaker uses a large information set. They also show how to test for the limited-information-set restrictions embedded in Taylor-type policy rules. These results constitute important breakthroughs in the analysis of policy rules because traditional analyses do not permit the policymaker to use large information sets and may thus mismeasure the magnitudes of feedback parameters. The Bernanke and Boivin methodology may also lead to improved estimates of monetary policy shocks, permitting economists to better understand important features of the economy.

Yash Mehra, of the Federal Reserve Bank of Richmond, examines the ability of the Taylor rule to describe Fed policy over two periods: 1968:Q1 to 1979:Q2 and 1979:Q3 to 1987:Q4, corresponding to periods in which U.S. inflation accelerated and decelerated, respectively. Although the Taylor rule has been the subject of extensive investigation, Mehra finds the existing literature lacking in several important respects. First, some analyses are constructed on the basis of feedback parameters not estimated on real-time data. Second, some analyses rely on predictions from the Taylor rule that are not conditioned on the (real-time) observations that policymakers would have known when their decisions were made.
Third, some analyses rely on questionable real-time estimates of the output gap. Mehra uses the Philadelphia Fed’s real-time data set for constructing improved (real-time) estimates of the output gap and for estimating and forecasting the Taylor rule. On this basis, he finds: (1) in the 1960s and 1970s, monetary policy, as measured by the Taylor rule, responded to rising inflation in a far “too timid” fashion, a result not found in some previous studies; and (2) the speed with which monetary policy adjusts to changes in fundamentals, as given in the Taylor rule, is much higher than estimated in previous studies. Mehra attributes these differences to his use of real-time data.

Discussant Athanasios Orphanides, of the Federal Reserve Board, suggested that an understanding of past policy decisions is vital for identifying periods in which monetary policy may have erred. Such knowledge, Orphanides argued, is key for improving future policy decisions. Toward that end, Orphanides suggested several avenues for future research on monetary policy rules, including the proper concept of the output gap, the appropriate measure of inflation, the functional form, and whether the rule should be forward or backward looking. Orphanides also suggested that researchers could gain valuable insights into past monetary policy decisions by studying the historical transcripts of FOMC meetings.

FINANCIAL RESEARCH

Perhaps no field of study in economics is potentially as sensitive to the choice between real-time data and revised data as financial economics. Financial economists have a long history of studying how macroeconomic news announcements affect asset prices. However, to date, much of that research has rested on measures of announcements taken from revised data. But because the revised observations are available only well after the fact, there is reason to view the results of such studies with some skepticism.
Two papers at the conference reported on how financial asset prices are affected by news on macroeconomic variables, such as prices and output, when those variables are measured in real time.

Peter Christoffersen, of McGill University and CIRANO, Eric Ghysels, of the University of North Carolina and CIRANO, and Norman R. Swanson, of Purdue University, use real-time and revised data from the Philadelphia Fed’s data set and apply Chen, Roll, and Ross’s 1986 methodology to study whether macroeconomic risks are rewarded in the stock market.3 Christoffersen et al. follow Chen, Roll, and Ross in measuring risk on the basis of the covariance between an equity portfolio’s return and the unanticipated component of macroeconomic news announcements (for real output, inflation, and credit risk), but they diverge from that methodology in considering alternative ways to measure news. As in the Chen, Roll, and Ross study, as well as many others, they measure news using revised values of macroeconomic data. However, Christoffersen et al. theorize that measuring news in that way carries the potential for “serious mismeasurement of macroeconomic news.” So, they also measure the news content of macroeconomic data releases using unrevised (real-time) data. The researchers also consider two alternative measures of expectations for constructing the unanticipated component of macroeconomic news releases, one based on constant expectations and the other on expectations given by an autoregressive process.

The study finds important differences in the estimated return to macroeconomic risks when the risks are estimated using revised data and when they are measured using unrevised data. For example, when the news value of macroeconomic releases depends on revised data and constant expectations, the authors estimate that the financial markets do not price real output risks. However, that finding is reversed when real-time data are used. Another important finding is that the measure of expectations — fixed or autoregressive — plays an important role in estimating how markets price risk. In summarizing their results, Christoffersen, Ghysels, and Swanson conclude that “real-time macroeconomic data should not be overlooked when carrying out a variety of empirical analyses for which the timing and availability of macroeconomic information may matter.”

3 For more information on this methodology, see the article by Nai-Fu Chen, Richard Roll, and Stephen A. Ross, “Economic Forces and the Stock Market,” Journal of Business 59 (July 1986), pp. 383-403.

Frank Diebold, of the University of Pennsylvania and the NBER, presented some findings on the link between high-frequency exchange-rate movements and economic fundamentals, a topic of considerable importance, since some research suggests little link between the two. Diebold and co-authors Torben G. Andersen, of Northwestern University and the NBER, Tim Bollerslev, of Duke University and the NBER, and Clara Vega, of the University of Pennsylvania, construct an extensive data set on U.S. dollar spot exchange rates and macroeconomic news announcements to study how exchange rates respond to new information. The data set consists of nearly 500,000 observations on continuously recorded five-minute exchange-rate returns for the U.S. dollar exchange rates for the mark, pound, yen, Swiss franc, and the euro over the period January 3, 1992, to December 30, 1998. This novel data set also contains a rather extensive set of “news” measures, defined as the standardized difference between an announcement and market expectations for the announcement, collected from the International Money Market Services’ real-time data set.
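The news measures just described are, in effect, raw announcement surprises scaled to a common unit, so that responses to indicators with different units (jobs, percentage points, dollars) can be compared. A minimal sketch of that standardization — the payroll numbers below are made up for illustration, and the function name is ours, not part of the MMS data set:

```python
import statistics

def standardized_surprises(announced, expected):
    """Scale raw surprises (announced minus expected) by their sample
    standard deviation, yielding unit-free news measures."""
    raw = [a - e for a, e in zip(announced, expected)]
    sigma = statistics.pstdev(raw)  # population std. dev. of raw surprises
    return [r / sigma for r in raw]

# Hypothetical nonfarm payroll releases vs. survey expectations
# (thousands of jobs); surprises are +30, -30, +90
news = standardized_surprises([150, 90, 210], [120, 120, 120])
print(news)
```

The scaling factor affects only the units of the estimated response coefficients, not their statistical significance, which is why this kind of standardization is a harmless but convenient convention in announcement studies.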
These news measures are for U.S. and German data releases and cover variables such as employment, retail sales, industrial production, and consumer prices. The data set includes 40 such measures. The researchers specify a statistical model to capture the conditional mean and conditional variance dynamics of exchange rates in response to macroeconomic news — though the primary focus is on understanding conditional mean dynamics. The paper’s most important finding is that U.S. dollar exchange rates respond quickly and significantly to U.S. news announcements. That result is important because it suggests that “high-frequency exchange-rate dynamics are linked to fundamentals,” a result that many existing studies failed to find. Interestingly, the study finds much more limited evidence that German news announcements affect the exchange rate, a result the authors attribute to differences in the extent to which exact release times are known in the respective countries. The study also finds evidence indicating that news announcements have timing, size, and sign effects on exchange rates.

Mark Watson, of Princeton University, discussed both papers. He suggested that Christoffersen et al. should consider how their estimates of the market’s valuation of risk would be affected under alternative assumptions about the relationship between real-time and revised data. In particular, Watson noted that under some assumptions, such estimates would be unaffected by the choice between real-time and revised data. Watson praised Andersen, Bollerslev, Diebold, and Vega’s paper and suggested that their future research might address exchange rates’ response to news leaks.

MACROECONOMIC RESEARCH

Dean Croushore and Tom Stark, of the Federal Reserve Bank of Philadelphia, present evidence on the extent to which key studies in empirical macroeconomics hold up under revisions in the data. However, in contrast to most other papers at the conference, in which the focus was on revisions to provisional observations, Croushore and Stark emphasize the process of revisions in going from one benchmark revision — or “vintage” — to another. That distinction is important: revisions to provisional estimates mainly reflect new source data, while benchmark revisions can reflect redefinitions, changes in base years, and changes in weighting techniques, features not usually accounted for in theoretical models of the economy.

Using spectral techniques to study differences in the quarterly growth of variables in the national income and product accounts, the authors find it hard to characterize the benchmark-revision process. In some cases, prominent differences occur at business-cycle frequencies; in other cases, differences show up at seasonal frequencies. One notable result is that benchmark revisions to the level of variables in the national income and product accounts appear to follow “the typical spectral shape of macroeconomic data,” characterized by high power at low frequencies. On the basis of these results, the authors argue that it is worthwhile to check whether the conclusions of some key studies in macroeconomics are sensitive to benchmark revisions.

For each study examined, Croushore and Stark replicate the original results — using a vintage of data from the Philadelphia Fed’s real-time data set that is closest to the vintage used in the original study. Then, they test how well their results hold up using different vintages of data. The authors find that some results are sensitive to data revisions and others are not.
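The “typical spectral shape” the authors refer to — power concentrated at the lowest frequencies — is easy to see with a simple periodogram. The sketch below uses a simulated persistent series as a stand-in, not the authors’ revision data, and the band cutoffs (0.1 and 0.4 cycles per period) are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 512

# Simulate a persistent AR(1) series as a stand-in for a series with
# the "typical spectral shape" of macroeconomic data
x = np.zeros(T)
eps = rng.normal(size=T)
for t in range(1, T):
    x[t] = 0.9 * x[t - 1] + eps[t]

# Periodogram: squared magnitude of the discrete Fourier transform
x = x - x.mean()
power = np.abs(np.fft.rfft(x)) ** 2 / T
freqs = np.fft.rfftfreq(T)  # frequencies in cycles per period, 0 to 0.5

# Average power in the lowest and highest frequency bands
low = power[(freqs > 0) & (freqs < 0.1)].mean()
high = power[freqs > 0.4].mean()
print(low > high)
```

For a persistent series the low-frequency band dominates by a wide margin; a benchmark-revision series with this spectral shape implies that revisions are themselves persistent, which is what makes level results potentially sensitive to the choice of vintage.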
For example, the results of Kydland and Prescott’s 1990 study of key correlations among macroeconomic variables remain intact when tested on additional vintages. However, the conclusions of Robert Hall’s 1978 study on consumption behavior appear quite sensitive to data revisions. Croushore and Stark also note some sensitivity of Blanchard and Quah’s 1989 structural vector autoregression (VAR) results when the model is estimated on alternative vintages of data, a finding that the Fed researchers trace to the estimation technique used in structural VARs.4

Discussant Ken West, of the University of Wisconsin, opined that real-time data have many important applications, including forecasting and modeling the behavior of economic decision-makers, such as monetary policymakers, whose actions depend on provisional data releases. However, West expressed concern about applying real-time data in more general settings in which the actions of decision-makers may not hinge so crucially on provisional data releases.

SUMMARY

The increased availability of real-time data has stimulated renewed interest in the problems associated with data revisions and the potential benefits of using real-time data in empirical studies. The papers presented at the Philadelphia Fed’s October conference highlighted many of the important problems and illustrated how real-time data can be used to gain an improved understanding of economic relationships. If the many striking findings reported at the conference are any indication, real-time data analysis is here to stay. BR

4 For more information on the studies mentioned in this paragraph, see the articles by Finn E. Kydland and Edward C. Prescott, “Business Cycles: Real Facts and a Monetary Myth,” Federal Reserve Bank of Minneapolis Quarterly Review, Spring 1990; Robert E. Hall, “Stochastic Implications of the Life Cycle-Permanent Income Hypothesis: Theory and Evidence,” Journal of Political Economy 86 (December 1978), pp. 971-87; and Olivier Jean Blanchard and Danny Quah, “The Dynamic Effects of Aggregate Demand and Supply Disturbances,” American Economic Review 79 (September 1989), pp. 655-73.

CONFERENCE PAPERS

Andersen, Torben G., Tim Bollerslev, Francis X. Diebold, and Clara Vega. “Micro Effects of Macro Announcements: Real-Time Price Discovery in Foreign Exchange,” manuscript, September 2001.

Bernanke, Ben S., and Jean Boivin. “Monetary Policy in a Data-Rich Environment,” manuscript, October 2000.

Christoffersen, Peter, Eric Ghysels, and Norman R. Swanson. “Let’s Get ‘Real’ About Using Economic Data,” manuscript, June 2001.

Croushore, Dean, and Tom Stark. “A Real-Time Data Set for Macroeconomists: Does the Data Vintage Matter?” Federal Reserve Bank of Philadelphia Working Paper 99-21, December 1999.

Dynan, Karen E., and Douglas W. Elmendorf. “Do Provisional Estimates of Output Miss Economic Turning Points?” manuscript, Federal Reserve Board, September 2001.

Faust, Jon, John H. Rogers, and Jonathan Wright. “News and Noise in G-7 GDP Announcements,” manuscript, Federal Reserve Board, August 2001.

Koenig, Evan F., Sheila Dolmas, and Jeremy Piger. “The Use and Abuse of ‘Real-Time’ Data in Economic Forecasting,” manuscript, August 2001.

Mehra, Yash. “The Taylor Principle, Interest Rate Smoothing and Fed Policy in the 1970s and 1980s,” Federal Reserve Bank of Richmond Working Paper 01-05, August 2001.

Orphanides, Athanasios, and Simon van Norden. “The Reliability of Inflation Forecasts Based on Output Gap Estimates in Real Time,” manuscript, September 2001.

These papers are available on our web site at www.phil.frb.org/econ/conf/rtdaconf.html.

The Changing Faces of the Third District: A Snapshot of the Region from the 2000 Census

BY THEODORE M. CRONE

Last year, the government began to release data from the 2000 census.
Thus far, several patterns have emerged about the changing demographics of the Third District states — Pennsylvania, New Jersey, and Delaware. In this article, Ted Crone describes some of these patterns and tells us what they mean for economic growth in this region.

Every 10 years the national census provides a profile of the American people — who we are and where we live. The initial data from the 2000 census were released in March 2001, and additional details will be released through 2003. From the data released so far, several patterns have emerged about the changing demographics of the three states in the Third District — Pennsylvania, New Jersey, and Delaware. Growth rates varied widely across the states. And the movement of people into the region from other states and from abroad significantly increased the ethnic and racial diversity of many areas in the tri-state region. Migration, birth rates, and natural aging also altered the age distribution of the region’s population. For example, the young working-age population declined in both the nation and the region. As this cohort moves through its working years, its lower numbers will limit the natural growth of the prime working-age population (25-54) over the next decade. This will translate into slower growth of the labor force and employment. Ultimately, it will mean slower growth in gross domestic product (GDP), since GDP growth is a combination of employment growth and productivity growth. Thus, the 2000 census not only gives us a record of population and demographic changes over the past 10 years; it also provides a glimpse of changes to come over the next decade.

GROWTH RATES VARIED WIDELY ACROSS REGION

At the state level, population growth ranged from above average in Delaware, the 13th fastest growing state, to well below average in Pennsylvania, the third slowest growing state.1 New Jersey’s population increased somewhat less than the national average, and the state ranked 32nd in population growth (Table 1, see next page). The differences in population growth among the three states reflect differences in the three components of growth — natural increase (births minus deaths), net domestic migration, and net international migration.2 The 2000 census provides no direct measure of the components of state and local population growth, but the Census Bureau estimates the components of change between census years.3 And there were sharp differences in the relative importance of these components across the three states.

1 Pennsylvania added more than three times the number of people as Delaware, but Pennsylvania is a much larger state. However, some states that were less than one-fourth the size of Pennsylvania in 1990 (Nevada, Oregon, and Utah) added more residents than Pennsylvania.

2 This decomposition of population change is simply an accounting identity. Net domestic migration is the number of people who move into the state or locality from other parts of the U.S. minus those who move out of the area to other places in the U.S. Net international migration is the number of people who move into the state or locality from another country minus the number who move out of the area to some other country. For these calculations the Census Bureau considers movement to and from Puerto Rico international migration.

3 In the years between the decennial censuses the Bureau uses these estimates of the components of growth to derive estimates of total population in states and counties. The sources for data on births and deaths are the state and county records on vital statistics; domestic migration is estimated through address matching of federal tax returns; and data on international migration come from the Immigration and Naturalization Service. (continued on next page)

Ted Crone is vice president in charge of the urban/regional section of the Philadelphia Fed’s Research Department.

TABLE 1
Population Growth 1990-2000 (Percent)

Rank  State            Growth
  1   Nevada            66.3
  2   Arizona           40.0
  3   Colorado          30.6
  4   Utah              29.6
  5   Idaho             28.5
  6   Georgia           26.4
  7   Florida           23.5
  8   Texas             22.8
  9   North Carolina    21.4
 10   Washington        21.1
 11   Oregon            20.4
 12   New Mexico        20.1
 13   Delaware          17.6
 14   Tennessee         16.7
 15   South Carolina    15.1
 16   Virginia          14.4
 17   Alaska            14.0
 18   California        13.8
 19   Arkansas          13.7
      United States     13.2
 20   Montana           12.9
 21   Minnesota         12.4
 22   New Hampshire     11.4
 23   Maryland          10.8
 24   Mississippi       10.5
 25   Alabama           10.1
 26   Indiana            9.7
 27   Kentucky           9.7
 28   Oklahoma           9.7
 29   Wisconsin          9.6
 30   Hawaii             9.3
 31   Missouri           9.3
 32   New Jersey         8.9
 33   Wyoming            8.9
 34   Illinois           8.6
 35   Kansas             8.5
 36   South Dakota       8.5
 37   Nebraska           8.4
 38   Vermont            8.2
 39   Michigan           6.9
 40   Louisiana          5.9
 41   Massachusetts      5.5
 42   New York           5.5
 43   Iowa               5.4
 44   Ohio               4.7
 45   Rhode Island       4.5
 46   Maine              3.8
 47   Connecticut        3.6
 48   Pennsylvania       3.4
 49   West Virginia      0.8
 50   North Dakota       0.5

Delaware is the only state in the Third District in which more people moved in from other states than moved out to other states. According to the 1999 estimates, Delaware’s population increased more than 5 percent in the 1990s because of net domestic migration (Figure 1). Many of these in-migrants in the 1990s probably came from Pennsylvania, since in 1990 more than one-fourth of Delaware residents born in other states were born in Pennsylvania. Both Pennsylvania and New Jersey lost population because of domestic migration.
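The accounting identity described in footnote 2 — population change equals natural increase plus net domestic migration plus net international migration — can be written out directly. The function and the numbers below are purely illustrative, not Census Bureau estimates:

```python
def population_change(births, deaths, dom_in, dom_out, intl_in, intl_out):
    """Decompose population change into its three components.

    Returns (natural_increase, net_domestic, net_international, total).
    This is an accounting identity, not an estimate: the three
    components sum to the total change by construction.
    """
    natural = births - deaths
    net_domestic = dom_in - dom_out
    net_international = intl_in - intl_out
    total = natural + net_domestic + net_international
    return natural, net_domestic, net_international, total

# Hypothetical state over a decade (thousands of people)
nat, dom, intl, total = population_change(
    births=1500, deaths=1300, dom_in=800, dom_out=1050,
    intl_in=300, intl_out=50)
print(nat, dom, intl, total)  # 200 -250 250 200
```

The hypothetical state above mirrors New Jersey’s qualitative pattern: a domestic-migration loss roughly offset by international in-migration, leaving natural increase to drive the total.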
The Census Bureau estimated that between 1990 and 1999 Pennsylvania lost more than 2 percent of its population because of migration within the U.S., and New Jersey lost about 5 percent. International in-migration compensated for New Jersey’s loss to other states, but in Pennsylvania international migration had little effect on population growth. Pennsylvania’s growth also suffered from a low natural rate of increase. In the 1990s the birth rate in Pennsylvania was about 18 percent lower than the national average and deaths per 1000 were about 16 percent higher than average. Both these statistics are driven by the fact that Pennsylvania’s population is older than the nation’s in terms of both median age and percent of the population 65 and older.

Region’s Growth: Concentrated in Delaware, New Jersey, and Southeastern Quadrant of Pennsylvania. Every county in Delaware grew as fast as or faster than the national average in the 1990s. A few counties in New Jersey also matched or exceeded the national growth rate, and most counties in New Jersey grew more than 6 percent; only Salem County, in the southern part of the state, lost population (Figure 2).

3 (continued from previous page) The decennial census provides no direct measure of these components because there is no count from the census of how many people moved out of a state to another country and no count of how many people moved in from other states or out to other states between census years.

FIGURE 1
Estimated Change in State Population 1990–1999, By Components of Change* (percent)
[Bar chart showing natural increase, net domestic migration, and net international migration for PA, NJ, and DE; individual bar values are not fully recoverable from the text.]
* These components of change will not sum to the change in the Census count from 1990 to 2000 because they do not include the final year and the Census underestimated the population growth for the U.S. and the three states in the Third District.
County growth in Pennsylvania ranged from an increase of more than 65 percent in sparsely populated Pike County in the northeastern corner of the state to a loss of more than 6 percent in Cambria County in the Johnstown metro area. Of the 67 counties in Pennsylvania, 19 lost population in the 1990s; most of them were in the western and northeastern parts of the state. More than half the counties with population losses were in the state’s 14 metropolitan areas. In Pennsylvania the population increased more slowly in the metro areas than in the nonmetro areas, a reversal of the national pattern in which metro areas grew slightly more rapidly than nonmetro areas.4 Only 25 of the nation’s 331 metro areas lost population in the 1990s, and five of them were in Pennsylvania.5 Even in the nine metro areas in Pennsylvania that had population increases in the 1990s, the central cities in all but Allentown, Lancaster, and Reading lost residents.6 Two counties in the Philadelphia metro area lost population (Philadelphia County in Pennsylvania and Salem County in New Jersey), but the metro area as a whole grew, albeit slowly (3.6 percent). The importance of the Philadelphia metro area for the tri-state region is difficult to overstate. It contains almost one-quarter of the population of the three states and more than 30 percent of Pennsylvania’s population. Philadelphia remains the fourth largest metropolitan area in the nation, but it grew more slowly than any of the other 10 largest metro areas. (See Population Changes in the Philadelphia Metro Area: 1990–2000, page 20.)

4 The population of Pennsylvania’s metro areas increased 3.1 percent compared with 5.1 percent for nonmetro areas. The one nonmetro county in Delaware (Sussex) also grew faster than the other two counties in the state. There are no nonmetro counties in New Jersey.

5 The 331 metro areas include all the metropolitan statistical areas and primary metropolitan statistical areas. In neighboring New York, six metro areas lost population, and in Ohio, three metro areas lost population.

6 In New Jersey the central cities of Newark and Trenton also lost population.

FIGURE 2
County Population Growth 1990–2000
[County-level map; values not recoverable from the text.]

FIGURE 3
Increase in Population from International In-Migration 1990–2000*
[Map; values not recoverable from the text.]
* These percentages represent the growth in population due to foreign in-migrants who arrived in the U.S. in the 1990s and were still here in 2000. These do not include the foreign born who live in institutions, college dormitories, or other group quarters. The data do not reflect the net effect of international immigration because they do not include those who move from the U.S. to other countries.

IMMIGRATION PLAYED IMPORTANT ROLE IN VARIATION OF LOCAL GROWTH RATES

The census count for the U.S. in 2000 was higher than expected, in part because international in-migration was higher than estimated in the years between censuses. Immigrants accounted for an increase of 5.4 percent in the nation’s population in the 1990s.7 The robust U.S. economy in the 1990s, which produced some of the lowest unemployment rates in 30 years, was a magnet for foreign immigrants. Differences in wage rates and unemployment rates between countries are major factors in international immigration.8 Moreover, when they come to the United States, immigrants tend to settle in those metropolitan areas that already have a high proportion of foreign-born residents. Economic factors play a role in this decision as well. Connections to family, friends, and previous immigrants from their home country tend to lower the cost of immigrating and increase the probability of success for new immigrants.9

7 These data are based on the 12 monthly census samples during 2000 and do not include the foreign-born population living in institutions, college dormitories, or other group quarters. Also, those born in Puerto Rico or U.S. island areas are not considered international immigrants in these data. These percentages do not represent the net effect of international migration on the population of the nation or the individual states because some people emigrate from the U.S. to other countries, and they are not picked up in the census surveys. The percentages in Figure 3 represent the growth in population due to foreign immigrants who arrived in the U.S. in the 1990s and were still here in 2000.

8 See Douglas S. Massey, Joaquin Arango, Graeme Hugo, Ali Kouaouci, Adela Pellegrino, and J. Edward Taylor, “An Evaluation of International Migration Theory: The North American Case,” Population and Development Review, 20 (1994), pp. 699-751.

9 William H. Frey, “Immigration, Domestic Migration, and Demographic Balkanization in America: New Evidence for the 1990s,” Population and Development Review, 22 (1996), pp. 741-63.

Both the strength of the local economy and the presence or absence of a large foreign-born population help explain the pattern of foreign immigration in the tri-state region. International in-migrants boosted New Jersey’s population almost 8 percent. But they had a much more modest effect on population growth in Delaware and Pennsylvania10 (Figure 3). And almost all the international immigration in Pennsylvania was in the eastern part of the state.11 The influx of immigrants into New Jersey in the 1990s can be explained in part by the large number of foreign-born who were already in the state. New Jersey’s percentage of residents who are foreign-born is much higher than the U.S. average (Table 2). Pennsylvania and Delaware have much lower percentages of foreign-born residents than the U.S. average. Delaware’s exceptionally strong economy and low unemployment rates, however, attracted a large number of immigrants in the 1990s, and the foreign-born population almost doubled.12 In Pennsylvania the number of foreign-born increased only about one-third.
The state has a relatively small percentage of foreign-born residents, 10 Delaware’s high growth was fueled by domestic migration. According to the Census Bureau’s 1999 estimates, Delaware’s population grew more than 5 percent in the 1990s because of domestic migration. By 2000 more than 40 percent of the state’s residents were born in another state compared with less than 30 percent for the national average and for the state of New Jersey. Only about 16 percent of Pennsylvania’s residents were born in another state. TABLE 2 Percent of Population That Was Foreign Born US PA NJ DE 1990 2000* 8.0% 3.1% 12.5% 3.3% 10.9% 4.1% 17.4% 5.5% * The 2000 percentages are based on the 12 monthly Census samples in 2000 and do not include the foreign born living in institutions, college dormitories, and other group quarters. and it had a relatively slow-growing economy in the last decade. Immigration in the 1990s greatly increased the ethnic and racial diversity in the nation and in some parts of the tri-state region. Nationally, almost 80 percent of the foreign-born population is from Asia or Latin America. In New Jersey it is about 70 percent, and in Pennsylvania and Delaware, about 60 percent of foreignborn residents are from Asia or Latin America. These two groups continued to represent the majority of international immigrants in the 1990s. Nationwide more than 16 percent of the population is Asian or Hispanic.13 Asians and Hispanics also exceed 16 percent of the population in New Jersey as a whole and in nine of the state’s 21 counties. In six New Jersey counties the proportion of the population that is Asian or Hispanic is 20 percent or 11 The Census Bureau estimated in 1999 that net international migration increased population 1 percent or more in only five Pennsylvania metro areas in the 1990s (Philadelphia, Allentown, Lancaster, Reading, and State College). 12 The foreign-born population increased more than 50 percent in New Jersey and in the nation. 
higher.

[13] In the census Asian is a racial category and Hispanic is an ethnic category, but there is little or no overlap, and the proportion of the two groups combined is a good proxy for the diversity of the population due to immigration over the years.

Business Review Q1 2002

FIGURE 4
Proportion of Population That Is Asian or Hispanic

Among the three states in the region, Pennsylvania has the lowest proportion of residents who are either Asian or Hispanic (5 percent), but several counties in the eastern part of the state moved above the 5 percent or 10 percent levels in the 1990s (Figure 4). But with the exception of Centre County, which includes Penn State University, all the counties in the western half of the state and most in the northern part of the state have populations that remain less than 5 percent Asian or Hispanic. In the state of Delaware, New Castle and Sussex counties have passed the 5 percent level for residents who are either Asian or Hispanic. Most of the counties in the tri-state region that grew rapidly in the 1990s also became more racially and ethnically diverse, in part, through international immigration.

AGE DISTRIBUTION OF POPULATION CHANGED SIGNIFICANTLY IN REGION

The natural aging process along with the components of growth — births, deaths, and domestic and international migration — contributes to shifts in the age distribution of the population. In some parts of the tri-state region, these shifts had significant implications for the local economy. Nationwide, the share of the population under 18 increased slightly in the 1990s, and the share of those 65 and older declined slightly. But the most significant shift in the age distribution of the population was among the working-age population. The median age in the U.S. increased primarily because the older working-age population (45 to 64) increased more than 30 percent and the younger working-age population (20 to 34) declined more than 5 percent.
This shift in the age distribution is the result of the baby boomers, born between 1946 and 1964, and those born in the birth-dearth years in the 1970s moving through their life cycles.[14] These differences in growth rates among various age groups and changes in the age distribution of the population have important economic consequences.

[14] There were almost 4 million births per year in the U.S. between 1946 and 1964, the baby boom years, and only about 3.2 million births per year between 1972 and 1978, the birth-dearth years.

School-Age Population: Large Change Can Have Major Impact. In the nation and in all three states in the region the number of school-age children grew more rapidly than the general population in the 1990s[15] (Figure 5). But since primary and secondary education is a local government function, differences in growth rates for the school-age population at the county and school-district levels are more important than differences at the state level, and there was a wide dispersion across the counties in the three states. Changes in school-age population ranged from an increase of more than 100 percent (Pike County, Pennsylvania) to a decline of 15 percent (Cambria County, Pennsylvania). More than half the counties in western Pennsylvania and many in northern Pennsylvania had declines in their school-age populations (Figure 6). The Pennsylvania counties with increases of 10 percent or more were mostly in the southeastern and south-central parts of the state. Even Philadelphia County, which had a loss in total population of more than 4 percent, had an increase in school-age population of more than 8 percent, and a few Philadelphia suburban counties had increases greater than 25 percent. All of the counties in Delaware and most of the counties in New Jersey had school-age population growth of more than 10 percent, and several had increases greater than 25 percent.
Nationally, public education accounts for more than half of local government employment.[16] In Pennsylvania and New Jersey it accounts for 60 percent and in Delaware for more than 70 percent of local government employment.

[15] Because of the age breakdown of the population currently available from the 2000 census, we count those five to 17 years old as the school-age population. In fact, when the census is taken in April, most students in grades one through 12 are between six and 18 years old.

[16] This does not include state employees involved in education.

FIGURE 5
Growth of General Population and School-Age Population, 1990–2000*
* Because of the age breakdown available from the 2000 Census, we count those between ages five and 17 as the school-age population.

FIGURE 6
County School-Age Population Growth, 1990–2000 (map of PA, NJ, and DE counties; growth categories: above 25%, 10-25%, 0-10%, and negative)*
* Because of the age breakdown available from the 2000 Census, we count those between ages five and 17 as the school-age population.

Because of the large increases in school-age population, these jobs increased faster than overall employment and faster than other local government employment in each of the three states in the region. Since the major source of funding for public education is the property tax, increases in property taxes reflect increases in the number of school-age children. On an inflation-adjusted basis, property tax revenue in Delaware and New Jersey increased 25 and 22 percent, respectively, between 1991-92 and 1997-98. In Pennsylvania, where the school-age population grew more slowly than in the other two states, property tax revenue increased only 8 percent.[17]

Changes in Size of Elderly Population: Demand for Health Care.
In the United States, per capita spending on health care for those 65 and over is more than four times the per capita spending on those under 65.[18] Nationwide, the population 65 and older grew somewhat more slowly than the overall population in the 1990s, so this age group declined slightly as a share of the population. This relieved some of the upward pressure on per capita health-care expenditures nationwide. In Pennsylvania and Delaware, however, the population 65 and older grew slightly faster than the population as a whole. But the largest increases in the population 65 and over will come after 2010, when the first wave of baby boomers turns 65.

Prime Working-Age Population: More Rapid Growth Than General Population Nationally and Regionally. The official United Nations definition of the working-age population encompasses people between the ages of 15 and 64.[19] But in the U.S. the labor force participation rates of those under 25 are relatively low, and many of those workers are part-time. Moreover, after age 54, workers begin to retire in large numbers, and the labor force participation rate for this age group drops significantly.[20] Therefore, those between 25 and 54 are considered members of the prime working-age population. Labor force participation in this age group is higher than 80 percent.

Two major factors have determined the growth and age distribution of the working-age population, and ultimately the size of the labor force, in recent years: (1) the aging of the baby boomers and those born in the birth-dearth years and (2) foreign immigration. All the members of the baby boom generation were in their prime working years in 1990 and remained in that age group through 2000, so the prime working-age population grew faster than the overall population in the last decade.
But growth in this age group was slower in the 1990s than in the 1980s because the oldest of those born in the birth-dearth years entered their prime working years in the late 1990s (Table 3). Had it not been for strong foreign in-migration, growth of the prime working-age population would have decelerated even more in the 1990s. Figure 7 shows both the actual growth of this age group and the growth of the group due to the natural aging of the population.[21] In Pennsylvania, out-migration reduced the growth of the prime working-age population below what would have resulted just from the natural aging of the population.

[17] These increases in revenue reflect changes in both tax rates and the assessed value of property in the state. The data on property tax revenue by state can be found at www.census.gov/govs/www/estimate.html.

[18] Uwe E. Reinhardt, "Health Care for the Aging Baby Boom: Lessons from Abroad," Journal of Economic Perspectives, 14 (Spring 2000), pp. 71-83.

[19] The U.S. Bureau of Labor Statistics considers only those 16 and over who are working or looking for work as members of the labor force.

[20] For labor force participation rates by age and labor force projections, see Howard N. Fullerton, "Labor Force Projections to 2008: Steady Growth and Changing Composition," Monthly Labor Review (December 1999), pp. 19-32.

[21] To calculate the growth that would have been due to the natural aging of the population, we took the total number of people in five- or 10-year age groups and moved them forward 10 years, taking account of the average death rate for each age group. For age-specific death rates, see National Vital Statistics Report, Vol. 47, No. 28, December 13, 1999, Table 1: "Life Table for the Total Population: United States, 1997."

TABLE 3
Growth of Prime Working-Age Population (25–54)

        1980s   1990s
US      24.6%   15.2%
PA      11.9%   6.8%
NJ      20.2%   10.7%
DE      27.3%   18.1%

But growth of the prime working-age population in the nation, in New Jersey, and in Delaware was greatly increased by in-migration. For the nation and New Jersey, that increased growth was dependent on international in-migration. For Delaware, it was highly dependent on in-migration from other states.[22]

We can also estimate the natural growth of the prime working-age population between 2000 and 2010. In this decade, the natural rate of increase of the prime working-age population will be negative for the nation and for all three states in the region (Figure 8). The leading edge of the baby-boom generation will move out of their prime working years, and the youngest of those born in the birth-dearth years will move into their prime working years. In terms of overall labor force growth, the slow natural growth of the prime working-age population will be partially offset by two factors. First, foreign immigration is expected to continue at a strong rate. In recent years almost half of foreign immigrants have been in their prime working years, and about one-quarter have been between

[22] Figure 3 indicates that Delaware's population did not increase much because of international in-migration.
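The natural-aging calculation described in footnote 21 is a cohort-survival projection: move each age group forward a decade and discount it by a survival rate. The sketch below illustrates the mechanics with made-up numbers; the population counts, survival rates, and function name are hypothetical, not the census data used in the article.

```python
# Cohort-survival sketch of the footnote 21 method. All numbers here are
# hypothetical illustrations, not the article's census data.

# Population (thousands) by age group in the base year, and the assumed
# fraction of each group surviving the following decade.
base_pop = {"15-24": 1000, "25-34": 1100, "35-44": 1200, "45-54": 900}
survival = {"15-24": 0.995, "25-34": 0.990, "35-44": 0.980, "45-54": 0.960}

def project_prime_working_age(pop, surv):
    """Natural (no-migration) prime working-age (25-54) population a decade later.

    Each cohort moves up one decade: survivors of 15-24 become 25-34, and
    so on; the old 45-54 group ages out of the prime working-age range.
    """
    return (pop["15-24"] * surv["15-24"]      # the new 25-34 group
            + pop["25-34"] * surv["25-34"]    # the new 35-44 group
            + pop["35-44"] * surv["35-44"])   # the new 45-54 group

start = base_pop["25-34"] + base_pop["35-44"] + base_pop["45-54"]
end = project_prime_working_age(base_pop, survival)
natural_growth_pct = 100 * (end - start) / start
```

Run on actual census age groups and age-specific death rates, a calculation of this shape produces the "natural increase" estimates shown in Figures 7 and 8.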
25 and 34 years old.[23]

FIGURE 7
Actual Growth of Prime Working-Age Population and Growth Due to Natural Increase, 1990–2000 (percent)

        Actual  Natural Increase
US      15.2    8.2
PA      6.8     7.2
NJ      10.7    4.2
DE      18.1    8.6

FIGURE 8
Estimated Natural Rate of Growth for Prime Working-Age Population (25–54), 1990–2000 and 2000–2010 (percent). The estimated natural increase for 1990–2000 is 8.2 for the US, 7.2 for PA, 4.2 for NJ, and 8.6 for DE; the estimates for 2000–2010 are negative for the nation and all three states, ranging from about -0.6 to -5.9.

The second factor partially offsetting the slow natural growth of the prime working-age population will be the rapid increase of the oldest cohort in the working-age population, that is, those between 55 and 64. Even though this older group has a much lower labor force participation rate than the prime working-age group, their numbers will increase significantly.[24] When all the factors that determine labor force growth are considered — natural growth of the working-age population, foreign immigration, and labor force participation rates — the Bureau of Labor Statistics estimates that labor force growth will be lower in the next 15 years than at any time since 1950.[25]

SUMMARY

In general, population growth in the tri-state region lagged growth at the national level in the 1990s. The major exceptions were growth in the state of Delaware and parts of New Jersey. In-migration from other states boosted Delaware's growth, and international in-migration significantly increased growth in New Jersey and some areas of eastern Pennsylvania. Foreign immigration also increased the racial and ethnic diversity of those areas. The school-age population increased more than the overall population in the nation and in the three states in the region. And contrary to the national pattern, the number of people 65 and older also increased somewhat faster than the general population in Pennsylvania and Delaware. But the large increase in the number of people over 65 will come after 2010.
Most important for economic growth in the region is the growth of the prime working-age population. The growth rate for this group slowed in the 1990s and is likely to slow even further in the current decade. Growth in the labor force will depend heavily on foreign in-migration and on raising the labor force participation rates of those who are beyond their prime working years. BR

[23] In both cases these percentages are higher than the percentages of residents in those age groups. For the data on the age distribution of immigrants, see 1997 Statistical Yearbook of the Immigration and Naturalization Service, p. 52, Table 12.

[24] The 55- to 64-year-old group will increase strongly because the leading edge of the baby boom generation will enter this age group in the current decade. The natural increase for this group nationally will be 45 percent. For Pennsylvania and New Jersey the increase will be greater than 40 percent, and for Delaware the increase will be greater than 35 percent.

[25] See Working in the 21st Century, Bureau of Labor Statistics, June 2001. The projected annualized growth between 2000 and 2015 is 1.0 percent. Labor force growth in the 1990s was 1.2 percent at an annual rate.

Population Changes in the Philadelphia Metro Area: 1990–2000

The Philadelphia metro area grew not only more slowly than the other 10 largest metro areas in the country but also more slowly than some other large metro areas in the Northeast and Midwest, like Baltimore, Boston, and St. Louis, that are not in the top 10[a] (Figure A). One reason for the slower growth in the Philadelphia area was that growth from foreign immigration was lower in Philadelphia than in any of the other 10 largest metro areas except Detroit.
Even though Philadelphia's growth was relatively slow in the 1990s compared with other large metro areas, it grew more rapidly than at any time since the 1960s.[b] The Philadelphia metro area grew more slowly in the 1990s than any metro area in Delaware or New Jersey,[c] and it ranked seventh in growth among the 14 metro areas in Pennsylvania.

Not every municipality in the Philadelphia area grew slowly in the 1990s. The slow metro-area growth was accompanied by considerable spreading-out of population from the municipalities in and around the city of Philadelphia to the outer suburbs. The city of Philadelphia and many of the close-in, densely populated municipalities on both sides of the Delaware River lost population in the 1990s[d] (Figure B). Most of the municipalities whose populations increased 20 percent or more were located in outer Chester and Montgomery counties and in central Bucks County. The rapid growth of the less dense outer suburbs and declines in the densely populated inner suburbs represented a continuation of the decentralization of the metro area that has been taking place for several decades.[e]

FIGURE A
Metro Area Population Growth* (bar chart comparing growth for Dallas, Atlanta, Houston, Washington, DC, S.F./Oakland, Chicago, New York, Los Angeles, Baltimore, Boston, St. Louis, Philadelphia, Detroit, Cleveland, and Pittsburgh)
* This graph includes the 10 largest metro areas (Los Angeles, New York, Chicago, Philadelphia, Washington, Detroit, Houston, Atlanta, San Francisco/Oakland, and Dallas) as well as other metro areas in the Northeast and Midwest with populations greater than two million (Boston, St. Louis, Baltimore, Pittsburgh, and Cleveland).

FIGURE B
Phila. Area Municipalities Population Growth

[a] The 10 largest metro areas in terms of population are Los Angeles, New York, Chicago, Philadelphia, Washington, Detroit, Houston, Atlanta, San Francisco/Oakland, and Dallas. Boston, St. Louis, Baltimore, Pittsburgh, and Cleveland are included in Figure A because they are metro areas in the Northeast and Midwest with populations greater than 2 million.

[b] The metro area actually lost population in the 1970s.

[c] The Philadelphia metro area spans two states; five of the metro-area counties are in Pennsylvania, and four are in New Jersey.

[d] The major exceptions to this pattern were losses in some sparsely populated municipalities in Salem County and the loss of population in some municipalities in eastern Burlington County that include parts of the Pinelands Preservation Area, where development is restricted.

[e] See Gerald A. Carlino, "From Centralization to Decentralization: People and Jobs Spread Out," Federal Reserve Bank of Philadelphia Business Review, November/December 2000, pp. 15-27.

Oil Prices Strike Back
BY SYLVAIN LEDUC

When oil prices rise, how should monetary policy respond? Or should it respond at all to developments in the oil markets? In this article, Sylvain Leduc argues in favor of a central bank that follows an inflation-targeting rule. To shed some light on the issues involved, he reviews what has happened historically to oil prices and output in the U.S.

It had been a good 10 years since we last saw them appear in the wake of the gulf war and about 20 years since they made it very big on the international scene. But just like in B movies in which the villain never dies, rising oil prices have come back from the dead. Oil prices rose dramatically, both in nominal and in real terms, in the first year of the new millennium. In fact, in 2000, the real price of oil reached a level not seen since 1973, at the time of the first major oil shock. Data like these are given a lot of weight in policy circles because, historically, developments in the oil sector appear to be important for the performance of the U.S. economy.
Indeed, most recessions in the post-WWII era have been preceded by a rise in oil prices. And since the 1980s, there has been a resurgence of the view that changes in gross domestic product (GDP) over the business cycle are mostly driven by supply-side factors, such as oil shocks. Supporting this viewpoint, economists who study business cycles theorize that a large part of these fluctuations in GDP can be accounted for by changes in productivity growth in the business sector, which affects the supply of goods to the marketplace.

Sylvain Leduc is a senior economist in the Research Department of the Philadelphia Fed.

This is obviously not the end of the story. Since the law of supply and demand lies at the center of economics, it's not surprising that another camp emphasizes demand-side factors — things that affect total demand for goods and services in the economy — as the driving force behind economic downturns. Economists who support this side of the debate point out that one such factor is monetary policy, which has tightened substantially before most recessions. Many economists on this side of the debate argue that recessions are often a byproduct of a central bank's policy of avoiding outbursts of inflation.[1]

[1] See the article by Christina Romer.

Of course, developments on both the supply and the demand side of the economy can contribute to movements in output, inflation, and other important macroeconomic variables. It's possible that a rise in oil prices initially causes output to fall and inflation to rise and that the central bank, in dealing with these developments, amplifies or alleviates the initial movements in output. So what, then, do rising oil prices imply for the conduct of monetary policy? Should the central bank react in a particular way, if at all, to developments in the oil markets? This article will argue that movements in output and inflation could be smaller if central banks followed a rule that targets the inflation rate.
But to shed some light on these questions, we first need to review what happened historically to oil prices and output (as measured by real GDP) in the United States.

WHAT ARE THE FACTS? THE EFFECTS OF MOVEMENTS IN OIL PRICES ON OUTPUT

There are few reliable relationships in economics; one of them, however, is the relationship between oil-price increases and output. In 1983, James Hamilton, an economist now at the University of California, San Diego, demonstrated that five of the six recessions between 1947 and 1975 were preceded by a significant increase in the price of oil (the exception was the recession of 1960-61). Since the publication of Hamilton's work, economists have gathered more evidence that rising oil prices are important for the performance of the U.S. economy.

Figure 1 shows the increase in oil prices and the recessions that the U.S. economy has experienced since World War II, as indicated by the shaded bars.[2] The figure demonstrates that the striking relationship between oil-price increases and the poor performance of the U.S. economy, which Hamilton documented for an earlier period, has continued: eight out of the nine recessions since 1947 were preceded by (or coincided with) a rise in oil prices.

Of course, you may argue that this is not evidence that the rise in oil prices caused the U.S. recessions. The relationship could be just a coincidence, or fluctuations in some other economic factor could have caused both the increase in oil prices and a recession, without any causal link between the two. In fact, you could also point out that since the mid-1980s, there have been many episodes when the price of oil rose and the U.S. economy kept expanding. Using statistical techniques, Hamilton showed that the oil-price increases preceding most recessions are of a particular nature: They are mostly due to external factors not immediately related to the U.S.
economy.[3] Specifically, Hamilton documented that these increases in the price of oil were mostly the result of political and economic disruptions in the Middle East that were unrelated to developments in the U.S. economy. For instance, the dominant factor underlying the increase in oil prices in 1978-79 was the fall in oil production due to the Iranian revolution. Similarly, the price of oil increased in 1990 mainly because of the gulf war.

[2] Empirically, the reverse is not true: A fall in the price of oil does not lead to an increase in real GDP. The reasons for this asymmetric relationship between movements in the price of oil and economic activity are still being debated.

[3] Economists refer to shocks stemming from factors outside the economy as exogenous.

FIGURE 1
Oil-Price Increases and Recessions (quarterly, 1947–2001). The net oil-price increase is calculated as the quarterly change in the logarithm of the price of oil. If the quarterly change is negative, the entry is set to zero.

Since the early 1970s, the decisions of the Organization of Petroleum Exporting Countries, or OPEC, have also been an important factor underlying movements in oil prices. Indeed, the price of oil rose to unprecedented levels in 1973-74 following OPEC's decision to impose an embargo on oil exports to the U.S. (and the Netherlands) to protest their support of Israel in the war against the Arab countries. Although these events did increase the price of oil, it would be hard to argue that they also caused the U.S. recessions without introducing a causal link between oil-price increases and economic performance. Overall, the empirical evidence indicates that a 10 percent increase in the price of oil due to exogenous factors leads output to contract by about 2 percent four quarters following the shock.
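The net oil-price measure plotted in Figure 1 is easy to reproduce from its definition: the quarterly log change in the oil price, with negative changes set to zero. A minimal sketch, using invented quarterly prices for illustration:

```python
import math

def net_oil_price_increase(prices):
    """Quarterly log change in the oil price, floored at zero.

    This is the Figure 1 measure: if the quarterly change is negative,
    the entry is set to zero. `prices` is a sequence of positive
    quarterly price levels; the result is one element shorter.
    """
    out = []
    for prev, curr in zip(prices, prices[1:]):
        change = math.log(curr) - math.log(prev)
        out.append(max(change, 0.0))  # negative changes become zero
    return out

# Invented quarterly prices: flat, then a doubling, then a decline.
series = net_oil_price_increase([10.0, 10.0, 20.0, 15.0])
```

The flat quarter and the decline both map to zero; only the doubling shows up, at log 2, about 0.69. This asymmetry is deliberate: as footnote 2 notes, falling oil prices do not produce a symmetric rise in real GDP.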
Although most economists agree that increases in the price of oil may have a significant impact on real GDP in the U.S., they are still debating the channels through which these effects occur.

WHY DOES OIL MATTER?

The impact on a firm's cost of production is probably the most obvious way in which developments in the oil market affect the economy. Firms need various forms of energy, including oil, to make their production plants work, and in this sense, a rise in the price of oil acts just like an increase in the price of any other input into the production process. To the extent that a firm's machinery relies on oil to function (and there are few alternative fuel sources), an increase in oil prices will lead firms to decrease their use of oil and to cut back on the use of their machinery, thus causing production to fall.

Moreover, since the United States imports about 50 percent of the oil it consumes, the U.S. economy depends largely on foreign producers to satisfy its energy needs. Thus, a large part of the gains from rising oil prices accrues to foreign producers. Basically, an increase in oil prices acts just like a tax on U.S. consumers and companies. In the case of a tax, we first need to assess how the government spends tax revenues before we can determine the impact of the tax on the economy. Similarly, we need to study how oil-exporting countries spend their revenues from oil sales before we can determine how a rise in oil prices affects the U.S. economy. If oil-exporting countries were to spend all of their oil revenues on U.S. goods, a rise in the price of oil would have only minor effects on the overall level of economic activity in the United States. However, we may realistically assume that only part of oil revenues will be used to buy U.S. products. As a result, demand and production will fall in the U.S. following a rise in oil prices.

Finally, oil-price increases may not affect all firms equally.
In response to a rise in oil prices, consumer demand for products that depend on oil, such as cars or air travel, will fall, lowering production and employment levels in these industries. But if it is costly to shift labor across sectors of the economy (for instance, from the car industry to less oil-dependent sectors such as the services sector), employment and production in the U.S. overall will also fall following a rise in oil prices. In a 1988 article, James Hamilton showed that small changes in oil prices may lead to large movements in output if it is costly to relocate workers across industries.

FIGURE 2
Federal Funds Rate

IS OIL REALLY THAT IMPORTANT?

In general, economists agree that a rise in the price of oil can have a negative impact on the level of economic activity. They disagree, however, on the extent of the impact. In particular, they find it unlikely that oil shocks by themselves could explain the severity of the 1974 and 1980 recessions. The price of oil rose dramatically before these two recessions. However, many economists remain unconvinced that gyrations in the price of a factor of production like oil, which accounts for a relatively small share of production costs, can have a significant impact on economic activity. For instance, Julio Rotemberg and Michael Woodford estimated that, for the U.S. economy, oil costs' share of total production costs was only around 2 percent. Therefore, economists in this camp argue that it is not the rise in oil prices per se that causes the drop in economic activity, but rather restrictive monetary policies set by the Federal Reserve.[4]

Figure 2 shows U.S. recessions since the third quarter of 1954, but this time plotted against movements in the federal funds rate, instead of increases in oil prices.[5] Looking only at this picture, one could argue that most recessions in the U.S. since 1954 were preceded by a rise in the federal funds rate.
An increase in the federal funds rate means that monetary policy is tighter and money growth is lower. So, it is possible that tighter monetary policy causes most economic downturns and that oil-price increases play only a minor role. In a seminal work, Milton Friedman and Anna Schwartz documented the importance of monetary policy for the U.S. business cycle. They showed that contractions in the money supply preceded most major movements in output from 1867 to the 1960s. This extremely influential work has shaped many economists' views on the source of economic fluctuations. Therefore, it's not very surprising that Hamilton's finding that rising oil prices caused most recessions in the post-WWII era was often received with skepticism.

So which view is right? Is it rising oil prices alone that cause most recessions, or is it restrictive monetary policy? Or, as many economists have theorized, is it the way the central bank responds to rising oil prices that ends up triggering economic downturns?

[4] Note that Hamilton's study never argued that monetary policy was not a potentially important channel through which oil-price increases affected the economy.

[5] The federal funds rate is the interest rate that banks charge one another on overnight loans. By injecting dollars into or retiring dollars from the financial system, the Federal Reserve affects the amount of reserves in the banking system, thereby controlling the federal funds rate. A good description of this mechanism can be found online at http://www.frbsf.org/publications/federalreserve/monetary/tools.html.

Indeed, the central bank rarely stays indifferent to current economic developments when considering the future course of monetary policy. Before deciding whether to adjust the federal funds rate, policymakers look at a wide array of economic indicators, such as prices, industrial production, employment, and so on.
To the extent that the Fed responds predictably to certain changes in the economy, we may conjecture that movements in the federal funds rate immediately before declines in real GDP were, in part, policymakers' response (directly or indirectly) to the impact of oil prices on output and inflation.

But do we know how the Federal Reserve reacts to movements in output and inflation? Recently, some economists have argued that the Fed's responses to economic developments can be summarized by a simple rule. The rule is often referred to as Taylor's rule because it was first developed by Stanford economist John B. Taylor. Basically, it says that the central bank acts as if it is adjusting the federal funds rate in order to minimize inflation's deviation from a target and current output's deviation from potential output.[6] According to the rule, the federal funds rate rises whenever the inflation rate is above its target or output is above potential. Similarly, the federal funds rate decreases whenever inflation is below its target or output is below potential. Research on Taylor's rule shows that it tracks the Fed's policy actions reasonably well (see John Taylor's article). Using a methodology different from Taylor's, economists Richard Clarida, Jordi Gali, and Mark Gertler recently found that the federal funds rate rises more when inflation rises above its target than when the output gap increases. Interestingly, these authors also found that similar Taylor-type rules describe the behavior of many other central banks in industrialized countries. What happens when oil prices rise?
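Before turning to that question, it helps to see the rule in concrete terms. The sketch below uses the illustrative coefficients from Taylor's original formulation (a 2 percent neutral real rate and a weight of 0.5 on each gap), not the Clarida-Gali-Gertler estimates; the function name and the shock numbers are hypothetical.

```python
def taylor_rule_rate(inflation, target_inflation, output_gap,
                     neutral_real_rate=2.0, w_infl=0.5, w_gap=0.5):
    """Federal funds rate (percent) implied by a simple Taylor-type rule.

    The rate rises when inflation exceeds its target or output exceeds
    potential (a positive output gap), and falls in the opposite cases.
    """
    return (neutral_real_rate + inflation
            + w_infl * (inflation - target_inflation)
            + w_gap * output_gap)

# Stylized oil shock: inflation jumps above target while output falls
# below potential. With these weights the inflation term dominates, so
# the implied funds rate rises despite the negative output gap.
baseline = taylor_rule_rate(inflation=2.0, target_inflation=2.0, output_gap=0.0)
after_shock = taylor_rule_rate(inflation=4.0, target_inflation=2.0, output_gap=-1.0)
```

A rule that put a much larger weight on the output gap could instead call for a rate cut after the same shock, which is precisely the policy trade-off at issue when oil prices rise.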
If we believe the rule describes the central bank's behavior (and this is, of course, a simplification), an increase in oil prices that gets translated immediately into a higher inflation rate should be followed by an increase in the federal funds rate.[7] On the other hand, as I argued above, a rise in the price of oil also causes output to fall below potential, and according to the rule, the Fed should lower the federal funds rate. But, according to the estimates by Clarida, Gali, and Gertler, the Fed responds more to the rise in inflation than to the fall in output following an oil-price shock. Therefore, the federal funds rate tends to increase following a rise in oil prices.[8] This means that it's possible that a policy that raises the federal funds rate following an oil-price shock ends up amplifying the initial fall in GDP. But is this monetary policy channel really important? To answer that question we need a model to describe how oil prices and monetary policy affect the economy.

[6] Potential output is the amount of output that could be produced if all the factors of production, such as labor, plants, and equipment, were used optimally. The difference between current and potential output is called the output gap.

[7] When the Bureau of Labor Statistics (BLS) calculates the price level, it uses two different measures. The first, often referred to as the headline price level, includes energy prices. Therefore, an increase in oil prices will raise that measure of prices. In the second measure, called the core price level, the BLS excludes energy (and food) prices.

[8] Of course, the central bank's reaction to rising oil prices would also depend on whether policymakers think the oil shock is persistent or transitory. If they think the latter more likely, they may prefer to keep the federal funds rate relatively constant and let the price level rise temporarily.

OIL SHOCKS VS.
MONETARY POLICY In the aftermath of the two oil shocks of the 1970s, economists developed models to study how monetary policy should respond to rising oil prices. At the end of the 1970s, economists Knut Mork and Robert Hall conducted an interesting early study of the 1973 oil shock’s impact on the economy. Mork and Hall built a model in which energy is used as a direct input in the production process. They found that an increase in energy prices can explain up to 75 percent of the 1974-75 recession. More strikingly, they found that the effects of the oil shock on output and employment could have been eliminated through a monetary expansion, but at the cost of generating a significant increase in the inflation rate. However, since one of the Federal Reserve’s goals is to achieve price stability, policymakers may find the cost of a monetary expansion too high. More recently, Keith Sill and I used a slightly different methodology to develop a small macroeconomic model that would identify the respective contributions of rising oil prices and monetary policy to economic downturns. Our model assumes not only that firms need oil for their machinery but also that the more intensively firms use their machinery, the more oil they need.9 For simplicity, the model assumes that the economy’s demand for oil is met entirely by foreign suppliers. Since we are interested in understanding the contribution of monetary policy to economic downturns, we need to take a stand on the 9 This approach to modeling oil usage was developed by Mary Finn. She shows that this setup is identical to one in which energy enters directly as an input into the production function, as in the research of Robert Rasche and John Tatom as well as that of In-Moo Kim and Prakash Loungani. www.phil.frb.org monetary transmission mechanism: that is, how movements in the money supply get transmitted to other variables in the economy. 
In particular, we need to state how monetary policy can affect the real side of the economy, such as investment and production, as opposed to the nominal side, such as prices. There are many different ways to do this, but we’ll focus on one: the banking system. We assume monetary policy affects real GDP via the banking system because firms need to borrow funds from banks to finance production. By changing the stock of money in circulation, the central bank can affect the interest rate applied to financial transactions and, therefore, the amount of borrowing and production in the economy.10

10 A nice discussion of this channel can be found in the article by Lawrence Christiano.

To capture the way the Fed conducts monetary policy, we assume it uses a simple Taylor-type rule, similar to the one estimated by Clarida, Gali, and Gertler. We conduct different exercises, in each of which we assume that the nominal price of oil initially rises. This indirectly captures OPEC’s decision to cut production in order to raise prices. We study the extent to which output and inflation in our model are affected by how much the central bank responds to a change in the output gap as opposed to the deviation of inflation from its target — that is, by the weights in the Taylor-type rule we assume the central bank uses. We will try to answer the question: Does output fall less following an oil shock if the monetary authority places a lot of weight on the output gap in its rule?

Some Experiments. Figures 3A, 3B, and 3C show how a rise in the price of oil affects output, inflation, and the short-term interest rate in the model over time.

FIGURES 3A, 3B, AND 3C: Effect of a Rise in Oil Prices on Output, on Inflation, and on the Nominal Interest Rate. Each figure describes the response of the variable in the model to a doubling in the price of oil, when the central bank places different weights on the output gap in the Taylor-type rule.

The vertical axis shows the difference between the value of the variable following an oil-price shock and what it would have been absent the shock. Therefore, a negative value on the vertical axis means that, following a rise in oil prices, the variable falls below what it otherwise would have been without the oil-price shock. The responses are also plotted for different weights that the central bank places on the output gap in its rule, for a given weight on inflation.11 For instance, assume that the central bank’s rule assigns a weight of 0.27 to the output gap and that the price of oil suddenly doubles before slowly falling back to its initial value. Figure 3A shows that output initially falls approximately 4 percent relative to what it would have been without the rise in the price of oil. Furthermore, it shows that this difference shrinks as the price of oil returns to its initial value, although it takes some time for the effect to fully dissipate (about four and a half years). Similarly, Figures 3B and 3C show that the inflation rate climbs to about 2.5 percent and the short-term interest rate increases about 0.8 percent, before each slowly comes down to the value it would have had absent the rise in the price of oil.
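The mechanics of this kind of impulse-response exercise can be mimicked with a deliberately crude simulation. Nothing below reproduces the authors' actual model: the decay rate, the pass-through coefficients from oil to inflation and output, and the rule weights are all made-up numbers, chosen only to generate the qualitative shapes the text describes (inflation and the rate jump while output falls, then all three fade back to baseline).

```python
def simulate(periods=20, decay=0.8, inflation_target=2.0,
             w_inflation=1.5, w_gap=0.5):
    """Trace inflation, the output gap, and the policy rate after the
    oil price doubles at t=0 and then decays geometrically back to its
    baseline. All coefficients are illustrative, not model estimates."""
    path = []
    for t in range(periods):
        oil_gap = decay ** t                      # 1.0 at impact, fading to 0
        inflation = inflation_target + 2.5 * oil_gap   # inflation rises above target
        output_gap = -4.0 * oil_gap                    # output falls below potential
        rate = (2.0 + inflation                        # Taylor-type policy rate
                + w_inflation * (inflation - inflation_target)
                + w_gap * output_gap)
        path.append((inflation, output_gap, rate))
    return path

path = simulate()
# At impact, inflation is highest, the output gap most negative, and the
# rate highest; by the last period all three are close to baseline again.
```

Running the full model for several values of `w_gap`, as the authors do, would then reveal how the choice of weights shapes the depth and duration of the downturn.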
Although it might seem counterintuitive, when the central bank increases the weight on the output gap, it actually ends up magnifying the economic downturn.12 For example, as seen in Figure 3A, when the central bank places a weight of 0.27 on the output gap in its rule, the drop in output is much smaller than when that weight equals 0.47. Why does this happen? In our framework, when the central bank wants to alleviate the drop in output caused by the rise in oil prices by lowering the interest rate, it must increase the growth rate of money. This puts upward pressure on the inflation rate.13 Since inflation increases a lot following such a policy, and since the Fed reacts more strongly to inflation than to the output gap in the Taylor-type rule, it ends up having to reverse course and raise the interest rate. Firms that have to borrow to finance production then decide to borrow less and produce less, amplifying the initial drop in output. This analysis suggests that, in our model, a central bank using a Taylor-type rule could achieve both a lower output gap and lower inflation by placing a lot of weight on inflation and a small weight on the output gap.

11 We set a weight of 2.15 on inflation, meaning that for each basis point that inflation deviates from its target, the central bank would respond by raising the fed funds rate by 2.15 basis points. Note that 2.15 is the estimate used by Clarida, Gali, and Gertler.

12 Notice that the weights in the Taylor-type rule measure the degree to which a central bank would respond to an output gap and to a deviation of inflation from its target if it were to follow the rule in setting policy. The weights, however, do not measure the central bank’s preferences over output and inflation. Indeed, our results show that by putting more weight on inflation in the rule, the central bank achieves a better outcome with respect to output and inflation.

13 Following an increase in the money stock, inflation increases in the long run because these extra dollars ultimately end up being spent on goods and services, thus raising the price level and the inflation rate. If firms do not adjust prices, inflation may not rise that much in the short run.

An Inflation-Targeting Rule. This finding suggests that adopting a monetary policy rule that targets the inflation rate may be beneficial. In fact, the literature has proposed a wide array of policies as alternatives to the type of interest-rate rule that the Fed seemingly follows. Among these alternatives, inflation targeting is a popular candidate. Under inflation targeting, the central bank lets the money supply change in order to keep the inflation rate constant.15 In a recent book, economists Ben Bernanke, Thomas Laubach, Frederic Mishkin, and Adam Posen argue in favor of the Federal Reserve’s adopting an inflation-targeting rule. The goal of the Federal Reserve would then be clearer: keep inflation within a small bracket around, say, 2 percent. They argue that this would have the virtue, among others, of stabilizing people’s expectations about the Fed’s policies and, therefore, would simplify the decision process for investors who must take into account the central bank’s next move. Would the typical drop in output following a rise in oil prices be alleviated if the central bank followed an inflation-targeting rule instead of the Taylor-type rule estimated by Clarida, Gali, and Gertler? We found that in our model, economic downturns are indeed much less severe when the central bank targets the inflation rate. Remember, though, that this happens in our model and may not happen in the real world. Empirically, in the real world, the inflation rate responds with a long lag to movements in monetary policy.
But in our model, inflation jumps immediately following an increase in the growth rate of money. To determine whether this difference between the model and reality is significant, we introduced price stickiness into our framework, which dampens movements in inflation following a change in monetary policy.14 We found that our results are not significantly changed by the introduction of this new feature. See my working paper with Keith Sill for details.

14 Price stickiness occurs when the prices of some goods are slow to respond to changes in the economy.

15 We assume that the central bank uses only the money supply to keep inflation from deviating from its target. Note that this strategy allows the nominal interest rate to fluctuate. It differs from a Taylor-type rule with no weight on output and a very high weight on inflation, since under the Taylor-type rule the central bank sets an interest-rate target.

Figure 4 compares output’s response to a rise in oil prices when the Fed targets the inflation rate versus when it follows the Taylor-type rule estimated by Clarida, Gali, and Gertler. The picture clearly shows that the recession is not as deep under an inflation-targeting rule. This happens because the rise in the price of oil makes a firm’s machinery more expensive to use. As a result, the firm cuts its production. Since we have assumed that firms need to borrow funds from banks to finance production, the fall in production leads to a lower demand for banks’ financing. This, in turn, puts downward pressure on the interest rate banks charge on their loans. Since under an inflation-targeting rule the money supply and the nominal interest rate change as necessary to keep inflation steady, the central bank lets the nominal interest rate fall following the rise in the price of oil, instead of raising it to fight inflationary pressures, as the Taylor-type rule dictates.
The fall in the interest rate then alleviates the financing cost of the firm and, thereby, attenuates the drop in output.

FIGURE 4: Downturn Following an Oil-Price Increase Under Different Monetary Policies

Monetary Policy’s Response Matters. So it appears that monetary policy in our framework can contribute to economic downturns, or it can alleviate the bad effects of oil-price shocks, depending on which strategy the central bank uses. Our results suggest that placing too much weight on the output gap may be counterproductive. Other authors have also found that placing too much weight on the output gap may lead to unwanted economic developments (see Bad Mandate or Bad Measurement?). Our results, like those of Mork and Hall, also suggest that monetary policy can be used to alleviate the impact of oil shocks on output if the central bank targets the inflation rate. Moreover, by definition, an inflation target has the additional benefit of checking a dramatic rise in inflation, as Mork and Hall found when they allowed for a large increase in the money supply in their experiment.

Interestingly, a recent study by economist Athanasios Orphanides shows that since 1979, the Fed has acted as if it were assigning a much lower weight to the output gap in its Taylor-type rule than it did previously; in other words, the Fed has operated with different Taylor-type rules before and after 1979.16 Since the first two oil shocks of the 1970s, the U.S. economy has appeared more resilient to increases in the price of oil, and we conjecture that this change in the way the Fed conducts monetary policy (along with the adoption of more energy-efficient technologies) contributed significantly to this development.
Using Orphanides’ estimates of the Fed’s Taylor-type rules, we found that in our model, the total impact of an oil-price increase on output is approximately halved when we assume that the central bank follows a post-1979 Taylor-type rule rather than the more activist rule of the early 1970s.

16 For details on Orphanides’ research, see Bad Mandate or Bad Measurement?

CONCLUSION

Over the last decade, the Federal Reserve has often been praised for, and to a certain extent credited with, the longest expansion in the country’s history. However, the Fed is not without its critics. An important branch of macroeconomics, including such prominent economists as Milton Friedman, assigns a significant role to the central bank in causing the ups and downs of the economy. However, since the beginning of the 1980s, other influential economists have minimized the Fed’s role in causing changes in GDP over the business cycle. In their view, movements in the economy are the result of changes in supply-side factors such as the growth rate of productivity or oil shocks. As we’ve discussed, both oil prices and the federal funds rate have risen before most recessions. But the rise in the federal funds rate was likely due to the central bank’s reaction to inflationary pressures resulting from these oil shocks. This systematic response of policymakers to developments in the economy can play an important role in determining the business cycle: different strategies have different effects. BR

Bad Mandate or Bad Measurement?

The 1970s were plagued not only by important recessions but also by an extremely large increase in the inflation rate (Figure). Recently, different economists have tried to understand the reasons underlying this historical episode.
When Milton Friedman said, “Inflation is always and everywhere a monetary phenomenon,” he meant that if one is interested in understanding the growth rate of prices in the economy, one should look at the behavior of the growth rate of money. Most authors have found empirical evidence that over long periods, an increase in the growth rate of money leads, approximately, to a one-for-one increase in the inflation rate, with no effect on the level of economic activity.a Another way to say this is that in the long run, the only variable that the central bank can control is the inflation rate. The central bank’s impact on output can only be short-lived. So one should look at the growth rate of the money supply to understand inflation. However, a more interesting question is: Why would a central bank let the money supply grow to such an extent that it leads to an increase in long-run inflation? Recently, two different views have been proposed: an expectations trap and measurement problems.

a See the article by George McCandless, Jr., and Warren Weber for an empirical study of the relationship between the growth rate of money, inflation, and output.

Expectations Trap. Theoretical work by V.V. Chari, Lawrence Christiano, and Martin Eichenbaum demonstrated that this can occur if a country does not assign the right mandate to the central bank. Economists Lawrence Christiano and Christopher Gust used Chari, Christiano, and Eichenbaum’s insights to make sense of the 1970s. The theory argues that without the right mandate, the central bank can be stuck in an expectations trap, a state in which people’s expectations about inflation force the central bank to act in a certain way. The reasoning is as follows. Suppose that people expect a rise in the inflation rate, for reasons possibly not related to economic events. Since they expect higher future inflation, workers would like higher wages to keep up with the cost of living.
Firms must then decide if they can agree to these demands. Since firms also expect the inflation rate to rise in the future, they will probably agree to increase wages: Higher inflation makes it easier for firms to pass on the increase in wages to consumers by raising prices. Now the central bank faces a dilemma. On the one hand, it can increase the supply of money and create more inflation, just as people in the economy initially expected. On the other, it can contract the supply of money (and, as a result, raise short-term interest rates) to fight the rise in expected inflation. If the central bank chooses the first avenue, the economy is stuck in an expectations trap. That is, the inflation rate increases just because people initially believed that it would increase. If the central bank chooses the second avenue, it may create a recession, a path it may find difficult to follow. Christiano and Gust showed that a similar line of argument can explain monetary policy and the run-up in inflation in the 1970s. The reason for the expectations trap resides in the dual mandate assigned by Congress to the Federal Reserve System: price stability and full employment. Because of the second mandate, the Fed is likely to accommodate a sudden rise in expected inflation to avoid risking a recession. Moreover, Christiano and Gust argue that by making price stability the sole goal of monetary policy, policymakers could avoid these expectations traps. As long as the central bank can credibly commit to keeping the inflation rate within a preannounced range, people will assume that the inflation rate will not move outside this range. Christiano and Gust’s analysis, like ours, suggests that too much emphasis on the output gap may lead to worse economic outcomes.

Measurement Problems. Another view that has received a lot of attention is the one proposed by economist Athanasios Orphanides.
He argues that the increase in the inflation rate in the 1970s was due not so much to an expectations trap as to a mismeasured level of economic activity. Orphanides argues that the output gap was badly measured in the early 1970s, in part because of the beginning of the productivity slowdown.b The slowdown in productivity meant that potential output was lower: The economy could produce less than before using the same amount of inputs. Initially, however, economists did not perceive the slowdown, so they assumed that potential output was higher than it really was. Since the output gap is the difference between current and potential output, the mismeasurement of potential output translated into a larger output gap. Using these statistics, the Fed necessarily thought that the output gap was worse than it was and responded by reducing the federal funds rate (and increasing the money supply) by more than would have been dictated by a correctly measured output gap. In Orphanides’ view, the result was the huge increase in inflation depicted in the figure. As in our analysis, Orphanides also showed that by placing a lower weight on the output gap, the Fed could have avoided some of the problems it faced in the 1970s.

b Labor productivity growth (output per hour) in the nonfarm private business sector fell from 2.63 percent over the period 1950-72 to 1.13 percent over the period 1972-95. See the article by Robert Gordon.

Figure: Inflation Rate

REFERENCES

Bernanke, Ben S., Mark Gertler, and Mark Watson. “Systematic Monetary Policy and the Effects of Oil Price Shocks,” Brookings Papers on Economic Activity, 1 (1997), pp. 91-142.

Bernanke, Ben S., Thomas Laubach, Frederic S. Mishkin, and Adam S. Posen. Inflation Targeting: Lessons from the International Experience. Princeton: Princeton University Press, 1999.

Chari, V. V., Lawrence J. Christiano, and Martin Eichenbaum.
“Expectation Traps and Discretion,” Journal of Economic Theory, 2 (1998), pp. 462-92.

Chatterjee, Satyajit. “Real Business Cycles: A Legacy of Countercyclical Policies?” Federal Reserve Bank of Philadelphia Business Review (January/February 1999).

Christiano, Lawrence J. “Modelling the Liquidity Effect of a Money Shock,” Federal Reserve Bank of Minneapolis Quarterly Review, 15, 1 (1991), pp. 3-34.

Christiano, Lawrence J., and Christopher Gust. “The Expectations Trap Hypothesis,” Federal Reserve Bank of Chicago Economic Perspectives, 24, 2 (2000), pp. 21-39.

Clarida, Richard, Jordi Gali, and Mark Gertler. “Monetary Policy Rules and Macroeconomic Stability: Evidence and Some Theory,” Quarterly Journal of Economics, 115 (2000), pp. 147-81.

Clarida, Richard, Jordi Gali, and Mark Gertler. “Monetary Policy Rules in Practice: Some International Evidence,” European Economic Review, 42 (1998), pp. 1033-67.

Finn, Mary G. “Variance Properties of Solow’s Productivity Residual and Their Cyclical Implications,” Journal of Economic Dynamics and Control, 19 (1995), pp. 1249-82.

Friedman, Milton, and Anna J. Schwartz. A Monetary History of the United States, 1867-1960. Princeton: Princeton University Press, 1963.

Gordon, Robert J. “Has the ‘New Economy’ Rendered the Productivity Slowdown Obsolete?” manuscript, 1999.

Hamilton, James D. “Oil and the Macroeconomy Since World War II,” Journal of Political Economy, 91 (1983), pp. 228-48.

Hamilton, James D. “A Neoclassical Model of Unemployment and the Business Cycle,” Journal of Political Economy, 96 (1988), pp. 593-617.

Kim, In-Moo, and Prakash Loungani. “The Role of Energy in Real Business Cycle Models,” Journal of Monetary Economics, 29 (1992), pp. 173-90.

Leduc, Sylvain, and Keith Sill. “A Quantitative Analysis of Oil Shocks, Systematic Monetary Policy, and Economic Downturns,” Federal Reserve Bank of Philadelphia Working Paper 01-09 (2001).

McCandless Jr., George T., and Warren E. Weber.
“Some Monetary Facts,” Federal Reserve Bank of Minneapolis Quarterly Review, 19, 3 (1995), pp. 2-11.

Mork, Knut A., and Robert E. Hall. “Energy Prices, Inflation, and Recession,” Energy Journal, 15 (1980), pp. 31-63.

Orphanides, Athanasios. “Monetary Policy Rules, Macroeconomic Stability, and Inflation: A View from the Trenches,” manuscript, 2001.

Rasche, Robert H., and John A. Tatom. “Energy Price Shocks, Aggregate Supply, and Monetary Policy: The Theory and International Evidence,” in K. Brunner and A. H. Meltzer, eds., Supply Shocks, Incentives, and National Wealth, Carnegie-Rochester Conference Series on Public Policy, 14. Amsterdam: North-Holland, 1981.

Romer, Christina D. “Changes in Business Cycles: Evidence and Explanations,” Journal of Economic Perspectives, 13 (1999), pp. 23-44.

Rotemberg, Julio J., and Michael Woodford. “Imperfect Competition and the Effects of Energy Price Increases on Economic Activity,” Journal of Money, Credit, and Banking, 28 (1996), pp. 550-77.

Sims, Christopher A. “Comment and Discussions,” Brookings Papers on Economic Activity, 1 (1997), pp. 143-48.

Taylor, John B. “Discretion Versus Policy Rules in Practice,” Carnegie-Rochester Conference Series on Public Policy, 39 (1993), pp. 195-214.

Tobin, James. “Stabilization Policy Ten Years After,” Brookings Papers on Economic Activity, 1 (1980), pp. 19-71.

Is the Personal Bankruptcy System Bankrupt?

BY LORETTA J. MESTER

Over the past few years, several attempts have been made to reform the U.S. bankruptcy system to help stem perceived abuses of the system. In this article, Loretta Mester outlines the components of reform proposals. She then looks at the empirical research on bankruptcy to evaluate the rationale for reforming the system and the effectiveness of proposed changes.

Neither a borrower nor a lender be;
For loan oft loses both itself and friend,
And borrowing dulls the edge of husbandry.
Polonius, in Hamlet, Act 1, Scene 3, by William Shakespeare

Over the past several years, Congress has attempted to pass legislation to resolve perceived problems in the personal bankruptcy system in the U.S. Although it has not proposed anything as drastic as Polonius recommended, Congress has proposed several significant changes to the current system. In the latest try, separate bills were passed in the House (HR 333) and in the Senate (S 420) in March 2001, but Congress adjourned before reconciliation of the bills could be completed. Legislation is again being considered this year, but regardless of the outcome, the debate about the U.S. personal bankruptcy system is unlikely to be resolved anytime soon. Indeed, a number of studies provide evidence on the rationale for changing the current system, on whether reform is necessary, and on whether the proposed revisions will have the intended effect. After reviewing the current personal bankruptcy system and the proposed changes, we’ll discuss some of the findings of these recent studies. These studies do shed some light on the debate and cast some doubt that the proposed changes will yield benefits as significant as intended. They also suggest further research is necessary to resolve all the issues.

Loretta Mester is senior vice president and director of research at the Philadelphia Fed.

CURRENT PERSONAL BANKRUPTCY SYSTEM IN THE U.S.

As Joseph Pomykala discusses in his interesting article, the word “bankruptcy” has two roots. “Banca rotta” is Latin for broken board. In medieval Italy, “creditors would break the workbenches of defaulting merchants over the merchants’ heads.” “Banqueroute” is French for debtors on the lam (route), as bankruptcy was considered an act of debtor fraud. As Pomykala points out, before the mid-18th century, bankruptcy was considered a crime, and in England, certain bankrupt debtors were subject to capital punishment. The U.S. modified English law to be less harsh.
For example, the Pennsylvania Bankruptcy Act of 1785 allowed those convicted of bankruptcy to be flogged while nailed by the ear to a pillory, after which the ear would be cut off. (Of course, how much more lenient this was is clearly debatable.) Bankruptcy protection has been part of U.S. federal law since 1898. Indeed, Article I, Section 8 of the U.S. Constitution authorizes Congress to enact “uniform Laws on the subject of Bankruptcies” (see the article by Leonidas Mecham). The structure of the current bankruptcy system was established in the Bankruptcy Reform Act of 1978. The idea is to allow a “fresh start” (within limits) for honest people who, often through unfortunate circumstances beyond their control, have gotten into trouble with debt, and to allow creditors to be repaid in an orderly fashion out of the debtor’s available assets.

The bankruptcy provisions allow for a sharing of risk between borrowers and creditors, offering some insurance to borrowers if they find themselves unable to repay their debts. The insurance gives consumers whose income may be low today but is expected to rise in the future the confidence to borrow now to pay for consumption. This raises consumers’ economic well-being. If things go as planned, they will repay their debts. If some adverse event, like a job loss, prevents them from repaying, they can file for bankruptcy and protect their future income from creditors. Bankruptcy procedures thus better enable consumers to smooth their consumption over time, thereby increasing economic efficiency. The bankruptcy system also provides a joint debt-collection system for a debtor’s creditors (see the CBO study for a nice review of personal bankruptcy fundamentals). A bankruptcy system that is too harsh would lower economic welfare by discouraging borrowing and risk sharing.
However, a system that is too lenient and allows debtors to escape from their commitments too easily can hurt economic efficiency by causing creditors to restrict the supply of credit and raise its cost to creditworthy borrowers.

The federal bankruptcy courts administer the bankruptcy system, and all bankruptcy cases are filed in these federal courts. There are 94 bankruptcy districts in the U.S. and its territories, each with a court. Pennsylvania is divided into three districts (eastern, middle, and western), while New Jersey and Delaware are each a single district. The courts in all three states (along with the District of the Virgin Islands) are part of the third federal circuit. A bankruptcy case is often overseen by a court-appointed trustee.

Under current law, individuals considering bankruptcy can file under Chapter 7 or Chapter 13 of the bankruptcy act.1 Chapter 7, sometimes called straight bankruptcy, is liquidation: a filer hands over his assets (with some exemptions) to the trustee, who then sells the assets and uses the proceeds to repay the debtor’s creditors. The remaining debts (with some exceptions) are then discharged — that is, wiped clean — and the debtor retains control of his or her future income.2 In many cases there are few assets available to repay creditors, and most unsecured debt, such as credit card debt, is not repaid in bankruptcy (see the CBO study). In other cases, a debtor may want to keep control of an asset, like a car, that is pledged as collateral against a loan. In this case, the debtor can “reaffirm” the debt: the debtor and creditor agree that the debtor will pay the creditor all or part of the debt, even though the debtor has filed for bankruptcy, and the creditor will not repossess the property. A person can file for bankruptcy under Chapter 7 every six years.

Chapter 13 involves adjustment of the debts of an individual with regular income.
Under this chapter, some debts are reduced, but then debtors and creditors devise a plan by which the debtors repay their remaining obligations out of their future incomes. Repayment is made in installments over a three-year period (which the court can extend to five years). The debtor must repay creditors at least as much as they would have received under Chapter 7, and claims entitled to priority must be paid in full. The debtor cannot take on any new debt without the trustee’s approval, and the debtor’s secured and unsecured debt must be less than certain specified limits for him or her to file under Chapter 13. In exchange, debtors retain more of their assets than they would under Chapter 7. At the end of the repayment period, any remaining unpaid debt is discharged. Generally, someone can file for bankruptcy under Chapter 13 as often as he or she wants (except that, in some cases, the filing cannot be within 180 days of dismissal of a previous case).

1 They may also file under Chapter 11, but this is rare. Chapter 12, which is similar to Chapter 13, applies only to family farmers in financial distress. See Report 106-49 on S. 625, U.S. Senate, and Mecham for fuller descriptions of the U.S. personal bankruptcy system.

2 According to Mecham, 18 categories of debt cannot be discharged under Chapter 7; there are fewer restrictions under Chapter 13. Certain types of tax claims, debts for spousal or child support or alimony, and debts for willful and malicious injuries to person or property are examples of nondischargeable debts.

As part of the filing under either chapter, the debtor submits a list of his assets, income, liabilities, creditors, and debts. After a debtor files under either chapter, an automatic stay stops creditors from collecting on unpaid debts.
This prohibits creditors from filing lawsuits against the debtor for repayment, trying to garnish the debtor's wages, or making telephone calls demanding repayment (see Understanding the Federal Courts).

WHY THE CALL TO REFORM?
Bankruptcy protection is designed to help people get out from under the burden of excessive debt and to get a fresh start. Some people, though, point to abuses of the system: Debtors who actually have the ability to repay sometimes escape their obligations. Knowing this escape route exists may give borrowers an incentive to take on more debt, to the extent that lenders are willing to supply them credit. Of course, lenders should respond to a lenient bankruptcy law that permits many debts to be discharged by restricting credit or raising the cost of credit so that only the better credit risks could borrow. A bankruptcy system that is too lenient would lead to economic inefficiencies whereby the supply of credit was too restricted and its cost too high.

Over the years, changes have been made to the system to try to limit the scope for abuse. For example, in 1984, judges were permitted to dismiss Chapter 7 cases if they thought granting relief would constitute a "substantial abuse" of the bankruptcy code. But the term "substantial abuse" was not defined, and creditors and trustees were not allowed to present evidence to the judge in a particular case on whether relief should be viewed as substantial abuse (see Report 106-49, Senate).

Filings Have Increased in Recent Years. One factor pointed out in the most recent round of legislative consideration of the bankruptcy system was the substantial increase in the number of filings that began in the 1980s.
Total bankruptcy filings, including personal and business filings, hit a record 1.43 million in the year ended June 1998, and they have been very high since then, with 1.39 million filings in the year ended June 2001 (Figure 1).3 Indeed, the 400,000 filings in the second quarter of 2001 was the most ever for a three-month period. Over 97 percent of these filings are personal as opposed to business, and 70 percent of personal filings per year are typically Chapter 7 filings (Figure 2).4

FIGURE 1. Annual Number of Bankruptcy Filings (12-month periods ending in June). Note: Shaded areas represent economic recessions. Source: Bankruptcy data from Administrative Office of the U.S. Courts.

FIGURE 2. Number of Personal Bankruptcy Filings, Total and by Chapter (12-month periods ending in June), in thousands. The data labels shown in the figure:

Year ended June    Total    Chapter 7    Chapter 13
1998               1379     985          394
1999               1352     969          382
2000               1240     864          375
2001               1349     951          398

Source: Administrative Office of the U.S. Courts.

3 The number of personal filings before and after 1979 are not directly comparable because the Bankruptcy Reform Act of 1978 allowed for spouses to file a joint petition for bankruptcy protection. See the CBO study for further discussion.

4 Note that some of the personal filings could actually represent business failures because some small businesses are funded by the personal credit lines of their owners. See Appendix A of the CBO study for further discussion of the data on personal bankruptcy filings.

Figure 3 shows the number of personal bankruptcies per thousand households (the bankruptcy rate) in the three states in the Third Federal Reserve District (Delaware, New Jersey, and Pennsylvania) and in the U.S.5 As you can see, these numbers ranged between nine and 13 bankruptcies per 1000 households in 2001.
In other words, in 2001, 0.9 to 1.3 percent of households filed for bankruptcy in the nation and in our three states. (Note that there is much wider variation in the personal bankruptcy rate across the other states in the U.S. In 2001, Tennessee, which had 24.5 filings per thousand households, had the highest rate; Iowa, which had 4.1 filings per thousand households, had the lowest rate. See the Table.)

The number of personal bankruptcy filings began accelerating in the 1980s, with an especially large increase between 1995 and 1998.6 For example, from 1961 to 1980, the personal bankruptcy rate rose, on average, about 3 percent per year. Since 1980, the average increase in the personal bankruptcy rate has been almost 8 percent per year, with an especially sharp increase of 14 percent per year between 1995 and 1998 (Figure 4). We can look at the increase another way: In 1980, there was one personal bankruptcy filing for every 336 households in the U.S.; in 2001, there was one personal bankruptcy filing for every 78 households.

Some of the rise in bankruptcies in the 1980s can be attributed to bad economic times: There was a short recession from January 1980 to July 1980 and a long one from July 1981 to November 1982 (recessions are shown by shaded bars in the figures). But the rapid rise in the bankruptcy rate in the 1990s is more difficult to understand, since this was a period of very good economic conditions — economic growth averaged 3.2 percent per year in the 1990s, and the unemployment rate had fallen to 4 percent by the end of the decade. Even this is not unprecedented: The rate of bankruptcy filings rose rapidly in the mid-1980s in the midst of an economic expansion, and it has risen in other periods of economic expansion as well. (Note that the rise is not necessarily a bad thing, as it accompanied an increase in credit availability to households.)

5 Delaware has about 300,000 households, New Jersey about 3.1 million households, Pennsylvania about 4.8 million households, and the U.S. about 106 million households.

6 In contrast, business filings, which increased in the early 1980s, fell back in the latter half of the 1980s and in the 1990s.

FIGURE 3. Personal Bankruptcy Rate, Total and by Chapter (12-month periods ending in June). Note: Sum of Chapter 7 and Chapter 13 does not equal total because there are a few personal business filings under Chapter 11. Personal bankruptcy rate is the number of personal bankruptcy filings per thousand households. Sources: Administrative Office of the U.S. Courts and U.S. Census.

FIGURE 4. Personal Bankruptcy Rate and Growth in Personal Bankruptcy Rate (12-month periods ending in June). Note: Shaded areas represent economic recessions. Personal bankruptcy rate is number of personal filings per year per 1000 households. Sources: Bankruptcy data from Administrative Office of the U.S. Courts; household data from the Bureau of the Census, www.census.gov.

TABLE. Personal Bankruptcy Rate by State in Year Ended June 2001 (Number of Personal Bankruptcies per 1000 Households)

State              Nonbusiness Filings   Households (2000 Census)   Rate per 1000
Tennessee          54,730                2,232,905                  24.51
Utah               16,915                701,281                    24.12
Georgia            63,800                3,006,369                  21.22
Nevada             15,833                751,165                    21.08
Alabama            36,116                1,737,080                  20.79
Mississippi        20,561                1,046,434                  19.65
Arkansas           19,466                1,042,696                  18.67
Indiana            42,537                2,336,306                  18.21
Maryland           32,956                1,980,859                  16.64
Idaho              7,578                 469,645                    16.14
Oklahoma           20,892                1,342,283                  15.56
Washington         34,087                2,271,398                  15.01
Louisiana          24,730                1,656,053                  14.93
Kentucky           23,609                1,590,647                  14.84
Oregon             19,667                1,333,723                  14.75
Illinois           66,817                4,591,779                  14.55
Virginia           38,361                2,699,173                  14.21
Ohio               61,906                4,445,773                  13.92
West Virginia      9,630                 736,481                    13.08
New Jersey         39,575                3,064,645                  12.91
Missouri           27,989                2,194,594                  12.75
California         143,174               11,502,870                 12.45
Florida            78,702                6,337,929                  12.42
Wyoming            2,343                 193,608                    12.10
Kansas             12,554                1,037,891                  12.10
Hawaii             4,712                 403,240                    11.69
Arizona            22,036                1,901,327                  11.59
Rhode Island       4,690                 408,424                    11.48
New Mexico         7,420                 677,971                    10.94
Michigan           41,251                3,785,661                  10.90
Colorado           16,921                1,658,238                  10.20
Pennsylvania       47,708                4,777,003                  9.99
Montana            3,578                 358,667                    9.98
Washington, DC     2,410                 248,338                    9.70
North Carolina     30,215                3,132,013                  9.65
Nebraska           6,382                 666,184                    9.58
Wisconsin          19,624                2,084,544                  9.41
Delaware           2,730                 298,736                    9.14
New York           63,642                7,056,860                  9.02
Texas              66,290                7,393,354                  8.97
Connecticut        11,144                1,301,670                  8.56
South Carolina     12,868                1,533,854                  8.39
Maine              4,198                 518,200                    8.10
Minnesota          15,216                1,895,127                  8.03
North Dakota       2,021                 257,152                    7.86
South Dakota       2,228                 290,245                    7.68
New Hampshire      3,545                 474,606                    7.47
Massachusetts      16,676                2,443,580                  6.82
Vermont            1,580                 240,634                    6.57
Alaska             1,361                 221,600                    6.14
Iowa               9,662                 2,336,306                  4.14
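The bankruptcy rates quoted here are simple arithmetic: filings divided by households, scaled to a thousand. As a check, they can be reproduced directly from the state figures in the table (the function names below are just illustrative):

```python
# Reproduce the personal bankruptcy rate and the "one filing for every N
# households" figures discussed above. Inputs are taken from the table
# for the year ended June 2001.

def rate_per_1000(filings, households):
    """Personal bankruptcy filings per thousand households."""
    return round(filings / households * 1000, 2)

def households_per_filing(filings, households):
    """How many households there are for each bankruptcy filing."""
    return round(households / filings)

print(rate_per_1000(54_730, 2_232_905))          # Tennessee: 24.51
print(rate_per_1000(9_662, 2_336_306))           # Iowa, as tabulated: 4.14
print(households_per_filing(54_730, 2_232_905))  # Tennessee: one filing per 41 households
```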
Households did increase their borrowing in the 1990s, and the rise in household debt-service burdens — that is, required payments on mortgages and other consumer debt as a percentage of disposable income — in that decade may explain part of the rise in the bankruptcy rate (Figure 5). Yet the debt-service burden was at comparable levels in the mid-1980s, and the bankruptcy rate was much lower.7 So factors other than debt-service burdens appear to play some role in the decision to file. The most recent increase in filings in the first half of 2001 might be the result of pending legislation to change the bankruptcy system, as people contemplating bankruptcy may have accelerated their filings to get in under the old rules.

Still, the fact that the bankruptcy rate rose in the late 1990s during good economic times and remains high is taken by many as an indication that the system is being abused by people who actually have the wherewithal to repay their debts. This view has led many observers to believe that the bankruptcy system needs to be revamped and has led to proposed legislation to change the system. Whether changes to the bankruptcy system will have much of an effect on the rate of filings depends both on what changes will be enacted and whether the bankruptcy system itself has encouraged filings.

7 There have been periods — for example, between 1988 and 1991 — when the debt-service burden and filings moved in opposite directions.

BANKRUPTCY REFORM LEGISLATION
Although there have been many attempts to pass legislation over the past several years, bankruptcy reform legislation has yet to be signed into law. In March 2001, the House and Senate passed their own versions of bankruptcy reform legislation (HR 333 and S 420); similar bills were passed the previous year by the 106th Congress, and hearings were held in 1997 by the 105th Congress.
Last year, Congress adjourned before reconciliation of the bills could be completed, but bankruptcy reform legislation is again being considered this year. While the versions passed by the House and the Senate in 2001 differed in some ways, those differences have narrowed as legislation has worked its way through several sessions of Congress. In last year's bills, there was general agreement on basic aspects of reform. Here I'll review eight proposed changes to the bankruptcy system. The first five of these reforms favor creditors by limiting the benefits to debtors from declaring bankruptcy. The last three reforms might be considered debtor protections.

(1) Chapter 7 Means Testing. The bankruptcy system would be changed to what proponents of the bills call a "needs-based" system. If a debtor has sufficient income to repay a large part of his or her debts, he or she could not pursue Chapter 7 liquidation but only a Chapter 13 repayment plan.

FIGURE 5. Personal Bankruptcy Rate and Debt-Service Burden. Note: Shaded areas represent economic recessions. Debt-service burden is household required payments on mortgage and consumer debt as a percentage of disposable personal income. Quarterly personal bankruptcy rate is quarterly filings per thousand households; the sum of quarterly filings equals annual filings. Correlation between the debt-service burden and the quarterly personal bankruptcy rate was 0.84 from 1980 to 1987, -0.71 from 1988 to 1991, and 0.88 from 1992 to 2000. Sources: Bankruptcy data from Administrative Office of the U.S. Courts; debt-service burden data from the Federal Reserve Board.

A means test would be applied to determine which debtors would be forced into Chapter 13 and how much debt would have to be repaid over a five-year period. Debate has centered on whether means testing is necessary. Creditors favor such testing, saying that some debtors have abused the current bankruptcy system.
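Using the thresholds described in the 2001 bills (repayment capacity of at least $100 a month over a five-year period, with debtors earning less than their state's median income exempt), the gatekeeping logic of the means test can be sketched roughly as follows. The function and variable names are illustrative, and the actual test relied on detailed IRS expense standards rather than a single expense figure:

```python
# Rough sketch of the 2001 bills' Chapter 7 means test as described in the
# text. A simplification for illustration, not the statutory computation.

def chapter7_allowed(monthly_income, allowed_monthly_expenses,
                     state_median_annual_income):
    """Return True if the debtor could still choose Chapter 7 liquidation."""
    # Debtors earning less than the state median qualify for Chapter 7
    # regardless of their ability to repay.
    if monthly_income * 12 < state_median_annual_income:
        return True
    # Otherwise, a debtor who could repay at least $100 per month over a
    # five-year period, after allowed expenses, is steered into Chapter 13.
    surplus = monthly_income - allowed_monthly_expenses
    return surplus < 100

print(chapter7_allowed(2500, 2500, 40000))  # below the median: True
print(chapter7_allowed(4000, 3850, 40000))  # $150/month surplus: False
```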
Consumer groups say only 3 percent to 5 percent of Chapter 7 filers would have to repay some of their debt under the proposed means tests, and they argue that reform is not necessary, since the rate of filings dropped in 1999 after the record rate in 1998. As discussed below, the arguments for and against means testing are in dispute.

The 2001 bills barred Chapter 7 filing if, after living expenses and the cost of other necessities, the debtor could afford to repay at least $100 per month over a five-year period. The bills generally used the standards the IRS uses to figure out living expenses for people owing back taxes, but the bills specified the use of actual costs of other necessities (for example, child care, union dues, and so forth). Some income, such as Social Security and war crimes compensation, would be excluded from the calculations. Those earning less than the median income in the applicable state would qualify for Chapter 7 regardless of their ability to repay.

Under current law, if a debtor files under Chapter 13, the repayment period is three years unless the court, for cause, extends it to a maximum of five years. This provision would remain the same for debtors whose family income is less than the median family income in the applicable state. However, for families with higher income (who would be forced to file under Chapter 13 by the means test), the repayment plan would be extended to five years.

(2) Nondischargeable Debts. Bankruptcy courts currently presume that if a debtor bought more than $1000 in luxury goods or services or took $1000 in cash advances in an open-ended credit plan within 60 days before filing for bankruptcy, these debts were fraudulently incurred, and so they are not dischargeable. Any debt incurred to pay an existing nondischargeable debt is nondischargeable if it was incurred with the intent of not repaying. Nondischargeable debts include certain taxes, family support obligations, and debts arising from fraud.
The proposed legislation would make more debts nondischargeable in bankruptcy. The 2001 bills extend the definition of fraudulently incurred (and, therefore, nondischargeable) debt. In the House bill, the threshold for presumption of fraudulent purchases of luxury goods was lowered to $250 within 90 days of filing for bankruptcy, and the threshold for cash advances was lowered to $750 within 70 days of filing. The Senate bill agreed with the House, except that the threshold for luxury goods was lowered only to $750.

(3) Homestead Exemption. Many states allow a debtor to keep possession of his or her residence (up to some limit) when filing for bankruptcy; the residence would not be available to pay off creditors. Five states (Florida, Iowa, Kansas, South Dakota, and Texas) put no limit on the exemption for a primary residence; other states are quite restrictive (for example, New Jersey allows no homestead exemption).8 One of the major differences between the House and Senate bills passed last year was their treatment of the homestead exemption. The Senate version would put a federal cap of $125,000 on the exemption for a primary residence. This cap would apply to all states. The House bill maintained states' ability to opt out of the federal limit and reestablish an unlimited or other exemption by passing legislation.9 Both bills lengthened the time a debtor must live in a state before being able to claim that state's exemption from six months to two years.

8 These data are as of January 1, 2000. See footnote 13, page 490 of Report 107-3, House of Representatives.

(4) Lien Strip-Down. Currently, debts are secured only to the value of the collateral, with any remainder treated as unsecured debt. Thus, a debtor could pay the amount of the collateral's current market value and keep the collateral — this is a strip-down.
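In balance-sheet terms, a strip-down simply splits the claim at the collateral's current market value; a minimal sketch (the function name is illustrative):

```python
# Split a secured claim at the collateral's market value, as in a
# strip-down: the claim is secured only up to the collateral's value,
# and the remainder is treated as unsecured debt.

def strip_down(loan_balance, collateral_value):
    secured = min(loan_balance, collateral_value)
    unsecured = loan_balance - secured
    return secured, unsecured

# A $10,000 car loan against a car now worth $6,000: paying $6,000 keeps
# the car, and the remaining $4,000 becomes unsecured debt.
print(strip_down(10_000, 6_000))  # (6000, 4000)
```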
For example, a debtor could purchase a car, file for bankruptcy, and keep the car by paying off the value of the car, even though this is less than the amount he contracted to pay in the loan. Debtors who bought furniture, which has little resale value, are able to keep the furniture by repaying the low resale price rather than paying off the much larger debt. Consumer groups argue that changing the rules to allow creditors to repossess the collateral would force reaffirmations, wherein debtors agree to pay certain debts and not discharge them in bankruptcy.

The 2001 legislation sought to limit strip-downs. The House bill barred strip-downs for a motor vehicle acquired by the debtor within five years prior to filing bankruptcy and for other personal property bought within one year prior to filing. The Senate bill differed from the House bill in setting the period for motor vehicles at three years prior to filing.

9 Prior to 1978, there was no federal exemption; the states controlled exemptions. The Bankruptcy Reform Act of 1978 set federal exemptions for certain assets, including personal goods, tools of trade, autos, and homesteads. But individual states were allowed to opt out and set their own limits, and by 1983, all of them had. The Bankruptcy Reform Act of 1994 raised the federal exemption level and tied it to the Consumer Price Index starting in 1998. As of January 1, 2000, the federal exemption was $16,150 per debtor; 35 states had set their own exemption levels and did not allow the federal exemption; and the remaining states allowed debtors to choose between their state exemption or the federal exemption when filing for bankruptcy. See footnote 13, page 490 of Report 107-3 Part 1, House of Representatives, and the study by Jon Nelson.

(5) Repeat Filings. Currently, debtors can file for Chapter 13 bankruptcy at any time (except, in some cases, within 180 days of a prior dismissal).
Debtors can file for Chapter 7 bankruptcy six years after a discharge in bankruptcy. The legislation would have limited repeat filings. Both bills would have barred a Chapter 7 filing within eight years of a prior discharge. The Senate bill would have barred a Chapter 13 discharge within three years of a prior discharge under a Chapter 7, 11, or 12 filing, and within two years of a prior discharge under Chapter 13. The House bill was harsher, disallowing a Chapter 13 discharge within five years of any prior discharge.

(6) Reaffirmation of Debts. Consumer advocates say creditors are pressuring debtors into reaffirming debts — in other words, pressuring them into saying that they will pay the debt and not discharge it in bankruptcy. Such reaffirmations are supposed to be filed with and approved by the bankruptcy court. But Sears was recently held liable in a class action based on reaffirmations of debt that were not filed with bankruptcy courts. The case involved Sears' pressuring debtors into reaffirming their debts with Sears rather than having them discharged in bankruptcy. Sears admitted to criminal fraud and paid a fine of $60 million. The bills passed in 2001 sought to stem abusive reaffirmation agreements by mandating that certain specific disclosures be made in writing to the debtor, explaining the terms of the underlying credit agreement and the reaffirmation. The bills asked the attorney general to enforce prohibitions against abusive reaffirmations.

(7) Consumer Credit Disclosures. Some argue that creditors have led consumers into bankruptcy by misleading them about the true cost of credit. The legislation sought to make credit card companies disclose more about the consequences to the borrower of making only the minimum monthly payment. The bills would have amended the Truth-in-Lending Act to require disclosures on credit-card bills.
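The payoff horizon such a disclosure describes is a simple amortization loop. The sketch below uses purely illustrative assumptions (an 18 percent APR and a minimum payment of 2 percent of the balance, with a $20 floor); none of these figures come from the bills:

```python
# How long it takes to retire a credit-card balance making only the
# minimum payment each month. The APR and minimum-payment rule are
# illustrative assumptions, not figures from the proposed legislation.

def months_to_payoff(balance, apr=0.18, min_pct=0.02, floor=20.0):
    monthly_rate = apr / 12
    months = 0
    while balance > 0:
        payment = max(balance * min_pct, floor)
        interest = balance * monthly_rate
        if payment <= interest:
            return None  # the balance never shrinks under this rule
        balance += interest - payment
        months += 1
    return months

print(months_to_payoff(1000))
```

On these assumptions, a $1,000 balance takes roughly eight years to retire, which is the kind of fact the proposed disclosures were intended to put in front of cardholders.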
Credit card issuers would have had to provide generic examples of the consequences of making only minimum payments on bills in terms of how long it would take to pay off the debt. Issuers also would have had to give a toll-free number where cardholders could get specific information about repayment scenarios for their own accounts. Enhanced disclosures for home-equity loans, introductory loan rates, and late payment deadlines and penalties would have been required. Both bills would have barred the termination of a credit card just because a customer hadn't incurred a finance charge.

In testimony before the House in March 1999, Federal Reserve Governor Edward Gramlich said the Board of Governors wondered whether repeated disclosures to consumers might create "information overload." He said the Board does not generally favor laws that restrict a creditor's discretion to determine which accounts or transactions it deems economically viable. The Board's view is that if creditors are terminating accounts of customers who use their credit cards only for transactions purposes, they are doing so because they consider these accounts unprofitable.

(8) Credit Counseling. Provisions are included to address the issue of unsophisticated borrowers. These provisions are intended to ensure that the debtor made a good-faith effort to negotiate a repayment plan with his creditors on his own. However, the requirement might also delay filings until debtors are unable to come up with a repayment plan in Chapter 13. Both bills required the debtor to have undergone credit counseling within 180 days before filing under Chapter 7 or Chapter 13.

As suggested by this overview, while the House and Senate bills passed in 2001 differed in some of their details, there was general agreement on major items of reform.
The real question is whether these reforms are necessary and, if so, whether they will be effective.

EVIDENCE RELEVANT TO BANKRUPTCY REFORM
There have been several studies of personal bankruptcy. The implications of the empirical work to date are mixed as to whether the proposed bankruptcy reform is needed and whether it will have the intended effect. Partly at issue is the extent to which borrowers respond to the incentives provided by the bankruptcy law — borrowing more than they otherwise would and declaring bankruptcy even though they could eventually repay their debts — or, alternatively, whether they declare bankruptcy when they face an unexpected hardship that makes it impossible for them to repay their debts.

There are four main issues. First, I'll discuss empirical evidence on the source of the recent rise in bankruptcies and the extent to which market forces place limitations on the number of bankruptcies. If these forces are effective, reforms are less necessary. Similarly, I'll discuss empirical evidence on the relationship between social forces (stigma) and the number of bankruptcy filings. A lessening in the effectiveness of these social forces would help support arguments for reform. Next I'll discuss evidence on the extent to which the current bankruptcy system is being abused. High levels of abuse favor the reformers. Finally, I'll discuss evidence concerning the efficacy of proposed reforms. Even if one believes the current system needs reform, it is not clear that the proposals will yield the desired effect.

(1) Market Forces. In a study done in 2000, Lawrence Ausubel argued that market forces have tempered some of the recent acceleration in personal bankruptcy filings and that legislation is unnecessary. As the bankruptcy rate increased, lenders responded by tightening their credit standards, thus leading to the decrease in the number of bankruptcies between 1998 and 1999.
In Ausubel's view, if there ever was a "bankruptcy crisis," it is self-correcting. In congressional testimony in 1998, Ausubel argued against the means-test approach because, in his view, the immediate cause of the record number of bankruptcies is the high level of household debt, which he attributes in part to aggressive lending tactics. In a 1999 study, Ausubel found that in randomized trials on preapproved credit-card solicitations conducted by a major U.S. issuer of credit cards, offers that included higher interest rates and fees tended to attract riskier borrowers with higher delinquency, charge-off, and bankruptcy rates than offers with better terms. In other words, issuers face a so-called adverse selection problem.10

10 Adverse selection in the credit-card market has also been documented in my paper with Paul Calem. The fact that credit-card rates are very sticky and don't tend to come down when other rates do is partly attributable to this adverse selection problem.

Ausubel places some of the blame for higher bankruptcies on the creditors who have issued the debt. He favors the approach of earlier proposed legislation that would restrict the claims of lenders who caused a debtor's ratio of unsecured debt to income to exceed 40 percent. He also favors a time priority in bankruptcy: unsecured lenders would be repaid in the order in which they lent, with the earliest lender repaid first and the latest lender repaid last, giving the later lenders more incentive to monitor the borrower's credit position.

Joanna Stavins reviewed studies that have shown riskier borrowers have gotten better access to credit-card loans over time, noting that the rise in credit-card borrowing in the mid-1990s coincided with the increase in bankruptcy filings.
Using data from the Terms of Credit Card Plans, a survey of about 200 of the largest bank credit-card issuers conducted twice a year by the Federal Reserve Board, Stavins provided empirical evidence that credit-card issuers that offer higher rates and fees in order to compensate for higher risk do tend to experience higher delinquency rates (measured by the fraction of outstanding credit-card loans 60 days or more overdue), a finding similar to Ausubel's. But unlike Ausubel, Stavins found that these issuers did not seem to have higher charge-off rates (the fraction of outstanding credit-card loans that are written off). Indeed, her empirical results showed that banks that charged higher rates and fees earned higher net income from credit cards than banks that charged lower fees. This implies that at least over the period covered (1990-99), when the economy was in good shape, it was profitable for issuers to extend credit to riskier borrowers. Whether that would continue to be true in an economic downturn remains to be seen.

The relationship between debt levels and bankruptcies is more complicated than the studies by Ausubel and Stavins might suggest. There is no doubt a strong correlation between debt burdens (measured by the debt-to-income ratio, debt-to-assets ratio, or debt-service burden) and the number of bankruptcies, since a higher debt burden means a negative shock can have a more severe effect on a household. However, we also know that in the aggregate, debt and debt-to-assets seem to increase after households see their incomes rise and expect their future incomes to rise. Debt seems to facilitate growth rather than inhibit it.
Debt levels have been expanding not only because of aggressive lending but also because of the expanding economy and because technological advances have allowed creditors to offer loans to more borrowers at a lower cost (see my 1997 article on credit scoring).

(2) Stigma. Another factor that may have contributed to the increase in the bankruptcy rate is the decreased social stigma associated with declaring bankruptcy. Certainly, bankruptcy continues to have a negative connotation. It can harm a person's reputation, and it can make it more difficult to gain access to credit in the future. Federal law allows credit bureaus to continue to report a bankruptcy filing in the person's credit report for up to 10 years after the filing. A study by David Musto showed that this does restrict the person's access to credit. His work using credit-file data from 1994-97 showed that when the bankruptcy flag was removed from the report, the more creditworthy past filers initiated new credit relationships, especially high-limit credit cards, at a much faster rate than normal — evidence that the flag was a constraint on their getting credit.11

Nevertheless, some of the negative effects from filing may have declined over time. Filing for bankruptcy may be more accepted these days, since it has become more common. There are other costs associated with filing that may have fallen as the number of filings has risen. For example, it is easier to find information on how to file (the forms and the information are readily available on the Internet); more people have experienced bankruptcy, so there are more people who can give advice; and there are more bankruptcy lawyers competing for business.
In an interesting study, David Gross and Nicholas Souleles assembled a panel of over 25,000 individual credit-card accounts, chosen to be representative of all open accounts in June 1995. They studied the behavior of these accounts for the next 24 months or until they first defaulted or were closed in good standing, in an attempt to see whether the recent increase in bankruptcies is better explained by supply effects or demand effects. That is, did lenders increase the supply of credit to less creditworthy borrowers, who account for the increase in bankruptcy filings? Or, even after researchers control for creditworthiness, have people become more willing to default over time? Has their demand for bankruptcy increased?

11 Similarly, Stavins presented some data from the Survey of Consumer Finances for 1998 indicating that among those who have ever filed for bankruptcy (8.51 percent of respondents), the average level of credit-card debt is largest for those who filed nine or more years ago. But she also found that the average credit-card debt for someone who filed one or two years ago was higher than for those who filed three to nine years ago. Stavins posited that this might be because once someone files under Chapter 7, he or she cannot file again for six years, so issuers might feel relatively safe lending in the initial period after a filing.

According to Gross and Souleles' estimates, riskier borrowers — for example, those with lower credit scores, larger credit card balances, and smaller monthly payments — are much more likely to default. Default rates were also higher for people living in states where unemployment was higher, house prices were lower, and fewer residents had health insurance.12 The authors documented that there was an increase in credit to riskier borrowers. But increases in credit limits and other changes in risk composition explain only a small part of the significant increase in default rates between 1995 and 1997.
They found that all accounts, even those with the same risk characteristics, age, and other economic fundamentals, became more likely to default over the sample period. And this increase in the probability of bankruptcy — about 0.06 percentage point per month between the start of the sample period in June 1995 and its end in June 1997 — is comparable to what would occur if the credit score of every account in the sample were reduced by one standard deviation, which would be a very large increase in the overall riskiness of the sample. While not conclusive, this evidence is consistent with the stigma hypothesis, that is, that a decline in the cost of declaring bankruptcy — either the social, legal, or information-gathering costs — is largely responsible for the increased level of filings.13

12 Gross and Souleles also documented a seasoning effect: They found that the probability of delinquency rises from the time the account is opened until it is about two years old; then the probability falls.

(3) Abuse. A basic premise of bankruptcy reform legislation is that the current system is being abused by people who declare bankruptcy when they can still afford to repay their debts. Here the evidence is very mixed, and the results depend on the various assumptions made about the type of means test that would be enacted and the way different types of debt would be handled.

A 1997 study by John Barron and Michael Staten published by the Credit Research Center at Georgetown University estimated that if all secured debt was reaffirmed, about 32 percent of Chapter 7 debtors in their sample could repay about 31 percent of their nonhousing, nonpriority debts. This would be an average payment per filing of $3570 over five years. The study was based on a sample of 3798 families who filed for bankruptcy in 13 major U.S. cities during the spring and summer of 1996.
13 There is a debate in the legal literature about whether an economic modeling approach or a sociological approach is the appropriate methodology for studying bankruptcy decisions, especially when stigma is the factor being investigated (see Michelle White’s 1997 article for an interesting discussion). The economic modeling approach, favored, for example, by White (1997), assumes that consumers act to maximize their welfare and, therefore, may act strategically when it comes to filing for bankruptcy, as we’ll discuss below. The sociological approach, favored, for example, by Teresa Sullivan, Elizabeth Warren, and Jay Westbrook (1989) and Rafael Efrat (1998), assumes that people file for bankruptcy when their financial problems become severe enough that they can no longer handle their debt; it is not something they anticipate or plan for, and they do not act strategically regarding filing for bankruptcy. My training puts me in the economic modeling camp; hence, the studies I review here largely follow that approach.

It was not a nationally representative sample, and a review by the General Accounting Office disputed some of the study’s findings on a number of methodological grounds. For one thing, the study used the information debtors provided at the time of filing about their income, expenses, and debts without verification and assumed that the filer’s ratio of income to expenses remained constant over the five-year repayment period. Visa and MasterCard have funded several studies, including three by Ernst & Young (by Tom Neubig and co-authors).
The latest of these studies, published in March 1999 and based on a nationally representative sample of 1997 filings, estimated that 10 percent of Chapter 7 debtors would be affected by a means test for ability to pay either 25 percent or more of unsecured debts or $5000 over five years. This would yield $3 billion in debt recovery over five years. But these results assumed that the debtors remained in payment plans for the full five years and that their incomes rose as fast as their expenses and debt (Report 106-49, Senate, p. 88).

Marianne Culhane and Michaela White got significantly different results from the Ernst & Young studies. Their results were based on a different sample of bankruptcy filings (from 1995) and different assumptions about, among other things, how long debtors remain in their payment plans and debtors’ automobile expenses. They estimated that 3.6 percent of Chapter 7 debtors could afford to repay some of their debts, with total recoveries of $450 million over five years. Because of their different sample, even when Culhane and White changed their assumptions to those used by Ernst & Young, they still estimated significantly lower recoveries — $930 million — than Ernst & Young.

One of the drawbacks of many of these types of studies is that they assume lender behavior would remain the same after bankruptcy reform. But if lenders lend even more aggressively, the number (and cost) of bankruptcies might increase after reform. The studies also assume that borrower behavior would remain the same, but models suggest that this need not be the case.14 For example, one potential drawback of the means test as proposed in the legislation is that shifting debtors with incomes above a certain threshold from Chapter 7 to Chapter 13 essentially imposes a high tax on future earnings, since Chapter 13 requires debtors to use some portion of their future earnings to repay their debt.
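A means test of the kind described above can be sketched in a few lines. This is a minimal illustration, not the statutory test: the threshold logic (25 percent of unsecured debt or $5000, whichever is reached first), the constant income-to-expense gap, and all dollar figures are hypothetical assumptions for this sketch only.

```python
# Illustrative sketch of a means test like the one described above:
# a Chapter 7 filer is flagged if projected disposable income over a
# five-year plan could cover 25 percent of unsecured debt or $5,000.
# All figures are hypothetical; this is not the statutory test.

PLAN_MONTHS = 60  # five-year repayment plan

def disposable_over_plan(monthly_income: float, monthly_expenses: float) -> float:
    """Projected disposable income over the plan, assuming the filer's
    income-to-expense gap stays constant -- the same assumption the
    studies discussed above were criticized for making."""
    return max(0.0, monthly_income - monthly_expenses) * PLAN_MONTHS

def flagged_by_means_test(monthly_income: float, monthly_expenses: float,
                          unsecured_debt: float) -> bool:
    """True if the filer could repay 25% of unsecured debt or $5,000."""
    d = disposable_over_plan(monthly_income, monthly_expenses)
    return d >= 0.25 * unsecured_debt or d >= 5_000.0

# Hypothetical filer: $2,600 monthly income, $2,500 expenses,
# $20,000 unsecured debt. Disposable income over 60 months is $6,000,
# over the $5,000 threshold, so this filer would be shifted to Chapter 13.
print(flagged_by_means_test(2_600, 2_500, 20_000))   # True

# The implicit "tax" on future earnings: earning $100 more per month
# raises the sum devoted to repayment by $6,000 over the plan.
print(disposable_over_plan(2_700, 2_500) - disposable_over_plan(2_600, 2_500))
```

The second print makes the incentive problem concrete: under a test like this, every extra dollar earned during the plan period translates into a larger required repayment, which is the "high tax on future earnings" noted above.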
Michelle White’s 1999 study pointed out that this sets up perverse incentives, reducing debtors’ willingness to work and even giving them an incentive to quit their jobs to avoid the tax. She and Hung-Jen Wang have proposed combining Chapters 7 and 13 so that debtors filing for bankruptcy would have to use their assets and their future earnings, after certain exemptions, to repay their debts. Their simulations suggest that this system would not have deleterious effects on debtors’ incentives to work.

Wenli Li also developed a general equilibrium model of bankruptcy chapter choice and showed that individuals with fewer assets but higher income were more likely to choose Chapter 7, while those with more assets and lower income were more likely to choose Chapter 13 and to work less.15 The author’s theoretical analysis of proposed reforms indicated that they would affect borrowers differently, depending on their level of wealth and income. For example, a means test that shifts filers into Chapter 13 would hurt those with few assets and medium income levels. Anticipating this, filers with a higher probability of filing for Chapter 13 would reduce their borrowing but also work less hard. This has important implications for trying to assess the economic benefits of bankruptcy reform or the amount of debt that borrowers would be able to repay in Chapter 13.

Strategic behavior. A corollary of the premise that people are abusing the bankruptcy system by filing when they can repay is that people act strategically when filing for bankruptcy. But the empirical evidence on just how strategically people are behaving when they file is mixed. “Forum shopping” is one strategy. The exemption levels for personal bankruptcy vary widely across states.
For example, as discussed earlier, a single filer receives no exemption for a home in New Jersey but an unlimited exemption in Texas. So there is an economic incentive to move to a state with a higher exemption before declaring bankruptcy. In their provocative 1999 study, Ronel Elul and Narayanan Subramanian, using data from the University of Michigan’s Panel Study of Income Dynamics (PSID), estimated that about 3 percent of all moves to states with higher exemptions are driven by bankruptcy considerations. This percentage doubles for moves made by households “at risk” for bankruptcy (that is, with an estimated probability of filing for bankruptcy equal to that of the average filer).

Using PSID data for 1984-1995, Scott Fay, Erik Hurst, and Michelle White found evidence that borrowers respond to the incentives to file for bankruptcy. They measured these benefits as the amount of debt that is dischargeable under bankruptcy less any assets over the exemption level, which would have to be given up under bankruptcy. The authors found that for each $1000 increase in benefits, the probability of a household’s filing rises, on average, by 0.021 percentage point, which would imply a 7 percent increase in filings per year.16

14 In addition, these studies do not account for the administrative expenses of imposing a means test.

15 In an empirical study, Ian Domowitz and Robert Sartain found that Chapter 13 filers more often tend to be married and employed and have higher income and higher equity-to-debt ratios than Chapter 7 filers.

16 One drawback of the study is that only 254 households included in the PSID had filed for bankruptcy. The rate of filings in the PSID was only half of the national rate, suggesting that PSID households underreported their bankruptcy filings. See Fay, Hurst, and White and the CBO study for further discussion of this point.

To see what this effect means,
consider that there were about 580,000 personal filings in the year ended June 1989, about the middle point of the period covered in the study. A 7 percent increase in filings would have meant about 40,000 more filings in 1989. If the size of the effect were the same in 2001, an increase of $1000 in benefits would have added about 94,000 filings to the 1.35 million personal filings in 2001.

Culhane and White also studied strategies and found that sophisticated debtors could avoid being classified as having the ability to repay under a means test by taking on more debt or increasing charitable contributions. Note that if such strategic behavior is the rule, then estimates of cost savings under means testing that do not account for these reactions will be overstated. Other studies reviewed by Michelle White (1998b) did not find that personal bankruptcy rates are significantly related to the incentives to file.17 Indeed, the real conundrum might not be why the bankruptcy rate increased so much in the 1990s, but why it didn’t increase more18 and why more people haven’t moved to Texas and Florida, where the homestead exemption is unlimited. Two reasons suggested by White (1998a) include the fact that sometimes creditors do not take legal action against borrowers who default, so borrowers have less incentive to file for bankruptcy protection, and that borrowers may want to preserve the option to file in the future, so they refrain from filing immediately. Another possibility is that there is indeed stigma associated with filing, which deters households from declaring bankruptcy.

(4) Efficacy of Repayment Plans. The reform proposals assume that Chapter 13 repayment plans work. But a 1994 study by the Administrative Office of the U.S. Courts found that only 36 percent of debtors who voluntarily entered Chapter 13 repayment plans between 1980 and 1988 completed their plans. The study did not indicate why 64 percent of the plans failed.
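As a back-of-the-envelope check on the Fay, Hurst, and White figures cited earlier in this section, the implied filing counts can be reproduced directly. One assumption here is mine: the 7 percent figure treats the 0.021-percentage-point rise against a baseline filing rate of roughly 0.3 percent of households per year.

```python
# Back-of-the-envelope check on the filing counts implied by the
# Fay, Hurst, and White estimate discussed above.
# Assumption (mine): the 7 percent increase corresponds to a
# 0.021-percentage-point rise on a ~0.3 percentage-point baseline.

rise_pp = 0.021            # rise in filing probability per $1000 of benefit (pp)
baseline_pp = 0.30         # assumed baseline annual filing rate (pp)
pct_increase = rise_pp / baseline_pp       # ~0.07, i.e., 7 percent

filings_1989 = 580_000     # personal filings, year ended June 1989
filings_2001 = 1_350_000   # personal filings, 2001

print(round(pct_increase * filings_1989))  # ~40,600; the article rounds to 40,000
print(round(pct_increase * filings_2001))  # ~94,500; the article rounds to 94,000
```

The exercise simply scales the national filing totals by the estimated 7 percent response; it matches the article's rounded figures of 40,000 and 94,000 additional filings.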
Only 14 percent of all Chapter 13 cases were converted to Chapter 7. This finding on the efficacy of repayment plans is important, and it runs contrary to the assumption, made in many of the studies that found significant cost savings from bankruptcy reform, that debtors remain in their repayment plans for the full five years.

SUMMARY

Congress continues to try to pass legislation to reform the bankruptcy system. While the rate of bankruptcy filings has risen over the past several years, the reason for that increase is still debatable, and that means the rationale for reform is debatable too. Some proponents of reform argue that people with the wherewithal to repay their debts are taking advantage of the system and that significant cost savings would be forthcoming from reform. Others argue that the real reason bankruptcies have increased is that the level of debt has increased, perhaps because lenders have encouraged risky borrowers to take on excess levels of debt. The empirical work on the causes of and incentives for bankruptcy filings, and on whether the proposed bankruptcy reform will have the desired effect, is decidedly mixed. While this means that proponents on each side of the debate can find the ammunition they need by choosing the right study, it also means that more research is necessary to fully understand the bankruptcy phenomenon. BR

17 White (1998b) and “Notes” review the literature on whether debtors behave strategically regarding bankruptcy decisions. See also the CBO study.

18 Fay, Hurst, and White found that about 18 percent of households would have benefited from filing for bankruptcy over the 1984-94 period, and White (1998a) found that 15 percent of households would have benefited from filing in 1992. Yet, on average, fewer than 1 percent of households filed over these periods.

REFERENCES

Ausubel, Lawrence.
“Testimony Before the Subcommittee on Commercial and Administrative Law of the Committee on the Judiciary of the U.S. House of Representatives,” hearing on Consumer Bankruptcy Issues, March 10, 1998.

Ausubel, Lawrence. “Adverse Selection in the Credit Card Market,” Working Paper, University of Maryland, 1999.

Ausubel, Lawrence. “Personal Bankruptcies Begin Sharp Decline: Millennium Data Update,” manuscript, University of Maryland, January 18, 2000.

Barron, John M., and Michael E. Staten. “Personal Bankruptcy: A Report on Petitioners’ Ability to Pay,” Credit Research Center Monograph #33, Georgetown University, October 1997.

Calem, Paul, and Loretta J. Mester. “Consumer Behavior and the Stickiness of Credit Card Interest Rates,” American Economic Review, 85 (December 1995), pp. 1327-36.

Congressional Budget Office (CBO). “Personal Bankruptcy: A Literature Review,” September 2000.

Culhane, Marianne B., and Michaela M. White. “Taking the New Consumer Bankruptcy Model for a Test Drive: Means-Testing Real Chapter 7 Debtors,” American Bankruptcy Institute Law Review (1999), pp. 27-77.

Domowitz, Ian, and Robert L. Sartain. “Determinants of the Consumer Bankruptcy Decision,” Journal of Finance 54 (February 1999), pp. 403-20.

Efrat, Rafael. “The Moral Appeal of Personal Bankruptcy,” Whittier Law Review, 20 (1998), pp. 141-67.

Elul, Ronel, and Narayanan Subramanian. “Forum-Shopping and Personal Bankruptcy,” manuscript, Wharton School, University of Pennsylvania, July 1999.

Fay, Scott, Erik Hurst, and Michelle J. White. “The Household Bankruptcy Decision,” American Economic Review, forthcoming.

Mester, Loretta J. “What’s the Point of Credit Scoring?” Federal Reserve Bank of Philadelphia Business Review (September/October 1997), pp. 3-16.

Musto, David K. “The Reacquisition of Credit Following Chapter 7 Personal Bankruptcy,” Wharton Financial Institutions Center Working Paper 99-22, June 1999.

Gross, David B., and Nicholas S. Souleles.
“An Empirical Analysis of Personal Bankruptcy and Delinquency,” Review of Financial Studies, 15 (2002), pp. 319-47.

Li, Wenli. “To Forgive or Not to Forgive: An Analysis of U.S. Consumer Bankruptcy Choices,” Federal Reserve Bank of Richmond Economic Quarterly, 87 (Spring 2001), pp. 1-22.

Mecham, Leonidas Ralph. Bankruptcy Basics. Administrative Office of the United States Courts, revised second edition, June 2000.

Nelson, Jon P. “Consumer Bankruptcies and the Bankruptcy Reform Act: A Time-Series Intervention Analysis, 1960-1997,” Journal of Financial Services Research (September 2000).

Neubig, Tom, Gautam Jaggi, and Robin Lee. “Chapter 7 Bankruptcy Petitioners’ Repayment Ability Under H.R. 833: The National Perspective,” Ernst & Young, March 1999, www.ey.com/global/vault.nsf/US/Chapter_7_Bankruptcy_Petitioners_Repayment_Ability_Under_H.R._833:_the_National_Perspective/$file/Report99.pdf.

Neubig, Tom, Fritz Scheuren, Gautam Jaggi, and Robin Lee. “Chapter 7 Bankruptcy Petitioners’ Ability to Repay: Additional Evidence from Bankruptcy Petition Files,” Ernst & Young, February 1998, www.ey.com/global/vault.nsf/US/Chapter_7_Bankruptcy_Petitioners_Ability_to_Repay:_Additional_Evidence_from_Bankruptcy_Petition_Files/$file/feb98_report.pdf.

Neubig, Tom, Fritz Scheuren, Gautam Jaggi, and Robin Lee. “Chapter 7 Bankruptcy Petitioners’ Ability to Repay: The National Perspective, 1997,” Ernst & Young, March 1998, www.ey.com/global/vault.nsf/US/Chapter_7_Bankruptcy_Petitioners_Ability_to_Repay:_the_National_Perspective,_1997/$file/Report97Fin.pdf.

“Notes: A Reformed Model of Consumer Bankruptcy,” Harvard Law Review, 109 (April 1996), pp. 1338-56.

Pomykala, Joseph. “Bankruptcy Reform,” Regulation (Fall 1997), pp. 41-78.

Report 106-49 of the Committee on the Judiciary, Senate, 106th Congress, 1st Session, To Accompany S.
625, Bankruptcy Reform Act of 1999, Together with Additional and Minority Views, May 11, 1999.

Report 107-3, Part 1, of the Committee on the Judiciary, House of Representatives, 107th Congress, 1st Session, To Accompany H.R. 333, Bankruptcy Abuse Prevention and Consumer Protection Act of 2001, Together with Dissenting Views, February 26, 2001.

Stavins, Joanna. “Credit Card Borrowing, Delinquency, and Personal Bankruptcy,” New England Economic Review (July/August 2000), pp. 15-30.

Sullivan, Teresa A., Elizabeth Warren, and Jay L. Westbrook. As We Forgive Our Debtors: Bankruptcy and Consumer Credit in America. New York: Oxford University Press, 1989.

Understanding the Federal Courts. The Federal Judiciary, 1999, www.uscourts.gov/UFC99.pdf.

Wang, Hung-Jen, and Michelle J. White. “An Optimal Personal Bankruptcy Procedure and Proposed Reforms,” Journal of Legal Studies 29 (January 2000), pp. 255-86.

White, Michelle J. “Economic Versus Sociological Approaches to Legal Research: The Case of Bankruptcy,” Law and Society Review, 25 (1997), pp. 685-709.

White, Michelle J. “Why Don’t More Households File for Bankruptcy?” Journal of Law, Economics, and Organization 14 (1998a), pp. 205-31.

White, Michelle J. “Why It Pays to File for Bankruptcy: A Critical Look at Incentives Under U.S. Bankruptcy Laws and a Proposal for Change,” University of Chicago Law Review, 65 (Summer 1998b), pp. 685-732.

White, Michelle J. “Viewpoints: Too Many Incentives to Go Bankrupt,” American Banker, November 19, 1999.