www.clevelandfed.org/research/workpaper/index.cfm

Working Paper 9010

CONSUMPTION AND FRACTIONAL DIFFERENCING: OLD AND NEW ANOMALIES

by Joseph G. Haubrich

Joseph G. Haubrich is an economic advisor at the Federal Reserve Bank of Cleveland. The author would like to thank Andrew Abel, Angus Deaton, Roger Kormendi, Andrew Lo, and seminar participants at the University of Pennsylvania, the Federal National Mortgage Association, and the winter Econometric Society meetings for stimulating discussions.

Working papers of the Federal Reserve Bank of Cleveland are preliminary materials circulated to stimulate discussion and critical comment. The views stated herein are those of the author and not necessarily those of the Federal Reserve Bank of Cleveland or of the Board of Governors of the Federal Reserve System.

September 1990

1. Introduction

Consumption depends on income, so testing theories of consumption involves testing theories of income. A prominent recent example is the work by Campbell and Deaton (1989), which uncovers a paradox. They model income as having a unit root instead of as a fluctuation around a trend, and so they find that consumption looks too smooth: the permanent-income hypothesis does not hold. Like some previous researchers, they find that a difference-stationary process fits the data better than a trend-stationary process. The choice between a difference-stationary process and a trend-stationary process, however, ignores the intermediate class of fractionally differenced processes. Since fractional processes exhibit long-term dependence, they are often classified as having a unit root rather than as trend stationary. This makes permanent income seem rougher than it really is, while consumption, which responds to the true, fractional income, looks too smooth. Specifying consumption correctly removes the paradox.
This paper reviews the techniques of fractionally differenced stochastic processes, calculates the stochastic properties of consumption when income follows a fractional stochastic process, and shows how this may explain the excess-smoothness results.

2. Fractional Methods

Intuition suggests that differencing a time series roughens it, while summing a time series smooths it. A fractional difference between 0 and 1 can be described as a filter that roughens a series less than does a first difference: The series is rougher than a random walk but smoother than white noise. This is apparent from the infinite-order moving-average representation. Let X_t follow

(1 - L)^d X_t = ε_t,    (1)

where ε_t is white noise, d is the degree of differencing, and L is the lag operator. If d = 0, X_t is white noise, and if d = 1, X_t is a random walk. However, as Granger and Joyeux (1980) and Hosking (1981) show, d need not be an integer. The binomial theorem provides the relation

(1 - L)^d = sum over k >= 0 of (d choose k)(-L)^k,

with the binomial coefficient defined as

(d choose k) = d(d - 1)(d - 2)...(d - k + 1)/k!

for real d and nonnegative integer k. Using this definition, the autoregressive (AR) form of X_t follows, with the AR coefficients expressed compactly in terms of the gamma function. Manipulating equation (5) yields the corresponding moving-average (MA) representation of X_t. The time-series properties of X_t depend crucially on the difference parameter, d. For example, when d is less than one-half, X_t is stationary; when d is greater than minus one-half, X_t is invertible (Granger and Joyeux [1980], Hosking [1981]). Likewise, the autocorrelation properties of X_t depend on the parameter d. The MA coefficients, b_k, indicate the effect of a shock k periods ahead and the extent to which current levels depend on past values.
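As a concrete illustration (mine, not the paper's), the MA coefficients just described can be computed from the recursion implied by the binomial expansion; here b_k denotes Hosking's (1981) expression Γ(k + d)/[Γ(k + 1)Γ(d)], and the function name is an illustrative choice:

```python
def frac_ma_coeffs(d, n):
    """MA coefficients b_0, ..., b_{n-1} of (1 - L)^d X_t = e_t,
    i.e. X_t = sum_k b_k e_{t-k}.  Uses the recursion
    b_k = b_{k-1} * (k - 1 + d) / k, which is equivalent to
    Gamma(k + d) / (Gamma(k + 1) * Gamma(d))."""
    b = [1.0]
    for k in range(1, n):
        b.append(b[-1] * (k - 1 + d) / k)
    return b

# d = 0 gives white noise (b_k = 0 for k > 0); d = 1 gives a random
# walk (b_k = 1 for all k); fractional d interpolates between the two.
```

The recursion makes the interpolation explicit: each intermediate d produces coefficients that shrink toward zero, but only hyperbolically.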
Using Stirling's approximation, we know that for large k,

b_k = Γ(k + d)/[Γ(k + 1)Γ(d)] ≈ k^(d-1)/Γ(d).

Comparing this with the decay of an AR(1) process highlights the central "long-memory" feature of fractional processes: They decay hyperbolically, at rate k^(1-d), rather than at the exponential rate, ρ^k, of an AR(1). For example, compare in Figure 1 the autocorrelation function of the fractionally differenced series (1 - L)^0.475 X_t = ε_t with the AR(1) X_t = 0.9X_{t-1} + ε_t. Although both have first-order autocorrelations of 0.90, the AR(1)'s autocorrelation function decays much more rapidly. Figure 2A plots the impulse-response functions of these two processes. At lag 1, the MA coefficients of the fractionally differenced series and the AR(1) are 0.475 and 0.900, respectively; at lag 10, they are 0.158 and 0.349; and at lag 100, they are 0.048 and 0.000027. The persistence of the fractionally differenced series is apparent at the longer lags. Alternatively, we may ask what value of an AR(1)'s autoregressive parameter will, for a given lag, yield the same impulse response as the fractionally differenced series (equation [1]). This value is simply the k-th root of the fractional impulse response when d = 0.475, and is plotted in Figure 2B for various lags. For large values of k, this autoregressive parameter must be very close to unity. These representations also show how standard econometric methods can fail to detect fractional processes. Although a high-order ARMA process can mimic the hyperbolic decay of a fractionally differenced series in finite samples, the large number of parameters required would give the estimation a poor rating from the usual Akaike or Schwarz criteria. An explicitly fractional process, however, captures that pattern with a single parameter, d. Granger and Joyeux (1980) and Geweke and Porter-Hudak (1983) provide empirical support by showing that fractional models often out-predict fitted ARMA models. The lag polynomials A(L) and B(L) provide a metric for the persistence of X_t.
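The hyperbolic-versus-exponential comparison above can be checked numerically. This sketch (again mine, not the paper's) evaluates the fractional MA coefficient through the log-gamma function and sets it against the AR(1) response 0.9^k:

```python
import math

def frac_ma(d, k):
    # b_k = Gamma(k + d) / (Gamma(k + 1) * Gamma(d)), computed with
    # log-gamma for numerical stability at large k
    return math.exp(math.lgamma(k + d) - math.lgamma(k + 1) - math.lgamma(d))

for k in (1, 10, 100):
    frac = frac_ma(0.475, k)   # hyperbolic decay, roughly k**(d - 1)
    ar1 = 0.9 ** k             # exponential decay
    rho = frac ** (1.0 / k)    # AR(1) parameter matching the lag-k response
    print(k, round(frac, 3), round(ar1, 6), round(rho, 3))
```

At lag 100 the matching AR(1) parameter is already near unity, which is exactly why a unit-root specification can masquerade as a fractional one in moderate samples.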
Suppose X_t represents GNP, which falls unexpectedly this year. How much should this decline change a forecast of future GNP? To address this issue, define c_k as the coefficients of the lag polynomial, C(L), that satisfies the relation (1 - L)X_t = C(L)ε_t, where the process X_t is given by equation (1). One measure used by Campbell and Mankiw (1987) is

B_k = c_0 + c_1 + ... + c_k.

For large values of k, the value of B_k measures the response of X_{t+k} to an innovation at time t, a natural metric for persistence. From equation (7), it is immediate that for 0 < d < 1, C(1) = 0, and that, asymptotically, there is no persistence in a fractionally differenced series, even though the autocorrelations die out very slowly. This holds true not only for d < 1/2 (the stationary case), but also for 1/2 < d < 1, when the process is nonstationary. From these calculations, it is apparent that the long-run dependence of fractional processes relates to the slow decay of the autocorrelations, not to any permanent effect. This distinction is important; for example, an IMA(1,1) can have small but positive persistence, but the coefficients will never mimic the slow decay of a fractional process.

3. Fractional Differencing and the Theory of Consumption

The excess-smoothness paradox can be stated more precisely as follows. Assuming the standard certainty-equivalence framework (for example, quadratic utility; see Hall [1978], Flavin [1981], and Zeldes [1989]), we can find how the variance of consumption depends on the income process:

var(ΔC_t) = [r/(1+r)]^2 [sum over j >= 0 of (1+r)^(-j) θ_j]^2 σ_ε^2,    (9)

where C_t = consumption, r = the real interest rate, θ_j = the MA coefficients of income Y_t, φ_j = the AR coefficients of Y_t, Δ = the difference operator, Δ = (1 - L), and σ_ε^2 = the variance of income shocks. Hansen and Sargent (1981) show that this formula holds for both stationary and nonstationary processes.
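The annuity form of equation (9) lends itself to a direct numerical check. The sketch below is my illustration under that reconstruction, using a truncated MA representation of income; the function name and inputs are hypothetical:

```python
def var_dc(theta, r, sigma2):
    """var(dC_t) = [(r/(1+r)) * sum_j (1+r)**(-j) * theta_j]**2 * sigma2,
    where theta_j are the MA coefficients of income and sigma2 is the
    variance of income shocks (a truncated version of equation (9))."""
    beta = 1.0 / (1.0 + r)
    s = sum(t * beta ** j for j, t in enumerate(theta))
    return (r * beta * s) ** 2 * sigma2

# Random-walk income: theta_j = 1 for all j, so the annuity sum
# telescopes to (1+r)/r and var(dC) = sigma2 -- consumption exactly
# as volatile as income.
rw = var_dc([1.0] * 20000, 0.01, 1.0)
```

By contrast, white-noise income (theta = [1.0] alone) gives var(ΔC_t) = [r/(1+r)]^2 σ_ε^2, far below σ_ε^2: the trend-stationary intuition that consumption should be smoother than income.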
Since consumption is a random walk (more generally, a martingale) in this framework, the variance of the change in consumption (equation [9]) also represents the variance of innovations to consumption. Under the traditional assumption that income follows a trend-stationary process (because the shocks die out), the variance of innovations to consumption, var(ΔC_t), should be less than the variance of innovations to income, σ_ε^2. This is what Friedman was trying to explain with the permanent-income hypothesis -- namely, that consumption looks smoother than income. If, however, income is first-difference stationary, as researchers since Nelson and Plosser (1982) have claimed, the revision in permanent income exceeds the revision in actual income. The consumption innovation should then exceed the income innovation, σ_ε^2. Deaton (1987) finds that it does not. A numerical example based on the data used in this paper illustrates excess smoothness. Suppose income is a random walk. In that case, the variance of the change in consumption should equal the variance of the change in income, as intuition or equation (9) suggests. In fact, the figure for consumption is 11.65, while that for income is 61.14. The key point to note, both in predicting the variance of consumption and in determining the variance of income innovations, is that we must make some assumptions or estimates of the income process. By making a different and better assumption about income -- fractional differencing -- the paradox can be resolved. Another advantage of assuming a fractional-differencing process for income is that it allows us to retain two assumptions jettisoned by others. First, the income process is univariate, and consumers have no information about it that is hidden from the econometrician. West (1988) shows that such hidden information can spuriously create excess smoothness, because true income surprises would then be less than measured income surprises.
Various methods that correct for hidden information (Campbell and Deaton [1989], Flavin [1988]) still show excessive smoothness, however. Second, the permanent-income hypothesis is maintained throughout. Both Campbell and Deaton and Flavin show that departures from this can simultaneously produce both excess smoothness and excess sensitivity. The remainder of this section attempts to answer two basic questions. First, does there exist a difference parameter, d, that resolves the paradox -- that is, if income follows such a process, will consumption no longer look too smooth? Second, does actual income follow such a process? In other words, will the fractional parameter that provides a solution fit the income data that we have? Using data for the United States, I proceed in four basic steps.1 Section 3.1 reports estimates of the variance of income and consumption changes, using both Generalized Method of Moments (GMM) and classical chi-squared techniques to determine the estimates' precision. In section 3.2, using the permanent-income hypothesis, I find a range of d in the income process that will produce the variance of consumption found in the first step. In section 3.3, I employ a test for fractional differencing in the income series. Finally, in section 3.4, I use simulations to estimate the probability that the fractional parameters reported in section 3.2 would produce the value found in section 3.3.

3.1 Distribution of the Sample Variance

I begin by estimating and comparing the variance of income changes and the variance of consumption changes. Calculating the distribution of the sample variance depends on assumptions about the underlying process. The classical approach assumes an i.i.d.
sample from a normal distribution and then produces the familiar result that the scaled sample variance is distributed chi-squared with degrees of freedom one less than the sample size:

(n - 1)s^2/σ^2 ~ χ^2 with (n - 1) degrees of freedom.

This may be appropriate for consumption, which, according to theory, should follow a random walk. It has the advantage of being correct for finite samples. The GMM approach allows for heteroskedasticity and autocorrelation. Designed to handle much more complicated estimation problems (Hansen [1982], Hansen and Singleton [1982]), it reduces to a fairly simple form when used to determine the distribution of the sample variance. (See Ng Lo [1988] for a rigorous and clear demonstration of this.) In fact, it reduces to estimating the covariance matrix. Therefore, I use the Newey-West (1987) covariance matrix. This provides a positive definite, heteroskedasticity- and autocorrelation-consistent covariance matrix. The disadvantage is that it provides only an asymptotic result. The Newey-West matrix also requires a choice of the number of lags used to compute the matrix. The authors suggest using the fourth root of the sample size, but the convergence results for this small number depend on mixing conditions, which will generally be violated in the case of long-term dependence. In more general cases, they suggest employing the cube or square root, while Chatfield (1984, p. 141) recommends using twice the square root. With a sample size of 120 for the consumption series and 137 for the two income series, I use five lags. This follows Ng Lo (1988), who finds that this choice works well even in larger samples for a variety of series. Table 1 shows the sample variances for per-capita consumption of nondurables and services, plus both per-capita income measures used (labor and disposable). It also reports the 95 percent confidence bounds obtained using both the classical and GMM approaches.
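The GMM calculation described above reduces to a long-run variance estimate. A minimal sketch of the Newey-West estimator with Bartlett weights (my illustration; the paper's own code is not reproduced here):

```python
def newey_west_lrv(g, q):
    """Newey-West (1987) long-run variance of the series g, using
    Bartlett weights 1 - j/(q+1) on the first q sample autocovariances;
    the Bartlett window guarantees a nonnegative estimate."""
    n = len(g)
    mu = sum(g) / n
    dev = [x - mu for x in g]

    def gamma(j):  # j-th sample autocovariance
        return sum(dev[t] * dev[t - j] for t in range(j, n)) / n

    lrv = gamma(0)
    for j in range(1, q + 1):
        lrv += 2.0 * (1.0 - j / (q + 1.0)) * gamma(j)
    return lrv

# For the sample variance of a series x, take g_t = (x_t - x_bar)**2;
# an asymptotic 95% band for the sample variance is then
# var_hat +/- 1.96 * sqrt(newey_west_lrv(g, q) / n), with q = 5 lags
# as in the text.
```

When the g_t are serially uncorrelated, the weighted terms vanish and the estimator collapses to the ordinary variance of g, which is why the GMM bounds widen only for autocorrelated series.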
Since the GMM bounds are broader (because income shows autocorrelation), they are used in the next part of this exercise.

3.2 Implied Variance

The variance of income and consumption depends on a variable that is unobservable to the econometrician: shocks to income. If income follows a fractional process with parameter d, we have from Hosking (1981) that

var(ΔY_t) = σ_ε^2 Γ(3 - 2d)/[Γ(2 - d)]^2.    (11)

Likewise, the variance-of-consumption formula (equation [9]) specializes in this case to

var(ΔC_t) = [r/(1+r)]^2 [sum over j >= 0 of (1+r)^(-j) Γ(j + d)/(Γ(j + 1)Γ(d))]^2 σ_ε^2,    (12)

where C_t is consumption, θ_j are the MA coefficients of income Y_t, and Δ is the difference operator, Δ = (1 - L). The estimates for income and consumption variance give estimates of the shock variance, σ_ε^2. Notice that the implied shock variance changes with different assumptions about the income process, that is, with changes in the differencing parameter, d. Inverting equations (11) and (12) yields the variance of income shocks as a function of d. Then, comparing the implied shock variances across income and consumption yields the d values that make the income process consistent with observed consumption behavior. Implementing the above procedure requires choosing an interest rate. I use three different quarterly rates: r = 0.2 percent, which corresponds to the long-run average rate used in Mehra and Prescott (1985); r = 1 percent, a high interest rate; and r = 0.05 percent, a low interest rate. Using these numbers made a noticeable, if not dramatic, difference in the variance estimates. Tables 2A and 2B report the results of this investigation and make clear the choice of bounds on d used: 0.79 and 0.95 for labor income, and 0.72 and 0.96 for disposable income.

3.3 Testing for Fractional Differencing

The next step ascertains whether the d values obtained above are consistent with the observed income process.
This section tests for fractional differencing using the modified rescaled range (R/S) statistic developed in Lo (forthcoming) and Haubrich and Lo (1989). In section 3.4, I use simulations to determine the probability that the values obtained from the test could come from distributions with a d parameter in the range calculated above. The modified R/S statistic tests whether a process X_t shows long-term dependence. (It is based on a statistic originally developed by Hurst [1951] and popularized by Mandelbrot [1972].) More formally, consider a process defined as X_t = μ + ε_t, where μ is an arbitrary but fixed constant. For the null hypothesis H, assume that the disturbances (ε_t) satisfy the conditions

(C1) E(ε_t) = 0 for all t;

(C2) sup over t of E[|ε_t|^β] < ∞ for some β > 2;

(C3) σ^2 = lim as n → ∞ of E[(1/n)(sum from j=1 to n of ε_j)^2] exists, and σ^2 > 0; and

(C4) (ε_t) is strong-mixing, with mixing coefficients α_k that satisfy sum over k of α_k^(1 - 2/β) < ∞.

Conditions (C2) through (C4) allow dependence and heteroskedasticity, but prevent them from being too large. Thus, short-term dependent processes, such as finite-order ARMA models, are included in the null hypothesis, as are models with conditional heteroskedasticity. Unlike the statistic used by Mandelbrot, the modified R/S statistic used here is robust to short-term dependence. A more in-depth discussion of these conditions appears in Phillips (1987), Haubrich and Lo (1989), and Lo (forthcoming). To construct the modified R/S statistic, take a sample X_1, X_2, ..., X_n with sample mean X̄_n, choose q lags, and calculate

Q_n(q) = [max over 1 ≤ k ≤ n of sum from j=1 to k of (X_j - X̄_n) - min over 1 ≤ k ≤ n of sum from j=1 to k of (X_j - X̄_n)] / σ̂_n(q),    (14)

where σ̂_n^2(q) is a weighted sum of the sample autocovariances up to lag q. Intuitively, the numerator in equation (14) measures the memory in the process via the partial sums. White noise does not stay long above the mean: Positive values are soon offset by negative values.
A random walk will remain above or below zero for a long time, and the partial sums (positive or negative) will grow quickly, making the range large. Fractional processes fall in between. Mandelbrot (1972) refers to this as the "Joseph Effect" -- seven fat and seven lean years. The denominator normalizes not only by the variance, but by a weighted average of autocovariances.2 This innovation over Hurst's R/S statistic provides the robustness to short-term dependence. The partial sums of white noise constitute a random walk, so Q_n(q) grows without bound as n increases. A further normalization makes the statistic easier to work with and interpret: V_n(q) = Q_n(q)/√n. Haubrich and Lo derive the asymptotic distribution of V, calculating a mean and standard deviation of approximately 1.25 and 0.27. Tables 3A and 3B present fractiles of the distribution of V and confidence intervals about the mean. Figure 3 plots the distribution and density. Note that the distribution is skewed, with most of its mass between three-fourths and two. Table 4 reports the results of the modified R/S statistic applied to first differences of labor income and disposable income. Note that none are significantly different from the mean at the 5 percent level.

3.4 Simulation Results

Although the modified R/S statistic provides a good test (in terms of size and power) for detecting long-term dependence, it does not directly provide the d parameter. To better assess the chances that a d parameter from the correct range will fit the data, I use simulation methodology. The simulations employed here ran as follows. I used a VAX Fortran program (a modification of one written by Lo) to generate 10,000 series of length 135 (not quite matching the data-series length of 136, in order to compare this study to other papers). The series were generated to have fractional differencing parameter d, for several values of d.
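The simulation loop just described can be sketched in outline (in Python rather than the paper's VAX Fortran; the truncated-MA generator, burn-in, seed, and replication count are my illustrative choices):

```python
import math
import random

def gen_frac(d, n, burn=100):
    """Simulate (1 - L)^d X_t = e_t via the truncated MA representation
    X_t = sum_k b_k e_{t-k}, with b_k = b_{k-1} * (k - 1 + d) / k."""
    m = n + burn
    b = [1.0]
    for k in range(1, m):
        b.append(b[-1] * (k - 1 + d) / k)
    e = [random.gauss(0.0, 1.0) for _ in range(m)]
    x = [sum(b[k] * e[t - k] for k in range(t + 1)) for t in range(m)]
    return x[burn:]

def modified_rs(x, q):
    """Lo's modified R/S statistic V_n(q): the range of partial sums of
    deviations from the mean, scaled by a Bartlett-weighted long-run
    standard deviation and by sqrt(n)."""
    n = len(x)
    mu = sum(x) / n
    dev = [v - mu for v in x]
    ps, s = [0.0], 0.0
    for v in dev:
        s += v
        ps.append(s)
    rng = max(ps) - min(ps)                      # numerator of Q_n(q)
    gam = lambda j: sum(dev[t] * dev[t - j] for t in range(j, n)) / n
    sig2 = gam(0) + 2.0 * sum((1.0 - j / (q + 1.0)) * gam(j)
                              for j in range(1, q + 1))
    return rng / (math.sqrt(sig2) * math.sqrt(n))

# Monte Carlo step: generate many series with a given d, compute V_n(q)
# for each, and tabulate how often it falls below the value in the data.
random.seed(0)
stats = [modified_rs(gen_frac(-0.2, 135), 5) for _ in range(20)]
```

An anti-persistent series (negative d, as in first-differenced income) tends to produce low values of V_n(q), which is why the empirical question becomes how often a given d generates statistics below those in Table 4.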
I then computed the modified R/S statistic for each series and counted the number of times that this value fell below the value obtained from the income data above (Table 4). This gives the percentage of times the statistic would be that low if the income series actually had that d parameter. I emphasize low because in first-difference form the relevant d would be negative, which should show up as a low R/S statistic. Table 5 reports these results and also answers the question: If the process is really fractionally differenced with a particular d, what is the probability that we would see the V_n(q) number found in the data, or even a lower number? Of course, subtracting these numbers from one gives the probability of obtaining a higher R/S statistic. The reader may draw different conclusions from Table 5, but I think that the results provide mild support for the belief that fractional processes can explain the excess-smoothness problem. It seems unlikely that the actual d for either income process is smaller than the lower bounds obtained above; otherwise, we would expect to see much lower numbers than those in Table 4. That is, Table 5 tells us that the probability of seeing that number or a lower one is very high for such a process with a d of -0.21 or -0.28. On the other hand, the chance of d = -0.04 or -0.05 producing such a number is more reasonable. Earlier in this section, we saw what range d could fall into and still resolve the Deaton paradox. Now we see, in a general way, how likely it is that d could be in that range. The chance remains that d is too close to zero to resolve the paradox by invoking fractional methods. I submit that Table 5 opens the very real possibility that d falls into the relevant range.

4. Conclusion

Judging the smoothness of consumption depends on the estimate of permanent income, which in turn depends on our estimate of income.
Paradoxes under one specification -- excess smoothness when income is assumed to have a unit root -- do not arise when income is fractional. The explanation that I propose leaves intact two similar problems in the consumption literature. First, panel studies have found excess sensitivity of precisely the opposite type Campbell and Deaton find in aggregate data: Consumption variance is too high given the estimates for income. Flavin finds a different type of excess sensitivity, namely, that consumption depends on past income; it is not a martingale (the expected future value equals today's value), as the permanent-income hypothesis predicts. Campbell and Deaton refer to this as the "nonorthogonality" problem. Nonetheless, without dropping either the permanent-income hypothesis or the univariate representation of income, fractional processes resolve the Deaton paradox. Theoretically, a fractional-income process matches the observed variance of both income and consumption. Empirically, on the basis of a new statistic and simulations, the evidence supports income following such a process.

References

Auerbach, Alan J., and Kevin Hassett, "Corporate Savings and Shareholder Consumption," Working Paper No. 2994, National Bureau of Economic Research, June 1989.

Campbell, John, and Angus Deaton, "Why Is Consumption So Smooth?" Review of Economic Studies, 56, 1989, pp. 357-373.

_____, and N. Gregory Mankiw, "Are Output Fluctuations Transitory?" Quarterly Journal of Economics, 102, 1987, pp. 857-880.

Chatfield, C., The Analysis of Time Series: An Introduction, 3rd ed., New York: Chapman and Hall, 1984.

Deaton, Angus, "Life Cycle Models of Consumption: Is the Evidence Consistent with the Theory?" in Truman Bewley, ed., Advances in Econometrics: Fifth World Congress, vol. 2, New York: Cambridge University Press, 1987.

Diebold, Francis X., and Glenn D.
Rudebusch, "Is Consumption Too Smooth? Long Memory and the Deaton Paradox," Washington, D.C.: Board of Governors of the Federal Reserve System, March 1989.

Flavin, Marjorie, "The Adjustment of Consumption to Changing Expectations about Future Income," Journal of Political Economy, 89, 1981, pp. 974-1009.

_____, "The Excess Smoothness of Consumption: Identification and Interpretation," University of Virginia Working Paper, November 1988.

Geweke, John, and Susan Porter-Hudak, "The Estimation and Application of Long Memory Time Series Models," Journal of Time Series Analysis, 4, 1983, pp. 221-238.

Granger, Clive, and Roselyne Joyeux, "An Introduction to Long-Memory Time Series Models and Fractional Differencing," Journal of Time Series Analysis, 1, 1980, pp. 14-29.

Hall, Robert E., "Stochastic Implications of the Life Cycle-Permanent Income Hypothesis: Theory and Evidence," Journal of Political Economy, 86, 1978, pp. 971-987.

Hansen, Lars P., "Large Sample Properties of Generalized Method of Moments Estimators," Econometrica, 50, 1982, pp. 1029-1054.

_____, and Thomas J. Sargent, "A Note on Wiener-Kolmogorov Prediction Formulas for Rational Expectations Models," Economics Letters, 8, 1981, pp. 255-260.

_____, and Kenneth J. Singleton, "Generalized Instrumental Variables Estimation of Nonlinear Rational Expectations Models," Econometrica, 50, 1982, pp. 1269-1286.

Haubrich, Joseph G., and Andrew W. Lo, "The Sources and Nature of Long-Term Memory in the Business Cycle," Working Paper No. 2951, National Bureau of Economic Research, April 1989.

Hosking, J. R. M., "Fractional Differencing," Biometrika, 68, 1981, pp. 165-176.

Hurst, Harold E., "Long Term Storage Capacity of Reservoirs," Transactions of the American Society of Civil Engineers, 116, 1951, pp. 770-799.

Lo, Andrew W., "Long-Term Memory in Stock Market Prices," Econometrica (forthcoming).
Mandelbrot, Benoit, "Statistical Methodology for Non-Periodic Cycles: From the Covariance to R/S Analysis," Annals of Economic and Social Measurement, 1, 1972, pp. 259-290.

Mehra, Rajnish, and Edward C. Prescott, "The Equity Premium: A Puzzle," Journal of Monetary Economics, 15, 1985, pp. 145-161.

Nelson, Charles R., and Charles I. Plosser, "Trends and Random Walks in Macroeconomic Time Series: Some Evidence and Implications," Journal of Monetary Economics, 10, 1982, pp. 139-162.

Newey, Whitney K., and Kenneth D. West, "A Simple, Positive Semi-Definite, Heteroskedasticity and Autocorrelation Consistent Covariance Matrix," Econometrica, 55, 1987, pp. 703-708.

Ng Lo, Nancy, "An Econometric Analysis of the Role of Price Discovery in Futures Markets," Ph.D. dissertation, Wharton School, University of Pennsylvania, 1988.

Phillips, Peter C. B., "Time Series Regression with a Unit Root," Econometrica, 55, 1987, pp. 277-301.

Quah, Danny, "Permanent and Transitory Movements in Labor Income: An Explanation for 'Excess Smoothness' in Consumption," Journal of Political Economy, 98, 1990, pp. 449-475.

West, Kenneth D., "The Insensitivity of Consumption to News about Income," Journal of Monetary Economics, 21, 1988, pp. 17-34.

Zeldes, Stephen P., "Optimal Consumption with Stochastic Income: Deviations from Certainty Equivalence," Quarterly Journal of Economics, 104, 1989, pp. 275-298.

Table 1
Sample Variances

                      Variance    95% Confidence Bounds
Consumption             11.65     GMM:        6.37  16.93
                                  Classical:  9.17  15.30
Labor income            65.35     GMM:       36.43  94.28
                                  Classical: 52.18  84.24
Disposable income       61.14     GMM:       28.90  93.38
                                  Classical: 48.82  78.81

Consumption = First difference of real per-capita consumption of nondurables and services, 1989:IQ-1989:IIQ (quarterly data, seasonally adjusted). Source: National Income and Product Accounts. Population = U.S. total resident population, including armed forces.
Source: National Income and Product Accounts. Labor income = First difference of quarterly real per-capita labor income, 1952:IQ-1986:IQ. Sources: Auerbach and Hassett (1989) and National Income and Product Accounts. Disposable income = As above. Source: Auerbach and Hassett (1989).

Table 2A
Implied Income Innovation Variances: Labor Income
(Columns: d; implied variance from consumption, lower and upper bounds; implied variance from income. Rows for interest rates r = 0.05%, 1%, and 0.2%.)
Source: See Table 1.

Table 2B
Implied Income Innovation Variances: Disposable Income
(Same layout as Table 2A.)
Source: See Table 1.

Note, Tables 2A and 2B. Approximations: Closed-form solutions for the infinite sums used in these calculations do not exist. An upper bound on the difference between the finite sum of N terms and the infinite sum is (1/r)(1/(1+r))^N; the approximation is in fact better. 10,000 terms were used for the interest rates r = 0.01 and r = 0.002, leading to errors of less than 1 x 10^-41 and 1.05 x 10^-6, respectively; 20,000 terms used for r = 0.0005 give an error of less than 0.09.

Table 3A
Fractiles of the Distribution F_V(v)
Source: Haubrich and Lo (1989).

Table 3B
Symmetric Confidence Intervals About the Mean
Source: Haubrich and Lo (1989).

Table 4
R/S Analysis of Income

Labor Income    Disposable Income
    1.193           1.261
    1.310           1.268
    1.140           1.245
    1.062           1.176
    1.018           1.170

Note: Both series per capita. Sources: See Table 1.

Table 5
Probability of Observing R/S Statistic V_n(q)

Labor Income: d = -0.21, d = -0.05.  Disposable Income: d = -0.28, d = -0.04.
Source: Author's simulations.
Footnotes

1. For an estimate of income with a view to explaining consumption anomalies in the spirit of this section, see the interesting (independent) work of Diebold and Rudebusch (1989). Quah (1990) explains the paradox using permanent and transitory movements in income.

2. These weights define the Bartlett window. Newey and West (1987) enumerate the advantages of this specification.

Figure 1
Autocorrelation functions of an AR(1) with coefficient 0.90 [dashed line] and a fractionally differenced series (1 - L)^0.475 X_t = ε_t with differencing parameter d = 0.475 [solid line]. Although both processes have a first-order autocorrelation of 0.90, the fractionally differenced process decays much more slowly.
Source: Haubrich and Lo (1989).

Figure 2A
Impulse-response function [solid line] of the fractionally differenced time series (1 - L)^0.475 X_t = ε_t for differencing parameter d = 0.475. For comparison, the impulse-response function of an AR(1) with autoregressive parameter 0.90 is also plotted [dashed line].
Source: Haubrich and Lo (1989).

Figure 2B
Equivalent ρ of AR(1): Values of an AR(1)'s autoregressive parameter required to generate the same k-th order autocorrelation as the fractionally differenced series (1 - L)^0.475 X_t = ε_t [solid line]. For each lag, this is simply the k-th root of the fractionally differenced series' impulse-response function [dashed line]. For large k, the autoregressive parameter must be very close to unity.
Source: Haubrich and Lo (1989).

Figure 3
Distribution and density function of the range V of a Brownian bridge. Dashed curves are the normal distribution and density functions with mean and variance equal to those of V.
Source: Haubrich and Lo (1989).