
ECONOMIC REVIEW
FEDERAL RESERVE BANK OF CLEVELAND
1993 Quarter 1
Vol. 29, No. 1
Generational Accounts and Lifetime Tax Rates, 1900-1991 .......... 2
by Alan J. Auerbach, Jagadeesh Gokhale, and Laurence J. Kotlikoff
Unlike the federal budget, which typically measures receipts and expenditures for one year at a time, generational accounts and lifetime tax rates focus on long-term intergenerational wealth redistribution. The accounts show that future generations can expect to pay, on average, more than twice as much to the government as current (1991) newborns if living generations continue to be treated as they are under current policy. Lifetime tax rates on successive generations have increased from 22 percent for Americans born in 1900 to about 34 percent for those born in 1991. Under the baseline economic assumptions presented here, future generations are slated to see that figure rise to more than 70 percent on average.

Has the Long-Run Velocity of M2 Shifted? Evidence from the P* Model .......... 14
by Jeffrey J. Hallman and Richard G. Anderson
The P-Star (P*) model forecasts inflation by exploiting the stability of M2 velocity and the tendency of the real economy to operate near its potential. Although the model was originally offered as a link between inflation and money growth, inverting it provides a test of one of its primary assumptions: the constancy of M2’s long-run velocity, or V-Star (V*). If V* has increased during the last three years, predictions of inflation from the original P* model should be inferior to predictions from a model that incorporates the new, higher V*. In fact, the deceleration of inflation through 1992:IIIQ was quite close to the original model’s prediction, and simulations of the model under a variety of hypotheses regarding changes in V* provide relatively little support for a dramatic shift in that measure.

Economic Review is published quarterly by the Research Department of the Federal Reserve Bank of Cleveland. Copies of the Review are available through our Public Affairs and Bank Relations Department, 1-800-543-3489.

Coordinating Economist: William T. Gavin
Advisory Board: Jagadeesh Gokhale, Erica L. Groshen, Joseph G. Haubrich
Editors: Tess Ferg, Robin Ratliff
Design: Michael Galka
Typography: Liz Hanna

Opinions stated in Economic
Review are those of the authors
and not necessarily those of the
Federal Reserve Bank of Cleveland
or of the Board of Governors of the
Federal Reserve System.

Material may be reprinted
provided that the source is
credited. Please send copies of
reprinted material to the editors.

ISSN 0013-0281

Examining the Microfoundations of Market Incentives for Asset-Backed Lending .......... 27
by Charles T. Carlstrom and Katherine A. Samolyk
Many view the proliferation of securitization as a response to competitive or regulatory pressures. But to what extent would asset-backed lending occur in a less regulated environment? This paper addresses the extent to which models of credit intermediation have been able to formalize some of the market-based forces driving this phenomenon. The authors examine four papers that model some of the dimensions of asset-backed markets. An underlying theme is that under certain conditions, the very information costs that make financial markets important as conduits of credit can also create nonregulatory incentives for asset-backed lending as an efficient funding mode.




Generational Accounts
and Lifetime Tax Rates,
1900-1991
by Alan J. Auerbach,
Jagadeesh Gokhale, and
Laurence J. Kotlikoff

Introduction
Generational accounting is a new method for
determining how government deficits, taxes,
transfer payments, and other expenditures affect
the distribution of income and wealth among
different generations. The technique is still being
developed, and a number of the assumptions
used to estimate the accounts are controversial.
Auerbach, Gokhale, and Kotlikoff (1991),
Kotlikoff (1992), and Office of Management and
Budget (1992) explain the basic concept and
present some illustrative results. This article updates the baseline generational accounts reported in the 1993 federal budget and estimates the effects of several new alternative policies. It also extends the analysis for the first time to lifetime net tax rates—the taxes that a generation pays, less the Social Security and other transfer benefits that it receives, as a percentage of income over its entire lifetime.
The new analysis reveals the following:

Alan J. Auerbach is a professor of economics at the University of Pennsylvania and an associate of the National Bureau of Economic Research; Jagadeesh Gokhale is an economist at the Federal Reserve Bank of Cleveland; and Laurence J. Kotlikoff is a professor of economics at Boston University and an associate of the National Bureau of Economic Research. A version of this article appeared in Budget Baselines, Historical Data, and Alternatives for the Future, Office of Management and Budget, January 1993.

• The net tax rates paid by future generations will be substantially higher than those paid by the baby boom and other current generations, unless policy actions are taken now to mitigate the increase.

• The lifetime net tax rates paid by Americans in the baby boom and successive generations will likely be much higher than the rates paid by those born earlier.

• The generational imbalance between newly born and future Americans could be largely eliminated either by imposing a cap on mandatory spending (excluding Social Security) from 1993 through 2004 or by instituting an appropriate surtax. Both policies would significantly raise the net taxes paid by current Americans, but the increase for the newly born would be considerably more under a surtax.

I. The Nature of Generational Accounts

The federal budget normally measures receipts and outlays for one year at a time and reports these estimates for only a few years into the future. Generational accounts, in contrast, look ahead many decades, classifying taxes paid and transfers received—such as Social Security, Medicare, and food stamps—according to the generation that pays or receives the money. For an existing

generation, taxes and transfers are estimated
year by year over members’ remaining lifespan.
These amounts are then summarized in terms of
one number, the present value of the generation’s
entire annual series of average future tax payments
net of transfers received. For future generations, the
accounts are based on the proposition that the gov­
ernment’s bills will have to be paid either by them or
by those now living. The calculations determine
how much future Americans will have to pay on
average to the government, above the amount they
will receive in transfers, if total government spend­
ing is not reduced from its projected path and if
those now living pay no more than anticipated.
Defined more precisely, generational accounts
measure, as of a particular base year, the present
value of the average future taxes that a member
of each generation is estimated to pay minus the
present value of the average future transfers that
he or she is estimated to receive. This difference
is called the “net payment” in the following dis­
cussion. A generation is defined as all males or
females born in a given year.
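In symbols (our notation, which the article itself does not use), the net payment of the generation born in year $k$, evaluated in base year $t = 1991$, is the discounted sum of its average future taxes less transfers:

```latex
N_{k,t} \;=\; \sum_{s=\max(t,\,k)}^{k+D} \bigl(\bar{T}_{k,s} - \bar{B}_{k,s}\bigr)\,(1+r)^{-(s-t)}
```

where $\bar{T}_{k,s}$ and $\bar{B}_{k,s}$ are average taxes paid and transfers received in year $s$ by members of the generation born in year $k$, $D$ is the maximum length of life, and $r$ is the discount rate. For existing generations the sum runs over remaining lifetimes only; for future generations ($k > t$) it covers the entire lifespan.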
Generational accounts can be used for two
types of comparison. First, they allow us to com­
pare the lifetime net payments by future generations, by the generation just born, and by different generations born in the past. Lifetime
net payments by generations born in the past
are based on estimates of actual taxes paid and
transfer payments received through 1991, as
well as on projections of taxes to be paid and
transfer payments to be received in the future.
Second, generational accounts can be used to
compare the effects of actual or proposed policy
changes on the remaining lifetime net payments
of currently living and future generations. Such
comparisons can be made equally well for policies
that change the totals of receipts or expenditures
and for those that change the composition of the
budget without affecting the deficit.
It should be noted that, as now constructed,
generational accounts have a number of limita­
tions. First, they include the taxes and transfers
of all levels of government— federal, state, and
local— and thus do not show the separate effect
of the federal budget as a whole. However, the
difference in the accounts due to a federal gov­
ernment policy change can be analyzed alone.
Second, generational accounts reflect only taxes
paid and transfers received. They do not impute
to particular generations the value of the govern­
ment’s purchases of goods and services for educa­
tion, highways, national defense, and so on. Thus,
the full net benefit or burden that any generation
receives from government fiscal policy as a whole
 is not totally captured. Still, the accounts can


reveal the effects of a policy change that affects
only taxes and transfers. In the future, it may be
feasible to impute the value of certain types of
government purchases to specific generations.
Third, generational accounting does not, as
yet, incorporate any policy feedback on the
economy’s growth and interest rates. Feedback
effects can be significant, but because they
generally occur slowly, their impact on the dis­
counted values used in the accounts may be
small. Moreover, there is reason to believe that
they would reinforce the conclusions derived
here. For example, policies that decrease cur­
rent generations’ net payments while increasing
the burden on future generations are likely to
reduce investment over time. This in turn will
lower real wage growth and raise real interest
rates, which on balance will harm future genera­
tions in absolute terms.
Finally, generational accounting divides people born in the same year into only two categories, males and females, with each designated a “generation.” This is an important distinction, since the
sexes differ significantly in such characteristics as
lifetime earnings and longevity. However, the
method does not reveal differences with respect
to other characteristics, such as income levels or
race, nor does it show the wide diversity among
individuals within any particular grouping.
Thus, the results presented here should be
viewed as experimental and illustrative. They
are limited by the availability and quality of the
data, especially for earlier years. In addition, they
are necessarily based on a number of simplifying
assumptions (about which reasonable people
may disagree) concerning the pattern of future
taxes and spending, mortality and birth rates, the
interest rate used for discounting future taxes and
transfers to derive present values, and so forth.
The absolute amounts of the generational ac­
counts are sensitive to all of these assumptions.
Nevertheless, like the 75-year projections is­
sued each year by the Social Security trustees, the
accounts can be illuminating when considered in
light of their assumptions. Moreover, the most fun­
damental result— that future generations’ average
net payment will be relatively much larger than
that of the generation just born—holds for a wide
range of reasonable changes in the assumptions.

II. Remaining Net Payments by Existing Generations
Tables 1 and 2 show the generational accounts as
of calendar year 1991 for every fifth generation of

TABLE 1
Generational Accounts for Males: Present Value of Taxes and Transfers as of 1991
(thousands of dollars)

                          ------------- Taxes Paid -------------   ---- Transfers Received ----
Generation's     Net      Labor     Capital    Payroll    Excise     Social
Age in 1991    Payment    Income    Income     Taxes      Taxes      Security   Health   Welfare
                          Taxes     Taxes

 0               78.9      29.2      10.1       31.8       28.2        6.1       11.0      3.3
 5               99.7      37.5      12.9       41.0       33.3        7.7       13.1      4.2
10              125.0      47.8      16.5       52.3       38.7        9.2       15.7      5.4
15              157.2      61.1      21.2       67.1       44.6       10.7       19.2      6.9
20              187.1      73.5      26.5       81.3       48.3       11.8       22.2      8.4
25              204.0      80.4      33.1       89.5       49.1       14.6       24.3      9.0
30              205.5      80.4      39.9       89.8       48.5       18.0       26.4      8.6
35              198.8      77.6      46.8       87.0       47.8       22.6       29.7      8.0
40              180.1      71.0      52.3       79.9       46.9       28.5       34.1      7.3
45              145.1      59.8      55.4       67.6       44.5       35.9       39.6      6.6
50               97.2      45.8      55.3       52.0       40.7       45.2       45.4      6.0
55               38.9      30.2      52.2       34.5       36.2       57.1       51.8      5.3
60              -23.0      16.2      46.4       18.6       30.8       72.4       58.1      4.6
65              -74.0       5.7      39.0        6.6       25.6       82.3       64.6      3.9
70              -80.7       2.4      30.9        2.7       20.4       75.5       58.2      3.4
75              -75.5       1.1      23.6        1.3       15.5       63.3       50.9      2.8
80              -61.1       0.6      18.0        0.7       11.0       47.9       41.5      1.9
85              -47.2       0.2      15.0        0.3        7.6       36.4       33.1      0.9
90               -3.5       0.0       7.1        0.0        1.7        6.5        5.8      a
Future
generations     166.5      n.a.      n.a.       n.a.       n.a.       n.a.       n.a.     n.a.

Percentage Difference in Net Payment
Future generations
and age zero    111.1      n.a.      n.a.       n.a.       n.a.       n.a.       n.a.     n.a.

a. $0.05 thousand or less.
SOURCE: Office of Management and Budget (1992).

males and females alive in that year. The first col­
umn, “Net Payment,” is the difference between
the present value of taxes that a member of each
generation will pay, on average, over his or her
remaining lifetime and the present value of
transfers that he or she will receive. The other col­
umns show the average present values of several
different taxes and transfers. All federal, state, and
local taxes and transfers are included in these cal­
culations. Federal spending and receipts are based
on the baseline calculations in the Office of Man­
agement and Budget’s Mid-Session Revieu>o f the
1993 Budget.
The present value of future taxes to be paid
by young and middle-aged generations far ex­
 ceeds the present value of the future transfers they


will receive. For males age 40 in 1991, for exam­
ple, the present value of future taxes is $180,100
more than the present value of future transfers.
The amounts are large because these genera­
tions are close to their peak taxpaying years.
For newborn males, on the other hand, the
present value of the net payment is much
smaller, $78,900, because they will pay very lit­
tle in taxes for a number of years.
Older generations, who are largely retired, will
receive more Social Security, Medicare, and other
future benefits than they will pay in future taxes.
That is, they have negative net payments. Females
have smaller net payments than males, mainly
because they earn less and thus pay less income
and Social Security taxes.


TABLE 2
Generational Accounts for Females: Present Value of Taxes and Transfers as of 1991
(thousands of dollars)

                          ------------- Taxes Paid -------------   ---- Transfers Received ----
Generation's     Net      Labor     Capital    Payroll    Excise     Social
Age in 1991    Payment    Income    Income     Taxes      Taxes      Security   Health   Welfare
                          Taxes     Taxes

 0               39.5      15.1       3.7       16.5       27.3        5.8        9.6      7.7
 5               48.7      19.4       4.8       21.2       32.0        7.3       11.5      9.9
10               59.4      24.7       6.1       27.0       36.8        8.7       14.0     12.5
15               72.4      31.4       7.9       34.6       41.8       10.0       17.3     16.0
20               84.0      37.1       9.8       41.3       45.0       11.1       20.0     18.2
25               86.4      38.5      12.3       42.9       46.1       13.7       23.2     16.5
30               81.1      36.2      15.5       40.5       46.1       17.0       26.9     13.4
35               71.9      33.3      19.1       37.4       46.1       21.3       32.1     10.7
40               55.3      29.0      22.3       32.7       45.2       26.9       38.8      8.2
45               29.5      23.1      24.8       26.2       43.2       34.2       47.4      6.1
50               -2.2      16.7      26.1       19.0       39.5       43.5       55.4      4.6
55              -39.5      10.8      26.0       12.3       35.2       55.6       64.4      3.7
60              -80.8       5.6      24.4        6.4       30.3       71.4       73.1      3.1
65             -112.5       2.0      21.7        2.3       25.3       80.3       80.8      2.7
70             -110.6       0.8      18.0        0.9       20.6       74.2       74.4      2.4
75             -100.6       0.4      13.8        0.4       15.8       63.0       65.8      2.1
80              -83.3       0.2       9.3        0.2       11.6       49.5       53.3      1.7
85              -65.6       0.1       4.7        0.1        8.9       36.8       41.1      1.4
90               -9.8       0.0       0.5        0.0        1.6        5.6        6.0      0.2
Future
generations      83.4      n.a.      n.a.       n.a.       n.a.       n.a.       n.a.     n.a.

Percentage Difference in Net Payment
Future generations
and age zero    111.1      n.a.      n.a.       n.a.       n.a.       n.a.       n.a.     n.a.

SOURCE: Office of Management and Budget (1992).

Since the figures in these tables show the remaining lifetime net payments of particular generations, they do not include the taxes paid or transfers received in the past. This must be kept in mind when considering the net payments of those now alive. The portion of a generation’s lifetime taxes and transfers reflected in its remaining net payment depends on whether we are talking about 10-, 40-, or 65-year-olds. The fact that 40-year-old males can expect to pay more in the future than they receive, in present-value terms, while the reverse is true for 65-year-old males, does not necessarily mean that federal, state, and local governments are treating the 40-year-olds unfairly. Because 65-year-old men paid considerable taxes when younger, and these are not reflected in their remaining lifetime net payments, direct comparisons are impossible. The lifetime net payments of different generations can be compared only by using lifetime net tax rates, discussed below.
Estimates of future net payments by generation
are affected by the amount of taxes, transfers, and
other government expenditures assumed year by
year in the baseline projection. These assumptions
can differ widely. As explained in the appendix,
the methods of projection generally seek to main­
tain current policy in some sense. However, cur­
rent policy can be interpreted in several ways,
especially for expenditures such as defense. Fur­
thermore, long-term Medicare and Medicaid
projections assume that, eventually, policy actions
or other forces will hold spending growth to the
overall rate of economic expansion (adjusted
for shifts in the age and sex composition of the

TABLE 3
Percentage Difference in Net Payments between Future Generations and Age Zero

                    Productivity Growth Rate
Interest Rate      0.25      0.75      1.25
3.0                 117        89        65
6.0                 138       111        87
9.0                 228       193       162

SOURCE: Office of Management and Budget (1992).

population), even if the growth rate is quite
rapid for the next few decades.1

III. Net Payments by Future Generations

Future generations—those born in 1992 and later—will be required to make a 111 percent larger net payment to the government, on average, than those born in 1991. The average net payments of $166,500 by future males and $83,400 by future females are calculated assuming that the male-to-female net payment ratio is the same for future generations as for those born in 1991. The calculations also assume that all future Americans of a particular sex will make the same average net payment over their lifetimes after adjustments are made for economic growth.
A growth adjustment is needed to compensate for the fact that future generations will pay more in taxes, net of transfers received, simply because their incomes will be higher. To properly assess future generations’ net payment relative to that of the newly born, it is necessary to calculate the net payment they will make above and beyond the amount due to economic growth. Generational accounts assume that all future generations will pay the same net amount apart from this growth adjustment. The net amount is the number shown in tables 1 and 2 for all future generations of the same sex.
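The growth adjustment works as follows (a minimal sketch in code; the function is our illustration of the mechanism described above, not the OMB calculation, although the base figure fed to it comes from table 1):

```python
# Future generations are assumed to make the same net payment apart from
# productivity growth g: the cohort born k years after the base year pays
# the growth-adjusted base amount scaled up by (1 + g) ** k.

def future_net_payment(base_payment, g, k):
    """Net payment (thousands of dollars) of the cohort born k years after
    the base year, given the growth-adjusted base payment and an annual
    productivity growth rate g."""
    return base_payment * (1.0 + g) ** k

# With the $166,500 growth-adjusted figure for future males (table 1) and
# 0.75 percent annual productivity growth, the cohort born ten years after
# 1991 would pay roughly $179,400 in present value:
p10 = future_net_payment(166.5, 0.0075, 10)
```

The point of the adjustment is that only the payment in excess of this mechanical growth scaling counts as an extra burden on future generations.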
A generational imbalance, as defined above, is calculated in such a way that the generations now alive, including the newly born, do not pay any more taxes (or receive any less transfers) than projected in the baseline. This assumption is an analytical device for determining the size of the nation’s fiscal imbalance; it is not meant to suggest that future generations will in fact close the gap all by themselves. Any actual policy change is almost certain to bear in some degree on current generations as well as on those yet to be born. If such a policy change is made, the percentage difference in net payments between the newly born and future generations would be less than shown in tables 1 and 2. Policy changes of this kind are discussed below.

■ 1 A pure extrapolation of recent trends, in contrast, implies that health care costs will eventually bankrupt the government.
The size of the imbalance between future
generations and the newly born is sensitive to
assumptions about both the interest rate used
for discounting and the growth rate of the econ­
omy. Table 3 shows the percentage differential
under interest rates of 3.0, 6.0, and 9.0 percent
and productivity growth rates of 0.25, 0.75, and
1.25 percent. Although the difference ranges
from 65 percent to 228 percent, our basic con­
clusion, that future generations’ net payment
will be much larger than that of those just born,
still holds in every case.
The generational imbalance also depends on the policy assumption that all future generations of the same sex will have the same net payment (after adjusting for growth). But suppose that the future generations born between 1992 and 2001 pay only the same amount as those born in 1991. Because these future generations pay less than previously assumed, those born after 2001 will have a net payment that is 186 percent larger, rather than 111 percent larger, than that facing the 1991 generation. The greater the number of future generations who pay no more than current newborns, the larger will be the net payment required of generations who are born still later.
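The arithmetic behind this deferral result can be illustrated with a stylized calculation (our sketch with made-up numbers, not the OMB projections): a fixed present-value gap must be covered by future cohorts, each paying a growth-adjusted amount, so exempting the earliest cohorts raises what every later cohort must pay.

```python
# Each cohort k pays x * (1+g)**k in year k, which is worth
# x * ((1+g)/(1+r))**k in present value. Given a present-value gap,
# solve for the growth-adjusted payment x of the cohorts that do pay.

def extra_payment_per_cohort(gap, r, g, first_cohort, horizon):
    """Growth-adjusted payment x such that cohorts first_cohort..horizon
    cover `gap` in present value (discount rate r, growth rate g)."""
    pv_weight = sum(((1.0 + g) / (1.0 + r)) ** k
                    for k in range(first_cohort, horizon + 1))
    return gap / pv_weight

# Hypothetical gap of 1000, 6 percent interest, 0.75 percent growth:
burden_all = extra_payment_per_cohort(1000.0, 0.06, 0.0075, 1, 100)
burden_deferred = extra_payment_per_cohort(1000.0, 0.06, 0.0075, 11, 100)
# Exempting the first ten cohorts leaves fewer (and more heavily
# discounted) payers, so burden_deferred exceeds burden_all.
```

The same logic, run through the full OMB model, is what turns the 111 percent figure into 186 percent when the first decade of future generations is exempted.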

Change in the Imbalance between 1990 and 1991
The estimated 111 percent imbalance in 1991
between newborns and future generations can
be compared with the estimated 79 percent im­
balance in 1990 reported in the fiscal year 1993
budget. The difference primarily reflects lower
baseline receipts projected for 1993-2004.
Based on last year’s projections, the estimated
1991 imbalance would be 81 percent. A second
factor is that another generation, the one born
in 1991, does not have to make the higher lifetime
net payments required of future generations.

TABLE 4
Change in Generational Accounts Due to Alternative Policies as of 1991
(thousands of dollars)

                     -------- Males --------    ------- Females -------
Generation's         Mandatory                  Mandatory
Age in 1991          Cap         Surtax         Cap         Surtax

 0                    6.4         16.1           5.4          7.5
 5                    7.7         19.2           6.6          8.9
10                    9.1         22.4           7.9         10.4
15                   10.5         25.3           9.3         11.4
20                   11.1         26.1          10.4         11.6
25                   11.8         25.5          11.8         11.1
30                   12.6         24.0          13.5         10.4
35                   14.0         21.8          15.9          9.4
40                   15.9         18.8          18.7          8.2
45                   18.2         15.1          22.0          6.8
50                   20.7         11.2          25.6          5.3
55                   23.0          7.6          29.2          4.0
60                   23.2          4.9          30.3          2.8
65                   20.0          3.1          27.4          1.9
70                   15.6          2.0          22.7          1.2
75                   11.0          1.2          16.9          0.6
80                    6.6          0.7          10.2          0.2
85                    2.5          0.3           3.6          a
90                    0.0          0.0           0.0          0.0
Future
generations         -71.3        -57.2         -33.2        -29.3

Percentage Difference in Net Payment
Future generations
and age zero         11.7         15.1          11.7         15.1

a. $0.05 thousand or less.
SOURCES: Office of Management and Budget (1992) and authors’ calculations.

IV. Illustrative Policy Changes

Table 4 compares two alternative policies aimed at rectifying the fiscal imbalance between the generation just born and future generations. Both would remove the imbalance to about the same degree, but their distributive effects among different generations vary tremendously.
The first of these policies is a cap on all man­
datory spending programs except Social Security
and deposit insurance. From 1993 to 2004, the sav­
ings from the cap would be calculated for each
mandatory program with beneficiaries as the dif­
ference between 1) baseline spending and 2)
spending limited to the growth in the number of
beneficiaries plus the inflation rate (with a little additional growth allowed in the first two years for transition). Medicare and Medicaid are the largest

transition). Medicare and Medicaid are the largest
mandatory programs, and they produce most of
the total savings. For these two programs, spending
would be limited to the amount determined by the
cap. For all other mandatory programs (except So­
cial Security and deposit insurance), the required
savings would be spread across the board as a
proportionate reduction in spending. Employing
the economic assumptions used for the 1993 Mid-Session Review (and extended to the years after
1997), the consolidated budget is projected to be
balanced under the cap in 2004.2 Thereafter, the
spending growth rates for mandatory programs
would be the same as in the baseline calculations.
However, because the level of mandatory spending
in 2004 would be lower than under the baseline,
applying these same growth rates would produce
permanently lower levels of subsequent spending.
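The cap formula described above can be sketched as follows (a minimal sketch with hypothetical numbers; it omits the two-year transition allowance and the across-the-board allocation of savings among the smaller programs):

```python
# Under the cap, spending on a mandatory program may grow only with the
# number of beneficiaries plus the inflation rate. The saving in each
# year is baseline spending minus this capped path.

def capped_path(spending_1992, beneficiary_growth, inflation, years):
    """Spending path allowed under the cap, starting from the 1992 level."""
    path = []
    level = spending_1992
    for _ in range(years):
        level *= (1.0 + beneficiary_growth) * (1.0 + inflation)
        path.append(level)
    return path

def savings(baseline, capped):
    """Year-by-year savings: baseline spending minus capped spending."""
    return [b - c for b, c in zip(baseline, capped)]

# Hypothetical program: 100 in 1992, 1 percent beneficiary growth,
# 4 percent inflation, capped for three years.
path = capped_path(100.0, 0.01, 0.04, 3)
```

Because the post-2004 baseline growth rates are then applied to this lower level, the savings compound into permanently lower spending, which is what drives the generational results in table 4.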
The cap on mandatory spending would largely
eliminate the imbalance in net payments between
future generations and those just born. Future
generations would pay an average of 12 percent
more, instead of 111 percent more. The net pay­
ment by future males would be $71,300 less than
under the baseline, on average, and the net pay­
ment by future females would be $33,200 less.
All existing generations would face a larger
net payment. In terms of age, the biggest increase
would be for people who are now around 55 to
60. This is because the cap would mainly reduce
transfer payments for health care, especially
Medicare, which is received almost totally by the
elderly. The increase in net payments would be
higher for females than males at almost every
age, because females live longer, and the cap
would primarily reduce transfers to the elderly.
The second policy is a surtax on the federal
individual income tax. From 1993 to 2004, the
amount of the surtax would equal the spending
reduction required by the mandatory cap. After
2004, the surtax would increase at the same rate
as other taxes generally do.
The surtax would reduce the generational im­
balance by almost as much as the mandatory
cap. Future generations would pay 15 percent
more on average than those just born, com­
pared to 12 percent under the cap and 111 per­
cent under the baseline. The average future
male would pay $57,200 less, and the average
future female would pay $29,300 less. All exist­
ing generations would pay more.
The distributional effect of the surtax would
be quite different from that of the mandatory
cap, however. The surtax would bear much
■ 2 The budget would not necessarily be balanced in all later years.
Generational balance over a period taken as a whole is consistent with
some years of deficit, and the illustrative policies do not entirely eliminate
the imbalance.

TABLE 5
Lifetime Net Tax Rates, Gross Tax Rates, and Transfer Rates
(percent)

                 --------- Males ---------    -------- Females --------    - Average of Males and Females -
Generation's     Net     Gross    Transfer    Net     Gross    Transfer    Net       Gross      Transfer
Year of Birth    Rates   Rates    Rates       Rates   Rates    Rates       Rates     Rates      Rates

1900             17.8    19.6      1.8        35.3    43.9      8.7        21.5      24.8        3.3
1910             21.8    24.6      2.8        35.7    49.6     13.9        24.7      29.8        5.2
1920             24.2    27.7      3.5        34.0    50.4     16.5        26.3      32.5        6.2
1930             26.4    30.5      4.1        34.4    52.8     18.5        28.1      35.3        7.2
1940             28.2    33.0      4.8        32.7    50.6     17.9        29.3      37.3        8.0
1950             30.6    36.8      6.2        30.6    46.9     16.3        30.6      39.9        9.3
1960             32.3    39.6      7.2        31.5    47.9     16.4        32.1      42.3       10.2
1970             33.6    41.7      8.1        32.5    50.3     17.8        33.2      44.5       11.3
1980             34.1    42.4      8.3        33.1    51.6     18.5        33.8      45.5       11.7
1990             33.9    42.7      8.7        32.9    52.0     19.1        33.6      45.7       12.2
1991             33.9    42.7      8.8        32.8    52.0     19.2        33.5      45.8       12.2
Future
generations      71.5    n.a.     n.a.        69.3    n.a.     n.a.        71.1      n.a.       n.a.

SOURCE: Office of Management and Budget (1992).

more on the relatively young; the cap, on the
relatively old. For example, a 65-year-old male
would pay $3,100 more under the surtax than
under the baseline, but $20,000 more under the
cap; in contrast, a 20-year-old male would pay
$26,100 more under the surtax but $11,100 more
under the cap. This is because the surtax is paid
disproportionately by younger people earning
income, whereas the cap disproportionately re­
duces transfer payments to the elderly.
The second distributional difference is be­
tween males and females. The surtax bears more
on males; the cap, on females. This is primarily
due to the fact that males tend to have higher in­
comes and pay more income taxes, whereas fe­
males tend to live longer and receive more
health care transfers.
The two policies also have different distribu­
tional effects between existing and future gener­
ations. The reduction in net payments by future
generations is less under the surtax: $14,000 less
for males, on average, and $4,000 less for fe­
males. This is partly because a larger imbalance
remains between future generations and those
just born, 15 percent compared to 12 percent.
The improvement for future generations is less
under the surtax because older generations do
not pay as much more.




V. Historical Lifetime Tax Rates

The analysis so far has been prospective, considering only the present value of future taxes and transfers as of 1991 for existing generations and those yet to be born. A prospective analysis can compare policy changes, and it can compare the lifetime fiscal burdens on the newly born and future generations, since their entire lifetimes are yet to come. However, it cannot compare the lifetime fiscal burden of one existing generation with that of another existing generation born in a different year—or with future generations—because part of any living generation’s taxes and transfers occurred in the past and thus is not taken into account.
A comparison of one existing generation with
another must be based on their entire lifetime
taxes and transfers. Table 5 shows the results in
terms of lifetime net tax rates for different generations born since 1900 and for future generations. The lifetime net tax rate of a generation is defined as the present value of its lifetime net taxes (taxes less transfers) divided by the present value of its lifetime income. The present values are calculated as of the generation’s year of birth, so that each cohort can be compared from the standpoint of when it was born. The lifetime net taxes are the


same as the generational account for a genera­
tion in the year of its birth. (As shown in table 1,
the lifetime net taxes of males born in 1991 are
$78,900.) Since lifetime taxes, transfers, and in­
come have trended upward and have fluctuated
to some extent, it is more appropriate to com­
pare the relative fiscal burden on different gen­
erations in terms of lifetime net tax rates than in
terms of absolute amounts.
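In symbols (our notation, not the article’s), the lifetime net tax rate of the generation born in year $k$ is its net taxes over its income, both discounted back to the year of birth:

```latex
\tau_{k} \;=\;
\frac{\displaystyle\sum_{s=k}^{k+D} \bigl(\bar{T}_{k,s} - \bar{B}_{k,s}\bigr)\,(1+r)^{-(s-k)}}
     {\displaystyle\sum_{s=k}^{k+D} \bar{E}_{k,s}\,(1+r)^{-(s-k)}}
```

where $\bar{T}_{k,s}$ and $\bar{B}_{k,s}$ are average taxes and transfers in year $s$, $\bar{E}_{k,s}$ is average labor earnings (which, as explained below, the article uses to represent lifetime income), $D$ is the maximum length of life, and $r$ is the discount rate.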
Lifetime net tax rates are calculated from his­
torical data on taxes, transfers, and income up
to 1991 and from projections of future data as
described in the previous sections. Historical
data, however, are not available in the same
detail as the figures for recent years underlying
our projections, and in some cases they are not
available at all. The appendix summarizes the
methods used to construct the historical series.
Lifetime calculations also introduce a number
of conceptual issues. For example, how should
lifetime income be measured? Lifetime income is
defined as a present value, like lifetime taxes and
transfers. Therefore, the present-value calcula­
tions should include all income that increases a
generation’s resources: labor earnings, inherited
wealth, and capital gains over and above the
normal return to saving. The normal return to
saving is not itself included in income, because
that would be double counting. Saving and
earning a normal rate of return do not increase
the present value of a household’s resources.
Data do not exist on the share of each genera­
tion’s income stemming from inherited wealth
or supernormal capital gains, so labor earnings
are used to represent income.3
The lifetime net tax rate for males in the base case exhibits a strong upward trend, rising from 17.8 percent in 1900 to about 34 percent in 1970 and succeeding years. The lifetime net tax rate for females exhibits a quite different pattern. It started much higher than for males, at 35.3 percent, declined irregularly for half a century, and rose slightly thereafter. Since 1950, the net tax rate has been about the same for both sexes.
The pattern of the female net tax rate is an artifact of women’s increasing labor force participation and the method used to attribute labor earnings and taxes within a family. Labor earnings are attributed to the person who receives
them; some taxes, including excises, are attrib­
uted equally to husband and wife. The lower
female earnings thus contribute to a higher fe­
male tax rate, especially in the early decades of
■ 3 The error due to this omission is relatively small in the aggregate,
given that labor income has long accounted for three-fourths of all income
and that only part of the remaining income from capital should be included.
However, the errors for different generations could vary, depending on
trends and fluctuations in asset values and bequest behavior.

the century. At the same time, the rise in female
labor force participation over time has caused
their earnings to increase faster than male earn­
ings, without directly increasing those taxes that
are attributed equally to husband and wife. This
has offset the general increase in taxes that con­
tributed to the rising net tax rates observed in
the series for males.
This pattern emphasizes a conceptual question
in calculating the generational accounts. How
should income, taxes, and transfers be attributed
within a family? Excise taxes could alternatively
have been attributed in proportion to labor earn­
ings, or labor earnings could have been attrib­
uted equally between husband and wife. Table
5 displays one answer to this question by includ­
ing lifetime net tax rates for males and females
combined, calculated as a weighted average of
the net tax rate for each sex. Note that the aver­
age net tax rises significantly over most of this
century, increasing from 21.5 percent for the gen­
eration born in 1900, to 32.1 percent for the gen­
eration born in 1960, to about 33 percent for the
generations born since 1970. This trend reflects
the growing fiscal role of government. The aver­
age net tax rate for future generations is 71.1
percent, which is the same percentage differ­
ence relative to people newly born in 1991 as
that shown in tables 1 and 2. The male and
female net tax rates are virtually identical for fu­
ture generations.
Table 5 also breaks down the net tax rates
into gross tax rates and transfer rates. To
calculate the latter, the present value of a gener­
ation’s lifetime taxes (or transfers) is divided by
the present value of its lifetime income. This
breakdown reveals the expanded role of gov­
ernment transfer payments during the past cen­
tury. The lifetime transfer rate for males and
females taken together nearly quadrupled be­
tween the generations born in 1900 and those
born in 1991, starting at 3.3 percent and rising
each decade to a rate of 12.2 percent. The in­
crease was more rapid, in both relative and ab­
solute terms, for the generations born before
World War II than afterward.
Because of the growth in the transfer rate, the
gross tax rate has not leveled off in the past two
decades to the same extent as the net tax rate.
The gross tax rate for males and females combined
nearly doubled between the generations born in
1900 and 1991, starting at 24.8 percent and in­
creasing each decade to a rate of 45.8 percent.
A generation’s lifetime taxes pay for the govern­
ment’s purchases of goods and services as well
as for public transfers to its own members and
other generations.

T A B L E  6

Lifetime Net Tax Rates
(percent)

                            Males                           Females                 Average of Males and Females
Generation's       Baseline  Mandatory  Surtax     Baseline  Mandatory  Surtax     Baseline  Mandatory  Surtax
Year of Birth                   Cap                             Cap                             Cap

1900                 17.8      17.8      17.8        35.3      35.3      35.3        21.5      21.5      21.5
1910                 21.8      21.8      21.8        35.7      35.9      35.7        24.7      24.7      24.7
1920                 24.2      24.4      24.3        34.0      34.8      34.0        26.3      26.6      26.3
1930                 26.4      26.8      26.4        34.4      36.5      34.5        28.1      28.9      28.2
1940                 28.2      28.9      28.5        32.7      35.2      33.2        29.3      30.4      29.7
1950                 30.6      31.5      31.6        30.6      32.9      31.5        30.6      31.9      31.6
1960                 32.3      33.6      34.6        31.5      34.2      33.5        32.1      33.8      34.2
1970                 33.6      35.3      37.6        32.5      35.7      35.9        33.2      35.4      37.1
1980                 34.1      36.5      39.9        33.1      37.0      38.2        33.8      36.6      39.3
1990                 33.9      36.6      40.7        32.9      37.4      39.0        33.6      36.9      40.2
1991                 33.9      36.6      40.8        32.8      37.3      39.1        33.5      36.9      40.2

Future
generations          71.5      40.9      47.0        69.3      41.7      45.0        71.1      41.3      46.5

SOURCE: Office of Management and Budget (1992).

The breakdown further shows that the simi­
larity between males and females in lifetime net
tax rates masks very different gross tax and
transfer rates. Each rate is much higher for fe­
males, reflecting such factors as their lower life­
time income and greater longevity (as well as
the attribution assumptions for taxes and income
within the family).
Table 6 shows how policy changes designed
to rectify the generational imbalance would
affect the lifetime net tax rates of different gener­
ations. For future generations, the cap on mandatory
spending reduces the average lifetime
net tax rate on males and females together from
71.1 percent to 41.3 percent, while the surtax
reduces it to 46.5 percent.
For existing generations, the effect of policy
changes on lifetime net tax rates increases as
the generation’s age declines, and for the very
youngest cohort, born in 1991, the change is
quite significant. Under the mandatory cap, this
generation’s lifetime net tax rate increases by
2.7 percentage points for males. For females,
who will live longer, the increase is 4.5 percent­
age points. A surtax would raise the burden on
the youngest group still more: an increase over
the baseline of 6.9 percentage points for males
and 6.3 percentage points for females. For older
generations, the increase in the lifetime net tax
rate is smaller, primarily because the absolute ef­
 fects of the policy change are discounted over
more years in order to calculate the present value

as of the generation’s year of birth. In the case
of the surtax, the absolute effects are also
smaller for older generations, because they
have fewer remaining years of labor earnings.
The burden that remains on the older genera­
tions is greater under the mandatory cap than
under the surtax, as previously explained, be­
cause Medicare benefits are relatively high and
income taxes relatively low during their remain­
ing years. Since females live longer than males,
the increase in their lifetime net tax rate under
the mandatory cap is greater than for males at
every age. On the other hand, because males
have higher labor earnings, the surtax generally
hits them harder than it does females.

Appendix—
Construction of
the Generational
Accounts
Present-Value
Constraint
Generational accounting is based on the present-value budget constraint of the government sector.
In simple terms, this constraint says that the gov­
ernment must ultimately pay for its purchases of
goods and services either with resources it
obtains from current and future generations or
with its current assets (net of debt). If current


generations pay less in taxes (net of transfers
received) to finance government purchases,
future generations will have to pay more. For
example, suppose that, through borrowing, pay­
ments for the government’s bills were repeatedly
shifted to future generations by each successive
current generation. Then this debt would grow,
with interest. Eventually, the interest would ex­
ceed the lifetime income of future generations,
resulting in default.
More precisely, the government’s present-value constraint means that, at any point in time,
the present value of the government’s future pur­
chases of goods and services cannot exceed the
sum of three items: 1) the present value of future
taxes to be paid (net of transfers received) by
existing generations (that is, the sum of their
generational accounts multiplied by the number of
people in each generation), 2) the present value
of taxes to be paid (net of transfers received) by
future generations, and 3) the value of government
assets that yield income, less the government debt.
Generational accounting estimates the present
value of the government’s purchases of goods and
services plus amounts 1 and 3. Amount 2, the
present value of taxes to be paid by all future genera­
present value of future government purchases
minus amounts 1 and 3.
The generational accounts for future generations
are derived from the aggregate amount 2.
For all but one of the policy experiments dis­
cussed here, different net payments (after adjust­
ing for economic growth) are not estimated for
different future generations. Rather, the aggregate
present-value net payment by future generations
is divided on an even basis among all future gen­
erations so that the average net payment by the
members of each keeps pace with the economy’s
productivity growth. Thus, as shown in tables 1
and 2, one single (growth-adjusted) average figure
stands as the generational account for all future
generations of a given sex. Because the genera­
tional account is calculated indirectly from the
above aggregates, it can be shown only as a single
number and cannot be divided among specific
taxes and transfers.

Underlying
Calculations
Calculating the generational accounts is a three-step process. The first step entails projecting
each currently living generation’s average taxes
and transfers for each future year in which at
 least some of its members will be alive. The


second step converts these projected values into an
actuarial present value, using assumptions for the
discount rate and the probability that the generation’s
members will be alive in each of the future years.
The sum of these present values, with transfers sub­
tracted from taxes, is the generational account, or net
payment, for existing generations shown in the first
column of tables 1 and 2. The third step estimates
the other terms of the present-value constraint (ex­
plained in the previous section) so as to derive the
average net payment by future generations. The cal­
culations are based on projections to the year 2200.
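The second step, converting projected taxes and transfers into an actuarial present value, can be sketched as follows. The function and its inputs are illustrative simplifications; the actual calculations run by age and sex out to the year 2200.

```python
def generational_account(taxes, transfers, survival, r=0.06):
    """Actuarial present value of a cohort's net payments (step two).

    taxes[t], transfers[t]: projected average amounts t years ahead;
    survival[t]: probability a cohort member is alive in year t;
    r: real discount rate (0.06 in the article's baseline)."""
    return sum(s * (tax - tr) / (1.0 + r) ** t
               for t, (tax, tr, s) in enumerate(zip(taxes, transfers, survival)))
```

Subtracting transfers inside the sum is what makes the account a *net* payment, matching the first column of tables 1 and 2.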
Projection of taxes and transfers. The projec­
tion of average future taxes and transfers begins
with the national totals of all federal, state, and
local taxes and transfers as reported in the Na­
tional Income and Product Accounts (NIPAs) for
calendar year 1991. (All years in this article are
calendar years unless otherwise stated.) Employee
retirement and veterans’ benefits paid by the gov­
ernment are considered a form of employee com­
pensation and are classified as the purchase of a
service rather than as a transfer payment.
The base-year NIPA totals are distributed to all
existing generations, as defined by age and sex,
based on the corresponding distributions in cross-section survey data. These surveys include the Sur­
vey of Income and Program Participation and the
Current Population Survey, both by the Bureau of
the Census, and the Survey of Consumer Expendi­
tures by the Bureau of Labor Statistics. Those taxes
that are not directly paid by individuals and so do
not appear in these surveys, such as the corpo­
rate income tax, are allocated. Because genera­
tional accounting attributes taxes and transfers
to individuals, household taxes and transfers
are attributed to household members. No spe­
cial imputations are made to children, but the
cross-section surveys impute some consumption
to them; thus, the taxes on that consumption are
attributed to children. The attribution rules affect
the values of the baseline accounts, but are not
likely to alter the generational implications of
policy changes.
The distribution of average future taxes and
transfers by age and sex is assumed to equal the
base-year average amounts after adjustments for
growth and projected policy. In the case of federal
taxes and transfers for 1993-2004, the amounts
correspond to the current service estimates of
taxes and transfers in the Mid-Session Review of
the 1993 Budget (July 1992), extended beyond
1997 and updated for the actual fiscal year 1992
results. In the case of state and local taxes and
transfers for 1993-2004, the amounts are based on
the GDP assumptions in the Mid-Session Review
as well as on the assumption that the ratios of

state and local tax and transfer aggregates to
GDP remain constant at 1991 levels. After 2004,
the average taxes and transfers by age and sex
are assumed, with two exceptions, to increase
at the assumed rate of productivity growth. Pro­
ductivity (both labor and multifactor) is assumed
to increase by 0.75 percent a year, which is close
to the average annual rate of multifactor produc­
tivity growth since 1970.
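After 2004, the projection rule for most categories is simple compounding at the productivity growth rate. A minimal sketch of that rule, with an illustrative base amount:

```python
def project_average_amount(base_amount, years_after_2004, g=0.0075):
    """Average tax or transfer by age and sex after 2004, grown at the
    assumed 0.75 percent annual rate of productivity growth."""
    return base_amount * (1.0 + g) ** years_after_2004
```

For example, an average tax of 100 in 2004 projects to 100.75 one year later under the 0.75 percent assumption.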
Social Security and health care transfers are the
two exceptions. Projected Social Security transfers
and payroll tax receipts after 2004 are based on
special calculations made by the Social Security
Administration assuming a productivity growth
rate of 0.75 percent. Projected Medicare and Medi­
caid transfers from 2005 through 2030 are calculated
from the growth rates in the Health Care Financing
Administration’s middle-scenario estimates published
in 1991.4 After 2030, health care transfers are assumed
to stabilize as a percentage of GDP apart from
the effect of changes in the composition of the popu­
lation by age and sex. Medicare receipts are assumed
to grow at 0.75 percent a year.
Assumptions for present value. The appropriate
discount rate for calculating the present value of
future amounts depends on whether these
amounts are known with certainty. Future govern­
ment receipts and expenditures are risky, which
suggests that they should be discounted by a rate
higher than the real rate of interest on government
securities. On the other hand, government receipts
and expenditures appear to be less volatile than
the real return on capital, which suggests that
they should be discounted by a rate lower than
that. The baseline calculations assume a 6 per­
cent real discount rate, which is intermediate
between the roughly 2 percent average real
return available in recent years on short-term
Treasury securities and the roughly 10 percent
real return available on capital.
The present values of future average taxes
and transfers are also discounted for mortality
probabilities in order to derive actuarial present
values. The demographic probabilities through
2066 are those embedded in the Social Security
trustees’ intermediate projection in 1992 (alter­
native II) of the population by age and sex. The
fertility, mortality, and immigration probabilities
in 2066 were used for later years. Immigration is
treated as equivalent to a change in mortality.
Other projections. Federal purchases of goods
and services through 2004, like federal taxes and
transfers, are from the latest Mid-Session Review
extended beyond 1997 and updated for the actual
fiscal year 1992 results. State and local purchases
through 2004 are kept at the same ratio to GDP as

■ 4 This scenario is discussed in Sonnefeld et al. (1991).

in 1991. Federal, state, and local purchases after
2004 are divided between 1) those made on
behalf of specific age groups— the young,
middle-aged, and elderly— such as educational
expenditures, and 2) those that are more nearly
pure public goods, such as defense and public
safety. Purchases per person in each of the
three age groups, and purchases of public
goods per capita, all increase at the assumed
rate of productivity growth.
The economic value of the government assets
that yield income, less the government debt, is es­
timated to be the cumulative amount of the NIPA
deficit since 1900 converted to constant dollars by
the GDP deflator.
The average growth-adjusted net payment to
be made by future generations is determined
using the aggregate present value of the net pay­
ment (as derived through the present-value bud­
get constraint), the assumed productivity growth,
and the projected size of future generations. The
size of future generations is estimated using the
Social Security alternative II projection through
2066 and the demographic assumptions for 2066
for later years.
Historical lifetime net tax rates. Lifetime net
tax rates for generations born between 1900 and
1991 are calculated by dividing the generational
account of each generation at birth by its human
wealth— the present value at birth of its future labor
earnings. Calculating a generation’s human wealth
requires knowing its average labor earnings in each
future year. The average labor earnings received by
particular generations in particular years are deter­
mined by distributing aggregate labor income by
age and sex using cross-section distributions of
labor income found in cross-section survey data.
The lifetime generational accounts for generations
born between 1900 and 1991 are based on actual
taxes and transfers between 1900 and 1991 and on
projected taxes and transfers in the years thereafter.
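The lifetime net tax rate itself reduces to one division. The sample numbers below reproduce the 1900 cohort's rates reported earlier in the text (gross tax rate of 24.8 percent, transfer rate of 3.3 percent), with human wealth normalized to 100.

```python
def lifetime_net_tax_rate(pv_taxes, pv_transfers, human_wealth):
    """Generational account at birth (lifetime taxes minus transfers,
    in present value) divided by human wealth: the present value at
    birth of the cohort's labor earnings."""
    return (pv_taxes - pv_transfers) / human_wealth

# 1900 cohort, human wealth normalized to 100:
rate_1900 = lifetime_net_tax_rate(24.8, 3.3, 100.0)  # 0.215, i.e., 21.5 percent
```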
Aggregate labor earnings, taxes, and transfers
were obtained from the NIPAs for 1929 and later
years. Pre-1929 aggregate labor earnings are from
Historical Statistics of the United States, Colonial
Times to 1970. Pre-1929 taxes and transfers are from
the 1982 Census of Governments, Historical Statis­
tics on Government Finances and Employment.
Various cross-section surveys are used to distribute
aggregate labor earnings, taxes, and transfers by age
and sex. Cross-section surveys prior to the early
1960s were not available for this study, so surveys
from years after 1960 are used for earlier years. The
Current Population Surveys are used for labor earn­
ings and taxes on labor earnings in 1964 and later
years, and the 1964 survey is used for earlier years.

References
Auerbach, Alan J., Jagadeesh Gokhale, and Laurence J. Kotlikoff. “Generational Accounts: A Meaningful Alternative to Deficit Accounting,” in David Bradford, ed., Tax Policy and the Economy, vol. 5. Cambridge, Mass.: National Bureau of Economic Research and MIT Press, 1991.

Kotlikoff, Laurence J. Generational Accounting: Knowing Who Pays, and When, for What We Spend. New York: The Free Press, 1992.

Office of Management and Budget. Budget of the U.S. Government, Fiscal Year 1993. Washington, D.C.: U.S. Government Printing Office, 1992.

Sonnefeld, Sally T., et al. “Projections of National Health Expenditures through the Year 2000,” Health Care Financing Review, vol. 13 (Fall 1991), pp. 1-27.





Has the Long-Run Velocity
of M2 Shifted? Evidence
from the P* Model
by Jeffrey J. Hallman and Richard G. Anderson

Introduction
Since early 1990, M2 has grown more slowly
than suggested by its historical relationships
with both income and opportunity cost, the lat­
ter measured relative to short-term market inter­
est rates. During the first part of this period
(1990-91), although historical relationships with
its opportunity cost suggested a significant
decrease, M2 velocity remained quite close to
its long-run average value of about 1.65. During
1992, M2 velocity increased sharply while its
opportunity cost apparently decreased further.
This behavior suggests that the long-run ve­
locity of M2, or V-Star (V*), may have risen, per­
haps as a result of changes in the money supply
process, such as the stricter regulatory environ­
ment facing depository institutions. If V* has in­
deed increased, then the P-Star (P*) model, which
assumes no change in M2’s long-run velocity,
should have persistently underpredicted inflation
over the last three years. We find, however, that
the model has quite accurately predicted the de­
celeration of inflation since 1990.
The paper also presents an extensive analysis,
based on simulation of the P* model under a vari­
 ety of alternative hypotheses regarding possible


Jeffrey J. Hallman is an
economist at the Federal
Reserve Board of Governors,
Washington, D.C., and
Richard G. Anderson is a
research officer at the Federal
Reserve Bank of St. Louis.
The authors wish to thank
numerous colleagues at the
Board and at the Federal
Reserve Banks of Cleveland
and St. Louis for helpful com­
ments and suggestions.

shifts in long-run velocity, that provides little
support for the view that V* has changed. Our
findings reinforce other recent research conclud­
ing that the pickup in M2’s velocity may be
largely explained by increases in an alternative
opportunity cost measure based on long-term
market rates.1 If correct, these results suggest that
sluggish M2 growth over the last three years con­
tributed to both the slow pace of economic activity
and the significant progress toward price stability.
In addition, they suggest the potential for a
rebound of M2 growth during 1993 as long-term
rates fall and M2 velocity growth decelerates.

I. The P* Model2
The P* model links the behavior of the price
level to the growth of M2 by imposing two
hypotheses on the equation of exchange, MV = PQ:
(i) real output Qt fluctuates around potential real
output Q*t over long periods, and (ii) velocity Vt
has an equilibrium level V*, independent of time,
that it tracks in the long run.3
With these assumptions, P*t is defined as the
long-run equilibrium price level that could be
supported by the current level of the money
stock (Mt) if current output (Qt) settled down
to this period’s level of potential output (Q*t):

(1)   P*t = Mt V*/Q*t .

Our assumptions regarding Vt and Qt imply
that if money remains fixed at Mt, then Pt will
fluctuate around P*t.

■ 1 See Feinman and Porter (1992).

■ 2 See Hallman, Porter, and Small (1991).

F I G U R E  1

M2 Velocity and Opportunity Cost
[Figure: M2 velocity (ratio) and M2 opportunity cost (percent) plotted over time.]
SOURCE: Authors’ calculations.

For policymakers, P* provides an index in
each period t of the cumulative long-run impact
of money on the price level. The difference
between the current price level and P* can
provide a leading indicator of future acceleration or
deceleration of inflation as Pt → P*t. Hallman,
Porter, and Small (1991) show that the P* model
can be derived as the reduced form of a special
case of the expectations-augmented Phillips
curve. In this case, changes in the inflation rate
follow a simple autoregressive process augmented
by the lagged price gap, pt − p*t:

(2)   Δπt = Σi βi Δπt−i + α(p*t−1 − pt−1) + εt ,
where lower-case letters denote natural logs, πt
is the inflation rate, and Δπt is the quarterly
change in the inflation rate. The existence of P*
depends critically on the validity of assumptions
(i) and (ii). The assumption that real output fluc­
tuates around a growing level of potential out­
put is not controversial; indeed, measures of
potential output are often constructed so as to
ensure the validity of this assumption. The
velocity assumption is more open to dispute.4
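Equations (1) and (2) can be sketched in code. This is a schematic illustration, not the authors' estimated model: the lag length and the coefficient values below are placeholders, and the log form p* = m + log V* − q* follows directly from equation (1).

```python
import math

def p_star_log(m, q_star, v_star=1.65):
    """Equation (1) in logs: p* = m + log(V*) - q*, where m and q*
    are the natural logs of the money stock and potential output."""
    return m + math.log(v_star) - q_star

def delta_inflation(dpi_lags, price_gap_lag, betas, gamma, eps=0.0):
    """Equation (2), schematically: the change in inflation follows an
    autoregression in its own lags, augmented by the lagged price gap
    p - p*. A positive gap (p above p*) pushes inflation down.
    The coefficients betas and gamma here are illustrative placeholders."""
    ar_part = sum(b * x for b, x in zip(betas, dpi_lags))
    return ar_part - gamma * price_gap_lag + eps
```

When money is high relative to its long-run equilibrium path, p lies below p*, the price gap is negative, and equation (2) predicts accelerating inflation, which is the sense in which P* is a leading indicator.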
The constant velocity assumption of the P*
model is motivated, in part, by the tendency of
M2’s velocity since 1955 to fluctuate around
1.65, trending neither up nor down (see figure
1). Velocity at times has remained above its
long-run average for several years, and recent
increases do not appear particularly unusual in
this respect. The assumption is likewise moti­
vated by the close historical correspondence be­
tween M2’s velocity and its opportunity cost
that prevailed through 1989, also shown in fig­
ure 1. During this period, sustained deviations
of velocity from its long-run average tended to be
accompanied by comparable deviations of oppor­
tunity cost from its long-run average.6 The ten­
dency for M2 opportunity cost to return to its
long-run average provided an economic rationale
for M2 velocity to do the same. Empirical models

■

4 See, for example, Kuttner (1990) and Pecchenino and Rasche
(1990). As Pecchenino and Rasche note, the inflation dynamics in Kuttner’s
paper are incorrect because he confuses Q and Q* in the P* model.

■ 5 The opportunity cost shown equals the difference between the
three-month Treasury bill rate (on an annualized coupon-equivalent
basis) and a share-weighted average of the own rates paid on the com­
ponents of M2. See Moore, Porter, and Small (1991). Note that their
series begins in 1959.

■

■

3 Equivalent alternative assumptions are (i) M2 velocity is a sta­

tionary stochastic process, or (ii) all shocks to the level of M2 velocity
are transitory. In a nonstochastic model, P will converge to P*. For a
 statement of the modern quantity theory, see Dewald (1988). For antece­
dents to P*, see Humphrey (1989).


6 M2’s velocity and its opportunity cost have moved in opposite
directions before. In 1960, velocity rose while opportunity cost fell; in
1983, velocity fell while opportunity cost rose. The duration of the most
recent divergence appears unusual, however. Note that the vertical dis­
tance between the lines in the figure is not meaningful.


of M2’s opportunity cost developed by Federal
Reserve Board staff during the 1980s seemed to
confirm this long-run behavior.7 During the past
three years, however, M2’s velocity and opportu­
nity cost have diverged sharply, with the former
increasing as the latter has decreased. This diver­
gence raises the question of whether equilibrium
velocity has indeed changed.8

II. Using the P*
Model to Identify
Changes in V*
While the P* model was originally offered as a
link between inflation and money growth, its in­
verse provides a test of one of its primary assump­
tions: the constancy of long-run M2 velocity.9 If
the long-run velocity of M2 has in fact increased
during the last three years, predictions of inflation
from the original P* model (which assumes that
long-run velocity has not changed) should be in­
ferior to predictions from a model that incorpo­
rates the “true” change in V * . This simple insight
immediately suggests a testing strategy for evaluat­
ing alternative hypotheses regarding putative
shifts in V * : Construct the various P* time series
corresponding to alternative velocity assumptions;
use a battery of goodness-of-fit and forecast ac­
curacy tests to compare the relative forecasting
performance of the model under the alternative as­
sumptions; and accept the velocity assumption(s)
most consistent with the data or, in other words,
the one that yields the best model forecasting

■

7 See, for example, Moore, Porter, and Small (1991). These models
typically assumed the existence of a long-run fixed spread between the offer­
ing rate on a particular type of deposit and a short-term risk-free market rate
(for example, the three-month Treasury bill). A similar assumption was made
for money market mutual fund yields. The size of the equilibrium spread pre­
sumably depended on both demand and supply factors, including regulatory
(capital) requirements facing the intermediary, deposit insurance premiums,
and the liquidity of the deposit.

■

8 It also raises the possibility that M 2’s opportunity cost was incor­
rectly measured. Recent research by other Board staff suggests that this
may have been the case. A new opportunity cost measure that includes a
long-term Treasury rate and a rate on consumer loans appears to track
M2 velocity during 1984-92. These models are highly preliminary, how­
ever, and do not feature the long-run error-correction behavior of pre­
vious Board staff models. See Feinman and Porter (1992).

■

9 The antecedents discussed by Humphrey (1989) also view P*-

type models primarily as models of the inflation rate. A constant (or very
slowly changing) velocity of money is assumed almost without mention.
 This is reminiscent of Irving Fisher’s quantity theory model. See Laidler
(1985), chapter 5.


performance.10 Suppose, for example, we learn
that V * increased 6 percent in mid-1989, to 1.75
from 1.65, and has remained at that value. Using
equation (1), we can construct an alternative time
series of P* values that will also have shifted up
by 6 percent, consistent with the higher velocity.
Use of this new, more accurate measure of the
equilibrium price level should improve the accu­
racy of inflation forecasts from the P* model.
Although the divergence of velocity and op­
portunity cost shown in figure 1 suggests that
V * may have increased, the curves tell us little
about the precise form of the change. In our
analysis, we consider five alternative hypotheses
concerning V * during 1989-92:
• It remained at its 1955-89 average value of
1.65.
• It increased 6 percent in 1989:IIIQ. This
quarter was chosen based on the presence of two
high-visibility events that marked the end of a dec­
ade of regulatory forbearance for undercapitalized
depository institutions: passage of the Financial In­
stitutions Reform, Recovery, and Enforcement Act
(FIRREA) and the first resolutions of insolvent
thrifts by the Resolution Trust Corporation. The
depository sector, facing a stricter regulatory envi­
ronment and the need to improve its capital ratios,
might be expected to grow more slowly or even
to contract as a result.
• It shifted upward by 2¼ percent each
year in 1990 and 1991 and by 2½ percent in 1992.
These are approximately the size of the forecast
errors from the Federal Reserve Board staff’s model of M2
demand based on income and M2’s opportunity cost relative
to short-term market rates.11
• It began increasing at a 1½ percent annual
rate in 1990:IQ.
• It began decreasing at a ½ percent annual
rate in 1990:IQ. This scenario is included for two
reasons. First, it directly challenges the widely
held conjecture that structural changes affecting
depository intermediation during the past three
years must have increased M2’s long-run velocity.
Second, it admits the possibility that the decrease
in the inflation rate since 1989 has occurred largely
as might have been expected (and perhaps even
a bit more rapidly than expected), given the slow
growth of M2 and the significant output gap.
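The five hypotheses can be written down as explicit quarterly V* paths. The sketch below, including the timing convention (t = 0 at 1989:IIIQ, so 1990:IQ is two quarters later), is our own; only the magnitudes come from the text.

```python
def v_star_paths(quarters, base=1.65):
    """Quarterly V* series under the five hypotheses, t = 0 at 1989:IIIQ."""
    def trend(rate, t):
        # annual growth beginning 1990:IQ (t = 2), compounded quarterly
        return base * (1.0 + rate) ** (max(t - 2, 0) / 4.0)

    def money_demand(t):
        # cumulative upward shifts of 2.25, 2.25, and 2.5 percent applied
        # at the start of 1990, 1991, and 1992 (t = 2, 6, 10)
        f = base
        for step, size in ((2, 1.0225), (6, 1.0225), (10, 1.025)):
            if t >= step:
                f *= size
        return f

    return {
        "unchanged":      [base] * quarters,
        "one_time_shift": [base * 1.06] * quarters,  # 6 percent in 1989:IIIQ
        "money_demand":   [money_demand(t) for t in range(quarters)],
        "increasing":     [trend(0.015, t) for t in range(quarters)],
        "decreasing":     [trend(-0.005, t) for t in range(quarters)],
    }
```

Feeding each path into equation (1) produces the corresponding P* series against which the inflation forecasts are compared.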

■

10 This is somewhat more complicated than stated, since the tests
are non-nested. Below, we generate the empirical sampling distribution
for each individual statistic.
■

11 See Feinman and Porter (1992), figure 1.

F I G U R E  2

Simulated Inflation Rates from Alternative V* Hypotheses
[Figure: actual inflation rate (percent), 1988-93, with simulated paths under the five V* hypotheses: unchanged; one-time shift in 1989:IIIQ; money-demand model; increasing trend; decreasing trend.]
NOTE: First simulated value under all five hypotheses is 1989:IIIQ.
SOURCE: Authors’ calculations.

Each of the V* hypotheses suggests a corres­
ponding P * series, constructed according to
equation (1) using the hypothesized V*t. Under
the null hypothesis that V* has not changed from
its 1955-89 level, the inflation-rate path for each
P* series is given by equation (2). Actual data are
used through 1992:IVQ.12
Under the five alternative V* assumptions, dy­
namic simulation of the P* model, shown in
equation (2), yields the five inflation-rate paths
shown in figure 2. Each simulation begins in
1989:IIIQ and is nonstochastic; that is, all of the
εt error terms in equation (2) are set equal to
zero over the simulation period. During the past
three years, the actual inflation rate generally
has been between the rates suggested by the un­
changed or declining V* scenarios and those
suggested by a trend increase in V*. On balance,
the inflation rate appears to have most closely
followed the path given by the constant V*
hypothesis, at least through 1992:IIIQ. Inflation
in 1992:IVQ, however, was higher than forecast
by the P* model with V* unchanged.
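A dynamic simulation of this kind can be sketched as follows. The single autoregressive lag and the coefficient values are simplifications of our own (the article does not report its estimated coefficients here); setting the shocks to zero reproduces the εt = 0 case used for figure 2.

```python
def simulate_inflation(pi0, dpi0, p0, p_star_path, beta, gamma, shocks=None):
    """Dynamically simulate equation (2) with one autoregressive lag.

    pi0, dpi0, p0: starting quarterly inflation, inflation change, and
    log price level; p_star_path: the p* series implied by a V* hypothesis.
    With shocks=None, all epsilon_t are zero (nonstochastic simulation)."""
    eps = shocks if shocks is not None else [0.0] * len(p_star_path)
    pi, dpi, p = pi0, dpi0, p0
    path = []
    for t, p_star in enumerate(p_star_path):
        dpi = beta * dpi - gamma * (p - p_star) + eps[t]
        pi += dpi
        p += pi  # the log price level advances by the inflation rate
        path.append(pi)
    return path
```

Passing a list of drawn innovations as `shocks` turns the same routine into one replication of the stochastic experiment described below.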
The nonstochastic simulations shown in fig­
ure 2, though suggestive of an unchanged long-run
M2 velocity, are not capable of answering

our question about a shift in equilibrium veloc­
ity. In particular, the simulations assume that no
stochastic factors influence the evolution of the
inflation rate (εt = 0 for all t), including possible
random fluctuations in M2 velocity, when M2
velocity in fact has a relatively high variance.
From a statistical viewpoint, the data shown in
figure 2 represent only one “draw” from the uni­
verse of ways velocity and inflation might have
evolved under each alternative hypothesis regard­
ing V*. An adequate test must incorporate the in­
herent randomness and variability of economic
variables. Furthennore, comparing the perform­
ance of several models (or, in our case, the same
model using alternative estimates of P* ) solely on
observed, actual data leaves unanswered a num­
ber of interesting questions, such as:
• Suppose, in fact, that inflation accelerates
in 1993. How long might it take before incoming
data reveal a change in V*? At what point, if any,
will the statistical evidence compel us to reject the
hypothesis that the long-run velocity of M2 has
not changed?
• Which hypothesis regarding M2 velocity is
believed by financial market participants? Are
further decreases in long-term market interest
rates waiting for clearer signals regarding future
M2 velocity?
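For concreteness, the five hypothesized log-V* paths can be written down directly. In the sketch below, H3 is only a smooth-shift proxy, since the Board staff money-demand model's actual path is not reproduced in this article:

```python
import numpy as np

n = 22                      # quarters: 1989:IIIQ through 1994:IVQ
base = np.log(1.65)         # 1955-89 average V* (the H1 level)
q = np.arange(n)

h1 = np.full(n, base)                # H1: no change
h2 = np.full(n, base + 0.06)         # H2: one-time 6 percent shift in 1989:IIIQ
h4 = base + (0.015 / 4.0) * q        # H4: trend increase, 1.5 percent per year
h5 = base - (0.005 / 4.0) * q        # H5: trend decrease, 0.5 percent per year
# H3 follows the Board staff money-demand model, whose path is not
# reproduced here; a gradual 6 percent shift is an illustrative stand-in.
h3 = base + 0.06 * (1.0 - np.exp(-q / 4.0))
```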
We conducted a simulation study to investi­
gate these issues as well as the overall accepta­
bility of the V* hypotheses.13 Our simulation
design generates, for each of the five V* hypoth­
eses, 1,000 simulated paths for Pt from 1989:IIIQ
through 1994:IVQ. Each path is the result of a
stochastic simulation of the P* model under the
appropriate velocity hypothesis. The stochastic innovations εt for the simulations are drawn from a normal distribution scaled to have a mean of zero and a standard deviation of about two-thirds of 1 percent at an annual rate. This corresponds to the smaller post-1986 variance of the residuals from the P* model when estimated over 1960:IIIQ-1992:IIIQ, as shown in figure 3. (A formal statistical test strongly
rejects equality of the variance of the residuals be­
fore and after 1986.) Although the reason for this
smaller variance is not apparent, it may be due to
less variance in the expected inflation rate after
1986. Our simulations assume that the future variance of the random innovations will resemble that of the post-1986 period.

■

12 After 1992:IVQ, M2 and Q* are assumed to grow at annual rates of 4.5 percent and 2.5 percent, respectively.

■

13 The simulation methodology also allows us to address some issues of interest mainly to econometricians, such as assessing how well various statistics perform in detecting the kinds of changes in which we are interested.

[FIGURE 3: Model Residuals, in percent, 1960-1995. SOURCE: Authors' calculations.]

When the precise specification of alternative hypotheses in a testing situation is uncertain, as it is for hypotheses regarding changes in V*, the choice of an appropriate test statistic is difficult. Some hypotheses suggest tests for omitted dummy variables (such as a discrete shift in the level or a nascent time trend), while others suggest the use of more general tests based on forecast errors. Along each simulated path P*, we computed the values of 12 test statistics, including tests for omitted variables as well as tests for general misspecification based on one-step-ahead forecast errors. Our statistics fall into four categories:

• Lagrange multiplier (LM) tests for an omitted variable in equation (1). Lm shift tests for a post-1989:IIQ shift dummy variable, lm trend for a time trend beginning in 1990:IQ, and lm both for both the shift and trend.

• Chow tests for a change in the forecast error variance, relative to the variance of the disturbance εt in the simulations, perhaps due to a change in V*. Ch4, ch8, and ch12 are based on the last four, eight, and twelve forecast errors, respectively.

• Random walk tests for autocorrelation in the forecast errors due to misspecification of the model, including a structural change. Rw4, rw8, and rw12 are based on the last four, eight, and twelve forecast errors, respectively.

• Binomial tests for an unusually high number of positive forecast errors, due to the assumed V* being too small. Bn4, bn8, and bn12 are based on the last four, eight, and twelve forecast errors, respectively.

The statistics are discussed further in the appendix. For each of the 1,000 replications, we calculated and stored the values of the statistics for each quarter from 1990:IQ through 1994:IVQ.

For any particular quarter within our simulation period, the degree of support for a V* hypothesis may be inferred by comparing the values of the statistics in that quarter to the simulated distributions of possible outcomes. The simulated distributions indicate the range of values of the statistics that could result from random, unobserved influences.14 If the value of a statistic falls outside the central area of the corresponding simulated distribution, we tend to reject that particular hypothesis.

■

14 In other words, the distributions shown are the empirical sampling distributions of the statistics.

Our results for 1992:IVQ are shown in table 1 and figure 4. Values of the test statistics calculated from data for 1992:IVQ, the most recent quarter for which we have preliminary gross domestic product (GDP) data, are shown in column 2 of the table. Columns 3-7 display a count of the number of model replications (out of 1,000) wherein a test statistic took on a value less than that shown in the second column. The third column, for example, summarizes our simulations under the hypothesis that V* has not changed from its historical average value of 1.65. Each entry in the column shows the number of replications for which the value of the statistic named in the first column was less than or equal to the 1992:IVQ value, shown in the second column.

Consider, for example, the interpretation of the lm shift statistic for 1992:IVQ as summarized by the first row of table 1. The value of this statistic calculated from 1992:IVQ data is 0.026. The third column indicates that the lm shift statistic was less than 0.026 in 266 of the 1,000 replications of the unchanged V* scenario. According to this hypothesis, then, 0.026 appears to be neither unusually large nor small. In contrast, the entry in the fourth column tells us that observing an lm shift statistic value as small as 0.026 would be highly unusual if V* had in fact increased by a one-time 6 percent shift in 1989:IIIQ. A value that
TABLE 1
Observed Values of Test Statistics in 1992:IVQ and Cumulative Frequency of Occurrence of Those Values in Simulation

Number of replications (out of 1,000) wherein the value of the statistic is less than in 1992:IVQ, under each V* hypothesis: H1 = no change; H2 = one-time shift; H3 = money-demand-model shift; H4 = 1½ percent trend; H5 = -½ percent trend.

Statistic        1992:IVQ Value   H1          H2          H3          H4          H5
LM tests
  lm shift       0.026            266         0           42          135         240
  lm trend       0.233            719         …           78          263         636
  lm both        0.377            573         …           62          229         537
Chow tests
  ch4            4.10             626         278         227         429         576
  ch8            12.9             879         351         530         752         866
  ch12           15.0             753         84          388         611         727
Random walk tests
  rw4            3.86             960         595         494         752         926
  rw8            0.475            536         9           42          184         460
  rw12           1.16             727         4           166         377         664
Binomial tests(a)
  bn4            4                947:1,000   612:1,000   541:1,000   719:1,000   979:1,000
  bn8            5                648:869     18:136      63:274      170:473     778:940
  bn12           8                816:932     16:95       254:497     431:699     895:974

a. The two values correspond to the value of the statistic being, respectively, either strictly less than, or less than or equal to, the value in column 2.
NOTE: Each entry is the number of replications out of 1,000 trials.
SOURCE: Authors' calculations.

low never occurred in 1,000 replications of the
“6 percent shift” scenario.
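Each entry in table 1 is just an order-statistic count against a simulated distribution. With made-up draws standing in for the 1,000 simulated statistic values, the computation is:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: 1,000 simulated values of one statistic under one V*
# hypothesis.  In the article these come from the stochastic simulations;
# chi-square draws are used here only so the example runs.
simulated = rng.chisquare(df=1, size=1000)
observed = 0.026                      # the 1992:IVQ lm shift value

# A table 1 entry: replications in which the simulated value fell below
# the observed one.
count_below = int(np.sum(simulated < observed))

# Counts near 0 or near 1,000 place the observed value in a tail of the
# simulated distribution, which argues against that hypothesis.
extreme = count_below < 50 or count_below > 950
```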
Table l ’s test statistics and simulation out­
comes are summarized in figure 4, with each
panel corresponding to one of the 12 statistics.
Each horizontal line segment in each panel rep­
resents the 1,000 replications of the P* model
under one of the five alternative V* hypotheses,
denoted H1-H5. A hypothesis regarding V* is
judged more or less acceptable (in other words,
consistent with the data) as the horizontal line
segments for that hypothesis tend to be centered
around the vertical dotted lines denoting the values of the statistics calculated from 1992:IVQ data. Overall, the hypotheses that V* has not changed (H1) or has been decreasing slowly (H5) appear to be highly consistent with the
data, with the 1992:IVQ value falling near the
midpoint of the distribution of simulated values
for a number of the statistics. The hypothesis of

a one-time shift in 1989:IIIQ (H2) is soundly

rejected. The hypothesis that M2 velocity shifted
as suggested by the Federal Reserve Board
staff's money-demand model (H3) appears less
consistent with the data than the hypothesis of a
steady upward trend (H4), which seems fairly
plausible. Neither of the trending V* hypotheses (H3 and H4) appears to be as consistent with the data as the unchanged and falling hypotheses (H1 and H5), however.
Market participants’ inflation expectations
appear to reflect acceptance of a significant in­
creasing trend in M2 velocity, despite the decel­
eration of inflation over the past three years.15
The January Blue Chip consensus forecast, for
example, calls for the GDP implicit price deflator

■

15 Chairman Greenspan's latest Humphrey-Hawkins report to the
Congress in February of this year appears to endorse this view, as does
the FOMC's reduction of its 1993 M2 target growth ranges. To avoid such
bias, we use a Blue Chip forecast published before these were announced.

FIGURE 4
Summary of Simulation Experiments for 1992:IVQ

[Twelve panels, one per test statistic (lm shift, lm trend, lm both; ch4, ch8, ch12; rw4, rw8, rw12; bn4, bn8, bn12), each showing, for H1-H5, how the statistic's 1992:IVQ value compares with its simulated distribution.]

NOTE: Each horizontal line represents 1,000 replications of the P* model under either H1, H2, H3, H4, or H5. Shown after each line is the number of replications wherein the value of the statistic is less than in 1992:IVQ.
SOURCE: Table 1. H1-H5 correspond to columns 3-7 in the table.

to increase at about a 2.7 percent rate during the
first half of 1993, versus its 2.1 percent pace in
the second half of 1992. The inconsistency be­
tween the paths of the price level implied by
the Blue Chip forecast and the P* model with an
unchanged V* is evident in table 2. Values of
our test statistics calculated from projected values of Pt for 1993:IIQ that are based on this forecast are shown in column 2.16 The entries
in column 3 show that many of our statistics will
reject the constant V* hypothesis if inflation fol­
lows the Blue Chip forecast. The complete set of
test results is displayed in figure 5. Ignoring the
Chow tests and the bn4 statistic, the trending V*
hypotheses H3 and H4 appear fully consistent
with the Blue Chip forecast.17

■

16 See Blue Chip Economic Indicators, Sedona, Arizona, January 10, 1993, p. 5.


Initially, it may appear somewhat surprising
that the statistical support for the constancy of
V* is so sharply changed by inclusion of the two
additional quarters from the Blue Chip consen­
sus forecast. The reason for this sensitivity is that
the consensus inflation forecast is very different
from the forecast suggested by the P* model
with an unchanged V*. P* is currently more than
8 percent below Pt, so the P* inflation model —
equation (2) — forecasts that inflation will con­
tinue to decelerate over the next several quarters
from its 2.1 percent pace in 1992:IIH. The con­
sensus forecast, by contrast, predicts an acceler­
ation during the first half of 1993. The message
of table 2 is that such an acceleration is highly
unlikely unless equilibrium velocity has been
trending up for some time and has escaped
■

17 Neither the Chow tests nor the bn4 test has much power
against the hypothesis being tested, as is evident from examination of
table 3.


TABLE 2
Projected Values of Test Statistics in 1993:IIQ and Cumulative Frequency of Occurrence of Those Values in Simulation

Number of replications (out of 1,000) wherein the value of the statistic is less than the projected 1993:IIQ value, under each V* hypothesis: H1 = no change; H2 = one-time shift; H3 = money-demand-model shift; H4 = 1½ percent trend; H5 = -½ percent trend.

Statistic        1993:IIQ Value   H1          H2          H3          H4          H5
LM tests
  lm shift       0.318            791         2           124         368         693
  lm trend       1.40             992         198         318         750         969
  lm both        1.81             982         69          387         774         962
Chow tests
  ch4            10.8             979         885         716         888         971
  ch8            17.3             970         731         659         867         964
  ch12           22.8             966         514         701         866         954
Random walk tests
  rw4            8.91             1,000       961         817         951         993
  rw8            3.54             955         273         184         519         898
  rw12           3.63             960         47          199         544         903
Binomial tests(a)
  bn4            4                943:1,000   711:1,000   472:1,000   668:1,000   980:1,000
  bn8            6                879:965     224:540     149:439     340:687     959:992
  bn12           8                824:952     34:125      97:294      279:550     931:985

a. The two values correspond to the value of the statistic being, respectively, either strictly less than, or less than or equal to, the value in column 2.
NOTE: Each entry is the number of replications out of 1,000 trials.
SOURCE: Authors' calculations.

detection by our tests for 1992:IVQ.18 Such an
acceleration of inflation would provide sig­
nificant evidence against the constancy of V*.

III. Evaluating
Alternative, Less
Specific Hypotheses
At this point, a true believer in higher equilibrium
velocity will object that, while our approach most­
ly rejects the specific shifted and upward-trending
■

18 Alternatively, it may be that the variance of the innovations has increased. One way to see how inference about the constancy of V* depends on the assumed variance of the innovation process is to note that, if the Blue Chip forecast is correct, the P* model's 1993:IQ forecast of 1.2 percent will miss by about 1.6 percent. Since we have assumed an innovation standard error of 0.6 percent, this is about a two-and-one-half-standard-deviation miss, which is unusual. If the innovation standard deviation were instead (say) 1.6 percent, the forecast error would be only about one standard deviation, which is not so odd.

V* hypotheses outlined above, this does not
conclusively prove that V* has not changed.
Simulations with slower growth trends in V* or
ones that started later than 1990:IQ, for ex­
ample, might not be rejected.
The objection has merit. Our experiments con­
sider only a few specific alternative hypotheses.
To evaluate rigorously, using our stochastic
simulation method, the evidence for or against a
less specific hypothesis — such as “V* shifted sometime in the late 1980s or early 1990s” — would require repeating our experiments using alternative
models with shifts beginning in 1989:IVQ, and
again with shifts beginning in 1990:IQ, and so
on. The number of required simulations in­
creases even further if we allow for a number of
trend growth rates, rather than the 1.5 percent
annual V* growth used here.
We can, however, address the issue indirectly.
Our test statistics should be valuable in detecting

FIGURE 5
Summary of Simulation Experiments for 1993:IIQ

[Same layout as figure 4: twelve panels, one per test statistic, each showing, for H1-H5, how the statistic's projected 1993:IIQ value compares with its simulated distribution.]

NOTE: Each horizontal line represents 1,000 replications of the P* model under either H1, H2, H3, H4, or H5. Shown after each line is the number of replications wherein the value of the statistic is less than in 1993:IIQ.
SOURCE: Table 2. H1-H5 correspond to columns 3-7 in the table.

shifts in V* that begin in other time periods or
that follow time paths with somewhat different
shapes than those considered above. According
to table 3, when V* is subjected to a one-time
upward shift of 6 percent, within six quarters
the best of our test statistics (using the 5 percent
critical values shown in the appendix) reject the
(false) hypothesis of an unchanged V* in more
than half the replications. When V* is subjected
to the less dramatic change of increasing at a
1V2 percent annual rate, all of our statistics have
difficulty detecting this new trend growth until at
least three years have passed, as shown in table 4.
In part, this slow speed of detection is due to the
high underlying variance of Vt.
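The rejection counts in tables 3 and 4 are power calculations: simulate the statistic under a shifted-V* data-generating process and count how often it exceeds the null hypothesis's 5 percent critical value. A sketch with stand-in distributions (the real ones come from the article's stochastic simulations):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in sampling distributions: the statistic under the null
# (V* unchanged) and under a shifted-V* data-generating process.
under_null = rng.chisquare(df=1, size=1000)
under_shift = rng.noncentral_chisquare(df=1, nonc=4.0, size=1000)

# A table 3/4 entry: replications in which the statistic exceeds the
# null's 5 percent critical value (its 95th percentile, as in table A-1).
crit = np.quantile(under_null, 0.95)
rejections = int(np.sum(under_shift > crit))
power = rejections / 1000.0
```

Slow detection of a trending V* then corresponds to the shifted distribution drifting only gradually past the null's critical value as quarters accumulate.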

IV. Conclusion
All models used for policy analysis require periodic revalidation of their underlying assumptions. Of particular concern in the P* model is

the assumed constancy of the long-run velocity
of M2. Unfortunately, the long-run velocity of
M2 is no more amenable to direct observation
than other “long-run” variables in economic
models. Two of our findings suggest that it has
not changed, however. First, the deceleration of
inflation over the past three years (at least through
1992:IIIQ) closely resembles the predictions of
the P* model based on an unchanged long-run
M2 velocity. Second, stochastic simulation of the
P* model under five alternative hypotheses regard­
ing putative shifts in V* provides little evidence
against the constant V* hypothesis, strong evi­
dence against the hypothesis of a one-time shift
following the FIRREA legislation, and somewhat
weaker evidence against the hypothesis of an
upward trend during the past three years.
These results suggest little reason for policy­
makers to abandon the P* model when seeking to
understand the future adjustment of inflation to
money growth. Comparison of the P* model’s in­
flation forecasts to the Blue Chip consensus fore­

TABLE 3
Number of Rejections of Hypothesis “V* Has Not Changed” When V* in Fact Increased 6 Percent in 1989:IIIQ

[Rejection counts out of 1,000 replications, using the 5 percent critical values of table A-1, reported by quarter (1990:IQ through 1994:IVQ) for each statistic: lm shift, lm trend, lm both; ch4, ch8, ch12; rw4, rw8, rw12; bn4, bn8, bn12.]

SOURCE: Authors' calculations.

cast suggests that market participants already
believe that V* has shifted. In so doing, they ap­
parently are discounting evidence that the steep
slope of the yield curve has induced portfolio
substitution away from M2 (particularly small
time deposits) and toward assets such as bond
mutual funds.
Our results also suggest a word of caution.
The high variance of Vt means that attempts to
distinguish changes in V* from short-run move­
ments in Vt are subject to a high degree of un­
certainty. Our tests almost surely would have
identified by now a large, discrete shift in V* that
occurred other than very recently. However,
they might not yet have detected an emerging slow growth trend or a more rapid trend that started later than 1990:IQ. To the extent that inflation responds with a long and variable lag to
changes in money growth, this uncertainty rein­
forces the need for caution and vigilance in the
conduct of monetary policy. If M2’s long-run
equilibrium velocity has in fact shifted or is
trending up, continuing slow money growth
may yield less progress toward price stability
than expected. The stickiness and (later) halting decline of long-term interest rates during the recovery likely reflect, in part, views by financial
market participants that V* has increased and
that price stability is not yet the rule of the land.


TABLE 4
Number of Rejections of Hypothesis “V* Has Not Changed” When V* in Fact Began Growing at a 1½ Percent Rate in 1990:IQ

[Rejection counts out of 1,000 replications, using the 5 percent critical values of table A-1, reported by quarter (1990:IQ through 1994:IVQ) for each statistic: lm shift, lm trend, lm both; ch4, ch8, ch12; rw4, rw8, rw12; bn4, bn8, bn12.]

SOURCE: Authors' calculations.

Appendix — The Test Statistics
The 12 statistics calculated during the simulations for each quarter include tests for omitted variables and for properties of forecast errors.19 The first three statistics are LM tests for omitted variables in equation (1): lm shift tests for a post-1989:IIQ shift dummy, lm trend for a time trend beginning in 1990:IQ, and lm both for both simultaneously.

■

19 To obtain forecast errors for the tests that need them, we estimate the P* model (using the constant V* version of P*) for each quarter of the simulation period using the simulated Pt series running up through the previous quarter. A single-step forecast error for the quarter is computed and saved, the process is repeated for the next quarter, and so on.
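The recursive forecast-error scheme of footnote 19 can be sketched as below. The one-regressor equation is a simplified stand-in for the article's equation (2), which includes additional terms, and the data are made up:

```python
import numpy as np

def one_step_errors(d_infl, lagged_gap, start):
    """Recursive one-step-ahead forecast errors, as in footnote 19.

    Quarter t is forecast from a model re-estimated on data through t-1
    only.  The regression d_infl[t] = c + b * lagged_gap[t] is a
    simplified stand-in for the article's equation (2)."""
    errors = []
    for t in range(start, len(d_infl)):
        # Estimate by least squares on the sample ending at t-1 ...
        X = np.column_stack([np.ones(t), lagged_gap[:t]])
        coef, *_ = np.linalg.lstsq(X, d_infl[:t], rcond=None)
        # ... then forecast quarter t and save the single-step error.
        errors.append(d_infl[t] - (coef[0] + coef[1] * lagged_gap[t]))
    return np.array(errors)

# Illustrative use on made-up data:
rng = np.random.default_rng(3)
gap = rng.normal(0.0, 0.05, 40)
d_infl = -0.1 * gap + rng.normal(0.0, 0.006, 40)
e = one_step_errors(d_infl, gap, start=20)
```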
An appropriate test for a 1989:IIIQ shift in equilibrium velocity can be formulated as a test for an omitted variable, where the omitted variable itself is a dummy variable that equals zero until 1989:IIQ and one thereafter. To see this, notice that the variable p* in equation (1) is defined as p* = m2 + v* - q*, where lowercase letters indicate natural logs. A shift or trend in v* translates directly into an equivalent shift or trend in p*. If a 6 percent increase in equilibrium velocity causes us to understate p* by 0.06, this can be handled in equation (1) by adding a constant term equal to -0.06 times α, the coefficient on p - p*. The rationale for the lm trend test is identical.
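In symbols, with all variables in natural logs, the argument is a two-line calculation (α is the coefficient on p - p*):

```latex
\[
p^{*}_{\mathrm{true}} = m2 + (v^{*} + 0.06) - q^{*}
                      = p^{*}_{\mathrm{used}} + 0.06 ,
\]
\[
\alpha \left( p - p^{*}_{\mathrm{used}} \right)
  = \alpha \left( p - p^{*}_{\mathrm{true}} \right) + 0.06\,\alpha ,
\]
```

so adding the constant \(-0.06\,\alpha\) restores the correct specification. The omitted variable is thus a constant for a shift in \(v^{*}\), or a deterministic trend for a trend in \(v^{*}\), which is precisely what the lm statistics test for.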


TABLE A-1
95th Percentile of Empirical Sampling Distribution of 12 Test Statistics under Null Hypothesis that V* Is Unchanged from Its Long-Run Value

Quarter      lm shift  lm trend  lm both   ch4    ch8     ch12    rw4    rw8    rw12   bn4  bn8  bn12
1990:IQ      0.83      0.89      1.31      8.05   12.72   15.02   3.12   2.25   1.28   3    6    6
1990:IIQ     0.87      0.87      1.38      9.42   13.78   14.86   4.01   2.23   1.34   4    6    7
1990:IIIQ    0.86      0.85      1.29      9.82   13.88   15.73   3.94   2.76   1.93   4    5    7
1990:IVQ     0.86      0.81      1.34      9.99   14.38   17.70   3.72   3.10   2.16   4    6    8
1991:IQ      0.89      0.93      1.45      10.39  14.64   19.31   4.26   3.65   2.68   4    6    9
1991:IIQ     0.94      0.89      1.42      10.00  15.31   19.67   4.21   4.12   2.90   4    6    8
1991:IIIQ    0.92      0.85      1.41      9.97   15.53   19.95   3.78   3.74   3.08   4    6    8
1991:IVQ     0.94      0.88      1.38      9.64   15.98   19.98   3.82   4.33   3.60   4    6    9
1992:IQ      0.98      0.84      1.39      9.52   15.85   19.70   3.56   4.13   4.03   4    6    8
1992:IIQ     0.90      0.83      1.44      9.48   15.56   20.63   3.79   3.74   3.89   4    6    9
1992:IIIQ    0.89      0.84      1.42      9.64   16.23   21.27   3.70   3.60   3.80   4    6    9
1992:IVQ     0.87      0.76      1.33      9.25   15.52   21.64   3.40   3.31   3.75   4    6    9
1993:IQ      0.90      0.82      1.37      9.30   15.21   21.51   3.62   3.55   3.72   4    6    9
1993:IIQ     0.82      0.72      1.38      9.11   15.51   21.49   3.27   3.26   3.26   4    6    8
1993:IIIQ    0.81      0.77      1.36      9.37   15.66   21.51   3.79   3.17   3.30   4    6    8
1993:IVQ     0.80      0.72      1.31      9.29   14.81   21.01   3.72   3.10   3.17   4    6    9
1994:IQ      0.76      0.67      1.25      9.73   14.87   20.82   3.36   2.96   2.87   3    6    9
1994:IIQ     0.78      0.68      1.23      9.10   14.93   20.49   3.27   2.91   3.15   4    6    9
1994:IIIQ    0.76      0.67      1.27      9.01   15.17   20.65   3.44   2.91   2.98   4    6    9
1994:IVQ     0.78      0.63      1.26      9.31   14.82   20.59   3.65   3.05   2.75   4    6    8

SOURCE: Authors' calculations.

Chow forecast tests have long been used to
determine parameter constancy and are, in fact,
tests of the constancy of variances. The idea is
that if the process generating the data changes
at time t but the model used by the forecaster
does not, the forecast error variance will in­
crease. The utility of the test is limited by its im­
plicit assumption that the variance of the true
disturbances is constant. Our three Chow statistics — ch4, ch8, and ch12 — are calculated as
the sum of the latest four, eight, or twelve
squared forecast errors, respectively, divided by
the variance of the simulation innovations.

The rw statistics are our own invention, motivated by the idea that a persistent misspecification of the P* model, such as would result
from a shift or trend in V*, will lead to positive
autocorrelation in the forecast errors. The var­
iance of the sum of K consecutive forecast er­
rors will then be much larger than just K times
the innovation variance. The rw4 statistic is the
square of the sum of the four most recent fore­
cast errors, divided by four times the innovation
variance; rw8 and rw12 are analogous. An rw
statistic can be written as the sum of a Chow sta­
tistic plus a term that measures autocorrelation
in the forecast errors. Thus, we expect the rw
test to be more powerful than the correspond­
ing Chow test when the alternative hypothesis
involves positive forecast error autocorrelation.

The binomial statistics (bn4, bn8, and bn12)
are simple counts of the number of positive
forecast errors made over the corresponding inter­
vals. A correctly specified model should, on aver­
age, give about the same number of positive and
negative forecast errors. The estimated coefficient
in equation (2) is negative, so if V* and P* are un­
derstated, we would expect to see an inordinately
high number of positive forecast errors.
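The three families of forecast-error statistics defined above can be computed directly from a vector of one-step-ahead errors. The error values and innovation variance below are illustrative:

```python
import numpy as np

def ch(errors, k, sigma2):
    """Chow-type statistic: sum of the latest k squared forecast
    errors, divided by the innovation variance."""
    e = errors[-k:]
    return float(np.sum(e ** 2) / sigma2)

def rw(errors, k, sigma2):
    """Random walk statistic: square of the sum of the latest k
    forecast errors, divided by k times the innovation variance.
    Positive error autocorrelation inflates this relative to ch."""
    e = errors[-k:]
    return float(np.sum(e) ** 2 / (k * sigma2))

def bn(errors, k):
    """Binomial statistic: count of positive errors among the latest k.
    An understated V* (hence P*) pushes this count up."""
    return int(np.sum(errors[-k:] > 0))

# Example with made-up forecast errors and an assumed 0.6 percent
# innovation standard deviation:
errors = np.array([0.001, -0.002, 0.003, 0.001, 0.002, -0.001, 0.004, 0.002])
sigma2 = 0.006 ** 2
ch4, rw4, bn4 = ch(errors, 4, sigma2), rw(errors, 4, sigma2), bn(errors, 4)
```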
Table A-1 shows the 95th percentile of the
12 statistics’ sampling distributions, based on
1,000 replications, under the null hypothesis
that V* has not changed from its 1955-89 value.
The number 0.87 in the 1992:IVQ row and
lm shift column, for example, indicates that the lm shift statistic for 1992:IVQ was less than or
equal to 0.87 in 950 of the 1,000 replications of
the constant V* model.




References
Dewald, William G. “Monetarism Is Dead; Long
Live the Quantity Theory,” Federal Reserve
Bank of St. Louis, Review, vol. 70, no. 4
(July/August 1988), pp. 3-18.
Feinman, Joshua J., and Richard D. Porter. “The
Continuing Weakness in M2,” Board of
Governors of the Federal Reserve System,
Finance and Economics Discussion Paper
No. 209, September 1992.
Hallman, Jeffrey J., Richard D. Porter, and David H. Small. "Is the Price Level Tied to the M2 Monetary Aggregate in the Long Run?" American Economic Review, vol. 81, no. 4 (September 1991), pp. 841-58.
Humphrey, Thomas M. "Precursors of the P-Star Model," Federal Reserve Bank of Richmond, Economic Review, vol. 75, no. 4 (July/August 1989), pp. 3-9.
Kuttner, Kenneth N. "Inflation and the Growth Rate of Money," Federal Reserve Bank of Chicago, Economic Perspectives, vol. 14, no. 1 (January/February 1990), pp. 2-11.
Laidler, David E.W. The Demand for Money: Theories and Evidence. New York: Harper and Row, 1985.
Moore, George R., Richard D. Porter, and David H. Small. "Modeling the Disaggregated Demands for M2 and M1: The U.S. Experience in the 1980s," in Peter Hooper et al., eds., Financial Sectors in Open Economies. Washington, D.C.: Board of Governors of the Federal Reserve System, 1991, pp. 21-105.
Pecchenino, R.A., and Robert H. Rasche. "P* Type Models: Evaluation and Forecasts," National Bureau of Economic Research, Working Paper No. 3406, August 1990.


Examining the Microfoundations
of Market Incentives for
Asset-Backed Lending
by Charles T. Carlstrom
and Katherine A. Samolyk

Introduction
The past two decades have witnessed a virtual
revolution in financial intermediation. One innova­
tion is securitization: the packaging of loans into
pools that are funded by marketable securities. At
the same time, the selling of individual loans has
itself grown tremendously over this period. While
individual loans are primarily sold to other depos­
itory institutions, securitization involves the sales
of securities to nonbank investors as well. Both
loan sales and securitized loan pools are broadly
identified as asset-backed lending.
A financial asset is a claim to future cash flows
as stipulated by the issuer. What distinguishes
asset-backed lending is that the securities involved
are backed by specific financial assets and then
sold. Alternatively, these financial assets might
have been pooled and funded by issuing general
claims on the firm. Instead, when a loan is either
securitized or sold individually, it is funded sepa­
rately rather than with the other assets on the bal­
ance sheet of the loan originator.1 Hence, loan
sales and securitization, from the perspective of
the seller of the asset-backed securities, are a
means of off-balance-sheet finance.

Charles T. Carlstrom and Katherine A. Samolyk are economists at the Federal Reserve Bank of Cleveland. The authors wish to thank Joseph G. Haubrich, Anjan Thakor, and James B. Thomson for helpful comments.

The proliferation of asset-backed lending has been commonly viewed as a response to competitive and regulatory pressures, which have
prompted institutions to participate in credit
markets in ways that are not directly reflected
on their balance sheets. In particular, capital re­
quirements are cited as reducing the profitability
of funding certain investments on-balance-sheet
with deposit liabilities. However, nonbank firms
that are not subject to the regulations associated
with the federal safety net are also engaging in
asset-backed financing. This indicates that there
are important nonregulatory incentives for loan
sales and securitization.
Asset-backed lending has become an impor­
tant mode of funding for particular types of credit.
Though depository institutions are the primary
originators of home mortgages, more than 40 per­
cent of these claims are ultimately financed
through the government-sponsored secondary
mortgage market. In the past several years, how­
ever, asset securitization has spread beyond
government-sponsored sales of mortgage-backed

■ 1 Although securitized loan pools are funded separately, they are
frequently sold with some type of recourse, which means that they are
partially backed by the general claims of the firm that originated the loan.

securities to include private pools that are backed
by increasingly diverse types of loans, from
credit-card receivables to Third World debt. Cur­
rently, more than 15 percent of consumer install­
ment credit is funded through securitization.2
The evolution of financial market innovations
in tandem with changing banking regulations
makes it difficult to assess what is driving the
trends in asset-backed markets. Because we wish
to evaluate why asset-backed lending occurs in
the absence of regulations, we examine how
successful economists have been in applying
formal models to this phenomenon. Although
off-balance-sheet funding can arise for either
market-based or regulatory-based reasons, we
focus on four papers that attempt to model asset-backed lending in the absence of government-sponsored insurance and regulations.
We first outline the general nature of inter­
mediation and describe asset-backed markets in
this context. Information costs have long been
viewed as a rationale for financial intermedia­
tion. The literature on asset-backed lending has
picked up on this theme to argue that loan sales
and securitization are also best understood as a
means of minimizing information costs. There­
fore, in order to understand some of the models
that have attempted to formalize asset-backed
lending, we first discuss several models of finan­
cial contracting under imperfect information,
which have been useful in characterizing the
roles that financial intermediaries play in chan­
neling credit.3 Finally, we analyze how existing
government policies may affect the incentives
for firms, primarily banks and thrifts, to engage
in these activities.

I. An Overview of Intermediation
In a decentralized economy with significant in­
formation and transaction costs, the financial
sector affects how resources are channeled from
lenders to borrowers. As financial conduits, in­
termediaries pool lenders’ resources to fund a
portfolio of claims on many, often diverse, bor­
rowers. In doing so, intermediaries are said to
conduct indirect finance, allowing them to issue
indirect claims with cash flows that differ in vary­
ing degrees from those of the borrowers. Thus,

■ 2 See Federal Reserve Bulletin, Domestic Financial Statistics, Table 1.55, Consumer Installment Credit, March 1993.

intermediaries perform asset transformation in
making their investment and funding choices.
To the extent that information is costly to ob­
tain, financial contracts and institutions also can
reduce the information costs associated with
channeling resources to the most productive in­
vestment opportunities. Thus, intermediation
yields more attractive portfolio choices for inves­
tors while facilitating a more efficient flow of
credit to borrowers.

Intermediation
and Asset
Transformation
Three of the types of asset transformation pro­
duced by intermediaries are 1) denomination
transformation, 2) credit risk transformation,
and 3) maturity transformation. How effectively
these methods can mitigate information costs is
an important part of our subsequent analysis.
Denomination transformation allows inter­
mediaries to lend to borrowers with large credit
needs by issuing smaller-denomination claims
to many savers. For example, mutual funds that
invest in government bonds and Treasury bills
pool the funds of a group of small investors to
fund a portfolio of relatively similar claims.
Denomination transformation also allows small
savers to diversify by enabling them to hold a
wider variety of investments.
Credit risk transformation pools the resources
of many lenders to fund several projects. This al­
lows intermediaries to diversify the risks of the
assets in their portfolios, and thus to issue in­
direct claims to investors with a more predict­
able return than the individual assets being
funded. This is the main role of stock or bond
mutual funds, although most intermediaries
engage in credit risk diversification.
Finally, intermediaries also perform maturity
transformation by issuing indirect claims that
offer a pattern of promised cash flows different
from those promised by borrowers. Banks and
thrifts are noted for the degree of maturity trans­
formation in their portfolios. They fund medium- and long-term projects by issuing short-term
liquid deposits that serve as close substitutes for
legal tender.4 Contractual savings institutions,
such as insurance companies and pension funds,

■ 3 Two important papers surveying this literature are Gertler (1988) and Bhattacharya and Thakor (1991).

■ 4 McCulloch (1981) emphasizes that this degree of maturity transformation is actually “misintermediation” that reflects the regulatory incentives for banks to assume credit risks as well as the risk associated with mismatching the durations of their assets and liabilities.

produce a very different sort of cash flow trans­
formation. They fund portfolios of assets by sell­
ing contracts promising cash flows that are
contingent on specific events, such as property
loss, death, or retirement.
Much of the intermediation associated with
these types of asset transformation channels funds
to borrowers who place debt or equity directly in
credit markets. A distinguishing characteristic of
some intermediaries is that they specialize in lend­
ing to borrowers who would find it prohibitively
costly to obtain funds through direct market place­
ments because of the relative costs associated
with screening, monitoring, and servicing their
claims. Depository institutions and finance compa­
nies, for example, profit by developing a compara­
tive advantage in lending to small or information­
intensive borrowers. Thus, some intermediaries
are special in the sense that they provide lenders
with new investment opportunities— that is, they
are asset originators.

An Overview of Asset-Backed Markets

In contrast to funding a portfolio of assets by
the issue of unsecured claims, asset-backed
lending is an alternative funding mode by
which an asset or set of assets is sold by its
originator. We use the term asset-backed lend­
ing to refer to both securitization and individual
loan sales.
A loan sale is usually made by a bank to
another bank, and involves no asset pooling in
and of itself.5 However, the process of making
loans marketable, by increasing the access of
other lenders to investment opportunities, can
improve the allocation of credit. Loan sales in­
volve transactions between two (or more) finan­
cial institutions, whereas securitization generally
involves the sale of claims (against the securi­
tized asset portfolio) to individual investors who
hold these in their portfolios for investment pur­
poses. Consequently, securitized claims are
priced like other capital-market instruments, but
loan sales are priced based on bilateral (multi­
lateral) negotiations.
Alternatively, nonmortgage securitization
usually takes the form of a bank or nonbank
firm funding a pool of similar assets by forming
a subsidiary that markets claims to the pool to
nonbank investors. These pools are generally
originated by large firms. From the perspective
of the pool originator, however, nonmortgage
securitization is basically a means of separating
the financing of certain assets from that of its
general portfolio.
Finally, securitization of mortgages takes
place in the secondary market in order to fund
pools of insured mortgages. These pools in­
clude claims from many, often geographically
diverse, mortgage originators. This form of se­
curitization simultaneously creates a pool of sim­
ilar loans (mortgages) purchased from loan
originators in different localities. Hence, a unique
characteristic of mortgage-backed securities is
that they are collateralized by loans from various
financial firms.

■ 5 For a comprehensive overview of the loan sales market, see Gorton and Haubrich (1990).

Loan Sales versus Securitization

A major difference between loan sales and
no recourse for the party buying the loan. Most
view this as the result of regulators’ treatment of
loan sales in their assessment of capital ade­
quacy requirements for depository institutions.
Banks and thrifts are not required to hold capi­
tal against loans sold, except for those sold with
recourse, which are treated as if they are on-balance-sheet items in determining capital ade­
quacy. Thus, given the incentives to maximize
leverage, these institutions tend to sell loans
without recourse to truly “get them off the
regulatory books.”
Securitization, on the other hand, is generally
associated with the provision of some form of
credit enhancement that increases the market­
ability of the asset-backed securities. One common
form of enhancement for securitized assets is
backing by a bank-issued standby letter of credit
(SLC). For a stipulated fee, banks issue SLCs,
which are promises to insure the purchasing party
up to a prespecified amount for losses incurred on
the securitized loans. Before a loan pool is funded,
both the loans and the bank issuing the SLC are
rated. Because the rating of the pool is affected by
the rating of the bank issuing the guarantee, the
extent to which this method of credit enhance­
ment is used is limited. Moreover, to avoid regu­
lated capital assessments, a bank securitizing a
pool of loans usually does not issue the credit-enhancing SLC. Thus, the originator of the pool is
generally not also its guarantor.
An increasingly popular enhancement, the
cash-collateral-account method, has the pool
originator covering potential losses with cash
placed in an escrow account. Another method to
enhance loan quality is to overcollateralize the
loan pool. That is, extra loans are included in
the pool so that the value of the loans exceeds
the value of the securities issued to fund it.
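The arithmetic of overcollateralization can be sketched in a few lines. The dollar amounts below are hypothetical, not drawn from any particular deal: the excess collateral absorbs defaults first, and security holders bear losses only after that cushion is exhausted.

```python
def investor_loss(pool_value_after_defaults, securities_issued):
    """Loss borne by security holders once the extra collateral
    in an overcollateralized pool has been exhausted."""
    return max(0.0, securities_issued - pool_value_after_defaults)

# Hypothetical example: a $110 million loan pool backs $100 million
# of securities, a 10 percent cushion.
print(investor_loss(105e6, 100e6))  # pool loses $5 million: investors lose nothing
print(investor_loss(95e6, 100e6))   # pool loses $15 million: investors lose $5 million
```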

Why Fund Off-Balance-Sheet?
Given the attributes of asset pooling, it is natural
to question the benefit of funding a loan or pool
of loans off-balance-sheet. The answer, of
course, is that this method is more efficient—
less expensive— than on-balance-sheet fund­
ing. As we have asserted, asset-backed lending
is commonly viewed as a response to both regu­
latory costs and market incentives.
In the early years of securitization, regulation
was clearly an important factor motivating the
growth of the secondary mortgage market.6
Branching restrictions in tandem with information
costs caused banks and thrifts to operate in relatively
localized markets. The government-sponsored
secondary mortgage markets allowed these institu­
tions to hold portfolios from many different parts
of the country. These regulatory restrictions are
less important today. This suggests that informa­
tion costs are becoming the more relevant deter­
minant of interregional lending.
A fundamental role of intermediation is to
produce the information involved in channeling
credit in the most cost-effective way. In particular,
lenders do not always have good information
about the risk and return of borrowers’ investment
opportunities. Intermediaries specialize in produc­
ing this information, as well as in structuring and
servicing contracts. Therefore, in order to under­
stand why off-balance-sheet funding may be
more efficient, it is useful to examine the roles of
both financial contracts and intermediation in
mitigating information costs.
Here, the primary focus is on market incen­
tives— specifically due to information costs— as
a motive for asset-backed lending. In the follow­
ing section, we discuss several models of finan­
cial contracting and intermediation. We then
proceed to examine why asset originators might
choose asset-backed lending as an alternative to
on-balance-sheet funding.


■ 6 See Pavel (1986) for a comprehensive description of the historical evolution of this market.

II. Financial
Structure in
Response to
Information Costs
Even in a world where there is complete informa­
tion about available investment opportunities,
credit intermediation can occur if individuals
without wealth have more profitable projects than
do those with greater financial resources. How­
ever, while intermediation can help in diversifying
the portfolios of the individuals supplying finan­
cial resources, the nature of the claim on these in­
vestment projects is uncertain. In particular, as
Modigliani and Miller (1958) state, it is not clear
why a project should be funded via a debt con­
tract, which stipulates a predetermined promised
cash flow and default (should that cash flow not
be met), versus an equity contract, which prom­
ises only to pay a cash flow that is contingent on
the project’s return— precluding the event of de­
fault. Modigliani and Miller show that in a world
without taxes, transaction costs, and information
costs, entrepreneurs would be indifferent between
funding projects with debt or equity.

Debt versus
Equity Contracts
Information costs thus play an important role in
explaining the structure of the contracts between
borrowers and lenders that we observe in reality.
One model of financial contracting under imper­
fect information is presented in Townsend
(1979). He demonstrates that when it is costly
for lenders to monitor the performance of a
borrower’s project, debt contracts allow lenders
to minimize monitoring costs.7 In his model,
borrowers can observe the proceeds of their in­
vestment opportunities, while lenders can do so
only by paying a fee. In this setting, an equity-type contract stipulating a payoff that always
depends on the project’s realization implies that
investors will always have to expend resources
to monitor the project’s outcome.
Alternatively, debt contracts minimize these
monitoring costs by specifying a contractual inter­
est payment to lenders. Borrowers pay this pre­
specified amount except when default is declared.
In that situation, lenders receive the realized value
of the project (or firm), which they must ascertain

■ 7 This suggests that debt would be preferred to equity. One reason equity might be preferred is if bondholders cannot observe the riskiness of the investments undertaken by the firm’s management. In that situation, the investments undertaken will be too risky, which transfers wealth from bondholders to equityholders.

by incurring monitoring costs. Here, debt con­
tracts minimize monitoring costs because
lenders must monitor investment outcomes
only in the event of borrowers’ default.8
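A small simulation makes the comparison concrete. The sketch below is a stylized rendering of this costly-state-verification logic, with made-up parameter values rather than anything from Townsend's paper: an equity-type claim forces the lender to audit every realization, while a debt claim triggers an audit only on default.

```python
import random

def expected_monitoring_cost(n_projects, default_prob, audit_cost, contract, seed=0):
    """Average per-project auditing cost borne by lenders.

    "equity": the payoff depends on the realization in every state,
    so every project outcome must be verified.
    "debt": outcomes are verified only when default is declared.
    Parameter values used below are illustrative, not from Townsend (1979).
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_projects):
        defaulted = rng.random() < default_prob
        if contract == "equity" or (contract == "debt" and defaulted):
            total += audit_cost
    return total / n_projects

# With a 5 percent default rate, debt cuts expected auditing costs
# to roughly 5 percent of what an equity-type claim would require.
print(expected_monitoring_cost(100_000, 0.05, 1.0, "equity"))  # 1.0
print(expected_monitoring_cost(100_000, 0.05, 1.0, "debt"))    # about 0.05
```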

Information Costs
and Credit Risk
Transformation
One function of financial intermediation, as
mentioned earlier, is to pool assets in order to
reduce portfolio risks, thus enabling investors
with limited wealth to hold a diversified
portfolio. Another, indirect advantage of diver­
sification is that it helps to minimize information
costs by decreasing the need for investors to
monitor privately observed portfolio risks.
Diamond (1984) examines how asset diversi­
fication by banks mitigates the need for depositors
to monitor the performance of bank investments.
He describes a world in which information about
realized project returns is costly. If many lenders
are needed to fund one borrower, an intermediary
could group these lenders to fund the project.
However, because the project’s return is costly to
observe, each lender would in general have to
monitor the intermediary’s investment.
Diamond demonstrates that by diversifying
across many projects, an intermediary can
decrease the variability of the return on its port­
folio, and thus the need for lenders to monitor
the performance of the portfolio. Depositors in
essence loan funds to the bank in exchange for
debt contracts. A reduction in portfolio risks
lowers expected monitoring costs by reducing
the probability that the firm will default on its
liabilities by not paying depositors their stipu­
lated return. In the extreme case, complete
diversification of asset returns eliminates portfo­
lio risk and thus the need for depositors to mon­
itor the bank. Hence, Diamond describes how
asset pooling allows the monitoring function to
be delegated to intermediaries.9
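The diversification mechanism can be illustrated numerically. The Monte Carlo sketch below uses illustrative two-point loan payoffs of our own choosing, not a calibration of Diamond's model: the standard deviation of the average return on a pool of independent loans shrinks roughly with the square root of the number of loans, which is what reduces the default risk that depositors would otherwise need to monitor.

```python
import random
import statistics

def per_loan_return_sd(n_loans, n_trials=2000, seed=1):
    """Monte Carlo standard deviation of the average return on a pool
    of n_loans independent loans, each paying 1.2 (success) or 0.6
    (failure) with equal probability. Payoffs are illustrative, not
    taken from Diamond (1984)."""
    rng = random.Random(seed)
    averages = []
    for _ in range(n_trials):
        total = sum(1.2 if rng.random() < 0.5 else 0.6 for _ in range(n_loans))
        averages.append(total / n_loans)
    return statistics.pstdev(averages)

# A single loan's return has a standard deviation of 0.3; a pool of
# 100 such loans cuts the per-loan standard deviation to about 0.03,
# making default on fixed deposit claims far less likely.
print(per_loan_return_sd(1), per_loan_return_sd(100))
```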

■ 8 This result is predicated on the assumption of deterministic auditing. That is, auditing occurs with a probability of either one or zero. Mookherjee and Png (1989) show that, in general, random auditing will be optimal. That is, even when bankruptcy occurs, the probability of being audited is less than one.

■ 9 Ramakrishnan and Thakor (1984) show that financial intermediaries will also arise with ex ante monitoring costs. Diamond’s paper assumes ex post monitoring costs.
Federal Reserve Bank of St. Louis

III. Asset-Backed
Lending as a
Funding Mode
Diamond’s analysis illustrates an interesting
point, but in more realistic settings, firms may be
limited in how much they can benefit from asset
pooling. This restriction is useful to consider in
examining why loan sales and securitization
may be efficient ways of funding certain invest­
ments. Asset-backed lending in its most general
sense is the sale of an asset by its originator,
which separates the financing of the asset from
that of the originator’s portfolio.
Imperfect information about the portfolio
choices of intermediaries can help to explain
market-based incentives for asset-backed lend­
ing. The first two papers we discuss below cite
the inability of localized or specialized banks to
diversify portfolio returns as a rationale for fi­
nancial firms to engage in both loan sales and
securitization. The models developed in these
papers formalize this rationale, motivating asset-backed lending as a means for local borrowers
to tap into nonlocal sources of funds. The second
two models of asset-backed lending emphasize
the differences in the information available to in­
termediaries versus the individuals who hold
their debt prior to investment choices. These
models formalize asset-backed lending as a
means of collateralizing, thus enabling investors
to obtain financing terms that better reflect the
underlying quality of the projects being funded.

Portfolio Risks and
Capital Constraints
While perfect diversification removes the need
to monitor imperfectly observed portfolio risks,
imperfect diversification creates the need for a
more complicated financial structure. For exam­
ple, when banks cannot perfectly diversify risks,
the amount of their equity capital assumes greater
importance. Without sufficient equity capital,
banks may be unable to attract funding in order to
finance risky investments. By buffering potential
portfolio losses, equity capital serves as an alterna­
tive means of mitigating the need for lenders to
monitor an intermediary: It cushions portfolio
losses and thus protects depositors.
Bernanke and Gertler (1987) and Samolyk
(1989a,b) show that when depositors’ costs of
monitoring an institution are prohibitive, inter­
mediaries may face market-imposed capital con­
straints on the risks associated with their portfolio
choices. Capital inadequacy arises when a bank is

unable to attract funds to finance profitable in­
vestments because it has inadequate capital to
absorb possible portfolio losses.
The key to this result is that it is assumed to be
extremely costly for depositors to monitor the out­
come of a bank’s portfolio. Depositors recognize
that banks have the incentive to report large losses
on their risky assets, in effect claiming that they
are unable to meet depositors’ claims. Hence,
banks will not be able to attract depositors un­
less they have sufficient capital to cover poten­
tial portfolio losses on risky investments.10
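One way to read this constraint is as a simple cap on risky lending. The sketch below is our own stylized version, not the formal condition in Bernanke and Gertler (1987) or Samolyk (1989a,b): if depositors demand equity capital sufficient to absorb worst-case portfolio losses, the stock of equity pins down the scale of risky lending regardless of how many profitable projects are available.

```python
def max_risky_lending(equity_capital, worst_case_loss_rate):
    """Largest risky loan portfolio a bank can fund with deposits when
    depositors require equity sufficient to absorb worst-case losses.
    A stylized reading of the capital-constraint argument; the
    parameter names and functional form are ours, not the papers'."""
    return equity_capital / worst_case_loss_rate

# With $8 of equity and worst-case losses of 10 cents per dollar lent,
# the bank can fund at most $80 of risky loans, however many
# profitable projects its region offers.
print(max_risky_lending(8.0, 0.10))  # 80.0
```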

Limits to the
Benefits of
On-Balance-Sheet
Interm ediation
Capital constraints can arise because banks are
both unable and unwilling to diversify their port­
folios adequately. Government policies have
affected the incentives for intermediaries— espe­
cially banks and thrifts— to manage portfolio risks
prudently. Portfolio and branching restrictions
have limited the ability of banks and thrifts to di­
versify credit risks as well as the risks associated
with maturity transformation. Regulatory limits on
the types of depository lending, such as the “Qual­
ified Thrift Lender Test,” also constrain portfolio
diversification.11 Finally, the provision of federal­
ly sponsored deposit insurance creates moral haz­
ard problems in both the management of credit
risks and the interest-rate risks associated with
maturity transformation. These policies reduce the
potential for depositors (and regulators) to dele­
gate the monitoring function.
Given the partial deregulation of the banking
industry, these restrictions are probably not as
important an impediment to diversification as
they once were. Ironically, a major factor limit­
ing intermediaries from diversifying and hence
minimizing information costs is the very costs of
identifying, monitoring, and funding borrowers
that make financial contracts and intermediation
important. These costs may cause intermedi­
aries to specialize in lending to certain types of
borrowers (industry versus consumers) or to
borrowers in certain regions.

Asset-Backed
Lending as a
Response to
Localized Capital
Constraints
Carlstrom and Samolyk (1993) present a model in
which capital constraints provide one rationale
for off-balance-sheet lending. Their model predicts
that loan sales occur as a response to differences
in project returns across regions that arise when
some regions are capital constrained and others
are not. Similar to the model used by Samolyk
(1989b), banks operate in distinct, informationally
segmented regions or markets. Bankers within a
particular region have a comparative advantage in
supplying loans there because they have better in­
formation about credit conditions or would-be bor­
rowers. However, the inability of banks to diversify
localized portfolios perfectly can cause some
regions to be capital constrained.12
The authors demonstrate that in the absence of
asset-backed lending, a region with a relatively
large set of profitable— albeit risky— investment
opportunities and limited bank capital can be con­
strained. That is, the region will be unable to at­
tract sufficient deposits to fund all of its profitable
investment opportunities. A constrained bank
must channel resources instead into safer but less
profitable investments.
Binding capital constraints cause interregional
differences in returns on projects. These, in turn,
create the incentive for banks in constrained
markets to originate and sell unfunded profitable
investments to banks in unconstrained regions.
Unconstrained banks, though adequately capi­
talized, would not lend to constrained banks via
deposit liabilities because these liabilities are
claims on the constrained banks’ entire portfolios,
which nonlocal firms have no comparative advan­
tage in monitoring. Alternatively, unconstrained
bankers will purchase individual projects from
these banks. They recognize that banks are con­
strained because of excess profitable investment
opportunities in their region. Thus, binding capital
constraints give rise to asset-backed lending by al­
lowing a bank to separate the funding of certain
projects from the performance of its portfolio.

■ 10 In this discussion, depositors should be understood as either
uninsured depositors or banking regulators.
■ 11 The Qualified Thrift Lender Test refers to the regulation that requires thrifts to hold a certain fraction of their portfolio in the form of home mortgages.

■ 12 Capital constraints arise because of short-term variations in lending opportunities that do not create the incentive for a structural reallocation of bank equity capital.


Asset-Backed
Lending as a Means
of Delegating
Nonlocal Monitoring
Carlstrom and Samolyk’s model shows how cap­
ital constraints in informationally segmented
banking markets can cause banks to sell loans,
facilitating a more efficient allocation of resources.
These capital constraints are one example in
which capital markets may not be as efficient as
suggested by textbooks. Loan sales may arise to
help correct the associated regional imbalances.
Another potential problem with intermedia­
tion is that information costs may cause credit to
be rationed for some borrowers. Credit rationing
exists when someone is unable to obtain credit
even though he or she is (ex ante) identical to a
borrower who does obtain financing. W hen in­
formation is costless, economic theory predicts
that credit rationing will not arise because loan
rates will increase until the quantity of loans sup­
plied equals the quantity of loans demanded.
Williamson (1986) demonstrates that it may
be efficient for intermediaries that face monitor­
ing costs to ration credit. As in Diamond, he
characterizes banks as issuing claims to a large
number of lenders and lending to a large num­
ber of borrowers. Because of ex post project
monitoring costs, banks issue debt contracts to
many ex ante identical borrowers, monitor
projects only in the event of default, and pay a
noncontingent return to depositors.
Unlike Diamond, who assumes that banks
can fund any number of investments at a given
cost of funds, Williamson analyzes an economy
in which banks face an increasing marginal cost
of funds: They must charge higher loan rates to
offer returns that will attract the funds of inves­
tors with better alternatives. Higher loan rates,
however, lead to greater monitoring costs be­
cause higher interest charges raise the probabil­
ity that borrowers will default on their loans.
Although lenders get all of a project’s proceeds
in the event of default, the increase in expected
monitoring costs may actually decrease the ex­
pected return of a loan. In this setting, interme­
diaries may be unwilling to charge higher loan
rates in order to fund more projects and instead
choose to ration credit.
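This logic can be checked with a small numerical example. The sketch below assumes a uniform project-return distribution and illustrative parameter values in the spirit of Williamson (1986), not his exact specification: the lender's expected return first rises and then falls in the promised loan rate, because a higher rate raises the probability of default and hence expected monitoring costs, so the revenue-maximizing rate is interior and excess loan demand at that rate is rationed rather than priced away.

```python
def expected_lender_return(promised_rate, monitor_cost, n_grid=10_000):
    """Expected return to a lender on a debt contract when the
    project's return y is uniform on [0, 2]: the lender receives the
    promised rate when y >= rate, and y net of a monitoring cost when
    the borrower defaults (y < rate). Illustrative setup, not
    Williamson's exact model."""
    total = 0.0
    for i in range(n_grid):
        y = 2.0 * (i + 0.5) / n_grid          # midpoint rule over [0, 2]
        payoff = promised_rate if y >= promised_rate else y - monitor_cost
        total += payoff
    return total / n_grid

rates = [0.5 + 0.1 * k for k in range(16)]    # promised rates 0.5 .. 2.0
returns = [expected_lender_return(r, 0.5) for r in rates]
best = rates[returns.index(max(returns))]
print(best)  # the revenue-maximizing rate is strictly below the highest rate
```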
In a related paper, Boyd and Smith (1989) ex­
tend this analysis to show another way in which
asset-backed lending may improve the perform­
ance of informationally segmented credit mar­
kets. As in Carlstrom and Samolyk, differences in
interregional returns on projects lead to a type
of asset-backed lending.


Boyd and Smith consider a variation of the con­
tracting model described by Williamson (1987).13
In their model, identical borrowers, whose proj­
ects require costly ex post state verification, con­
tract individually with lenders to supply funds. To
observe the ex post returns on borrowers’ invest­
ments, lenders must incur monitoring costs, but
such costs are assumed to be larger for lenders in
other markets. Thus, like Carlstrom and Samolyk’s
model, there is a comparative advantage to fund­
ing projects within one’s own region. Boyd and
Smith consider two banking regions that differ in
the local ratios of potential lenders to borrowers,
creating a scenario in which a Williamson-type
credit rationing occurs in only one of the regions.
Securitization allows lenders in unrationed
markets to fund projects in rationed markets:
An intermediary pools and monitors the loans
of local borrowers, funding them by issuing
claims to other markets. Like Diamond’s model
of intermediation, diversification by this inter­
mediary allows the ultimate investors, lenders
in the unrationed market, to delegate the
monitoring to the intermediary in the market
where the loans are being originated.
Lenders do not find it profitable to fund proj­
ects in other markets directly because of the
large intermarket monitoring costs. However,
asset pooling, which completely diversifies away
the risk of the pool, eliminates the need for in­
vestors to incur the large intermarket costs of
monitoring the underlying assets. All monitoring
takes place locally by the coalition at the lower
intramarket monitoring cost. Similar to Carlstrom
and Samolyk’s model, loan sales occur in order
to equalize expected project returns across mar­
kets. Credit rationing, however, may still occur
in markets where assets are being securitized.

How Well Do These
Models Describe
Off-Balance-Sheet
Financing?
In Boyd and Smith’s model, securitized loan pools
are originated by a coalition of individual borrow­
ers within one locality, but are funded by lenders
in another. Most mortgage securitization takes
place via an interregional intermediary, which
pools loans from loan originators in many

■ 13 Williamson (1987) shows that credit rationing can occur in a model with debt contracts, where individual borrowers contract with individual lenders. This paper is similar to his earlier one (Williamson [1986]), except that there are no financial intermediaries.

localities. To the extent that interregional diver­
sification is conventionally viewed as an impor­
tant rationale for mortgage securitization, the
Boyd-Smith model is limited in the extent to
which it can be interpreted as a model of the
secondary mortgage market.
Instead of being a model of regional mortgage
securitization, their analysis is a better descrip­
tion of most nonmortgage securitization. They
do not, however, depict an intermediary that
funds a share of its projects off-balance-sheet
through a subsidiary. Rather, each individual
borrower (not a “bank”) funds his entire project
along with other borrowers.
Carlstrom and Samolyk depict loan sales and
not securitization. However, they model one im­
portant aspect of nonmortgage asset-backed
lending in the sense that banks fund parts of
their portfolio on- and off-balance-sheet.
These models help explain some of the bene­
fits of both loan sales and securitization. For two
reasons, however, the models are limited in de­
scribing some dimensions of asset-backed mar­
kets. First, both the Carlstrom-Samolyk and Boyd-Smith models rely on regionally segmented bank­
ing markets to drive their results— an increasingly
less likely scenario given the consolidation of the
depository industry and the increase in nonbank
intermediation. Second, as discussed earlier, secu­
ritized assets are usually backed by some type of
credit enhancements or provide some sort of
recourse for the purchasing party that helps make
them marketable. Neither of these papers explains
why credit enhancements might be an important
part of the securitization process. The next two
papers discuss the importance of credit enhance­
ments in making risky bank assets attractive to
nonbank investors.

Asset-Backed
Lending as a
Means of Signaling
Credit Quality
Greenbaum and Thakor (1987) present a model
in which the choice of on- versus off-balance-sheet funding (which they refer to as the deposit
funding mode [DFM] and securitized funding
mode [SFM], respectively) is a sorting mecha­
nism whereby borrowers choose one or the
other based on the quality of their project. If a
borrower selects the SFM, he must also choose
the degree to which the bank will provide re­
course in the event of default. The degree to
which a loan is collateralized signals the quality




of the asset to nonbank investors. This elimi­
nates the need for them to screen the borrower.
The model consists of borrowers with projects
that differ in quality. Borrowers must choose be­
tween one of two funding modes. If a project is
funded on-balance-sheet, a bank’s entire stock of
equity capital effectively collateralizes the project.
The bank screens the borrower to ascertain the
quality of his project, while depositors screen the
bank. This redundancy is necessary because
banks are unable to convey the outcome of their
screening directly to depositors. Under the DFM,
the value of the bank’s collateralization and both
of these screening costs are priced into the
borrower’s risk-adjusted loan rate.
Alternatively, under the SFM, a bank offers to
fund the project off-balance-sheet by providing
a credit enhancement in the form of bank col­
lateralization. A borrower pays for the amount
collateralized with an up-front fee. Banks screen
borrowers and then announce a fee schedule as­
sociated with a borrower’s choice of collaterali­
zation. As with insurance, lower-risk projects
are charged less for any given level of coverage
(collateralization). A borrower’s choice of cov­
erage is public information and thus can signal
a project’s quality, eliminating the need for the
purchasing party also to screen the asset.
For higher-quality projects, the fee associated
with the borrower’s choice of bank collateraliza­
tion is offset by the reduction in depositors’ screen­
ing costs. For poorer-quality projects, however, the
fee necessary to purchase collateralization is
greater, outweighing the benefits from the elim­
ination of screening by nonbank investors. Thus,
poorer-quality borrowers forgo the fee and
choose the DFM with full collateralization, although depositors’ screening costs will be priced into their loan rates.
An important implication of this framework is
that higher-quality assets will tend to be securi­
tized, while lower-quality assets will tend to be
held on-balance-sheet. The intuition is as follows:
Higher-quality borrowers receive a lower interest
cost than lower-quality borrowers under either
funding mode. However, because the choice of
collateralization under the SFM produces informa­
tion about project quality and eliminates the need
for asset-backed investors to screen the underly­
ing assets, higher-quality borrowers can take ad­
vantage of low credit enhancement rates to obtain
better terms of finance. Moreover, their cost of
funding is lower despite the increased risk asso­
ciated with less-than-full bank collateralization
from the investors’ perspective.
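The sorting logic above can be illustrated with a small numerical sketch. The functions and all of the rates below are hypothetical illustrations chosen for clarity; they are not taken from the Greenbaum-Thakor paper, which develops the argument qualitatively:

```python
# Illustrative sketch of the Greenbaum-Thakor funding-mode choice.
# All rates are hypothetical; only the comparative logic matters.

def dfm_cost(screening_by_bank, screening_by_depositors, base_rate):
    """Deposit funding mode: both the bank's and the depositors'
    screening costs are priced into the borrower's loan rate."""
    return base_rate + screening_by_bank + screening_by_depositors

def sfm_cost(screening_by_bank, base_rate, collateral_fee):
    """Securitized funding mode: the borrower pays an up-front
    collateralization fee, but investors need not screen."""
    return base_rate + screening_by_bank + collateral_fee

# As with insurance, lower-risk projects are charged less
# for any given level of coverage (collateralization).
fee = {"high_quality": 0.002, "low_quality": 0.015}

screen_bank, screen_dep, base = 0.003, 0.005, 0.06

for quality in ("high_quality", "low_quality"):
    dfm = dfm_cost(screen_bank, screen_dep, base)
    sfm = sfm_cost(screen_bank, base, fee[quality])
    choice = "SFM" if sfm < dfm else "DFM"
    print(f"{quality}: DFM={dfm:.4f}, SFM={sfm:.4f} -> {choice}")
```

With these numbers, the high-quality borrower's collateralization fee is smaller than the depositor-screening cost it replaces, so it selects the SFM; for the low-quality borrower the fee outweighs that saving, and it stays with the DFM — reproducing the model's implication that higher-quality assets tend to be securitized.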
The Greenbaum-Thakor framework repre­
sents an important step in characterizing the


trends in securitization, especially to the extent
that asset-backed lending separates the col­
lateralization and monitoring of the underlying
claims from their funding. Similar to the Boyd-Smith model, this model depicts asset-backed
lending as a means of eliminating the need for
investors to monitor the performance of the un­
derlying asset(s). Here the reduction in monitor­
ing costs occurs, however, because a borrower’s
choice to fund via a collateralized loan sale signals
project quality and eliminates investors’ need to
screen. Alternatively, in Boyd and Smith, the diver­
sification associated with borrowers’ pooling of
claims facilitates delegated monitoring.

Asset-Backed
Lending as a
Means of Securing
Credit Quality
James (1988) presents a model that characterizes
a different rationale for asset-backed lending. Spe­
cifically, he emphasizes that loan sales with re­
course are a means of obtaining lower funding
costs by separating the cash flows on a particular
claim from those to the unsecured claimants fund­
ing a bank’s balance sheet. He argues that loan
sales with recourse are equivalent to a firm issuing
secured debt. Because banks are prohibited from
issuing secured claims, loan sales with recourse
are likely to occur for the same reasons that firms
issue secured debt.
Firms issue secured debt in part to mitigate an
underinvestment problem that may occur with
fixed-rate bond contracts. If firms with outstand­
ing debt are constrained to raise funds by issu­
ing additional unsecured claims, they may forgo
financing certain new profitable projects— in
particular, projects that would reduce the overall risk of the firm’s portfolio. This occurs because firms cannot reprice existing unsecured claims to reflect accurately changes in the risk of their portfolios due to new asset acquisitions.
Thus, if a firm chooses to issue unsecured claims
to finance a project that reduces portfolio risk,
existing bondholders receive a wealth transfer
from stockholders as the risk-adjusted value of
their claims increases.
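This wealth transfer can be made concrete with a stylized two-state example. The payoffs, the debt face value, and the project numbers below are all hypothetical illustrations (they do not appear in James's paper), but they show why equityholders may rationally forgo a positive-NPV, risk-reducing project when it must be financed with unsecured claims:

```python
# Hypothetical two-state sketch of the underinvestment problem
# that motivates secured debt. Zero discount rate for simplicity.

p_good = 0.5                        # probability of the good state
assets = {"good": 150.0, "bad": 50.0}
debt_face = 80.0                    # existing fixed-rate unsecured debt

def bond_value(a):
    """Expected payoff to existing bondholders."""
    return sum(p * min(a[s], debt_face)
               for s, p in (("good", p_good), ("bad", 1 - p_good)))

def equity_value(a):
    """Expected residual payoff to equityholders."""
    return sum(p * max(a[s] - debt_face, 0.0)
               for s, p in (("good", p_good), ("bad", 1 - p_good)))

# A safe project: cost 40, sure payoff 45 (NPV = +5), financed by
# the owners and added to the common (unsecured) pool of assets.
cost, payoff = 40.0, 45.0
new_assets = {s: v + payoff for s, v in assets.items()}

transfer = bond_value(new_assets) - bond_value(assets)
equity_change = equity_value(new_assets) - equity_value(assets) - cost

print(f"NPV of project: {payoff - cost:+.1f}")          # +5.0
print(f"Transfer to old bondholders: {transfer:+.1f}")  # +15.0
print(f"Net change in equity value: {equity_change:+.1f}")  # -10.0
```

Because the safe payoff makes the old debt riskless, existing bondholders capture a 15-unit gain that exceeds the project's 5-unit NPV, leaving equityholders 10 units worse off. If instead the new assets were pledged only to the new financiers (secured debt), old bondholders would gain nothing and the project would be undertaken.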
James refers to the underinvestment problem
that motivates the use of secured debt as the col­
lateralization hypothesis. The key to this problem
is that firms are locked into a fixed cost of funds
on their liabilities. With secured debt, the existing
bondholders do not have access to the newly ac­
quired assets should the firm declare bankruptcy.

Since regulations restrict banks and thrifts from issuing secured debt, loan sales with recourse—
by separating the funding of new projects from
that of a firm’s existing investments— can mitigate
a potential underinvestment problem.
Banks cannot issue secured debt, so the ex­
tent to which they fund their portfolios by issu­
ing term liabilities such as certificates of deposit
(CDs) may motivate them to finance certain as­
sets off-balance-sheet with some form of
recourse. Still, James’ model may be limited as
an explanation for asset-backed lending by
banks and thrifts, because the bulk of their
liabilities are short-term deposits. Such liabilities
have a return that can be readjusted to reflect
the risk of a bank’s portfolio after new assets are
acquired. Thus, any wealth transfers from bank
equityholders to depositors (in an unregulated
environment) could be mitigated by readjusting
short-term deposit rates.

Regulatory Factors
and Asset-Backed
Lending
In reality, the fact that banks are insured complicates this analysis: as the residual claimant of a bank’s assets, the FDIC, not insured depositors, bears the credit risk of these assets. If capital requirements
and deposit insurance premiums were correctly
priced (and effectively repriced) to reflect a bank’s
risk, the incentives for banks to engage in asset-backed lending would be reduced. To the extent,
however, that the FDIC does not price the provi­
sion of insurance to reflect a bank’s risk accurately,
James’ model motivates asset-backed lending.
The interpretation here is that safer assets will be
funded off-balance-sheet to maximize the value of
FDIC insurance to bank equityholders.
The models in both James and Greenbaum
and Thakor explain why firms would provide
credit enhancements for their off-balance-sheet
funding. In reality, these enhancements are gen­
erally issued by a third party— to some degree
because of regulations. This is especially true
for bank loan sales, as loans sold with recourse
are viewed as on-balance-sheet assets in the as­
sessments of capital requirements. In spite of these
limitations, however, these frameworks are useful
in characterizing a widely accepted rationale for
the proliferation of nonmortgage securitization: to
separate the securitized assets from the general
portfolios of financial intermediaries.


The proliferation of asset-backed lending is
merely one way that the financial scene is chang­
ing. As evidenced by nonbank activities in this
market, securitization is both the result of techno­
logical innovations in information production and
an artifact of banking regulations. In this paper,
we have focused primarily on models that formal­
ize market-based reasons for asset-backed lend­
ing. However, the existence of government
regulations, in tandem with the provision of the
federal safety net, is widely viewed as a significant
factor impacting both the volume of securitization
and the types of loans securitized.

IV. Regulatory
Incentives for
Securitization
Regulatory models of asset-backed lending gen­
erally focus on how regulations impact a bank’s
choice of funding. For example, Benveniste and
Berger (1987) argue that credit enhancements
for asset-backed securities allow banks to maxi­
mize the value of deposit insurance by issuing
claims that are senior to those of the FDIC. Although their argument is similar to that posited by James, James argues that this adverse tendency is offset by the likelihood that loan sales backed by standby letters of credit (SLCs) mitigate the underinvestment problem.
The incentive to shift risk to the FDIC is also
limited by the marketplace. The creditworthiness
of both the loans being securitized and the issuer
of credit enhancements affects the rating of a pool.
Thus, banks that issue SLCs are generally lower-risk institutions.
Other regulatory incentives for banks to en­
gage in asset-backed lending are the regulatory
taxes associated with on-balance-sheet funding.
For example, capital requirements— the mini­
mum legal fraction of an investment that must
be held as equity capital— are popularly viewed
as the primary regulatory incentive for banks
and thrifts to sell assets. These requirements are
designed to protect the FDIC and uninsured
depositors in the case of bank failure.
Regulation-based models, however, empha­
size that if capital requirements on a particular
class of loans are greater than merited by the
inherent risk of the claims, banks will have an
incentive to either sell or securitize the loan.14
That is, there will be an incentive to move a loan
from on-balance-sheet, where it is subject to capital
requirements, to off-balance-sheet, where it is not.

■ 14 See Pennacchi (1988).

This will be the case when the cost of the reg­
ulated equity buffer exceeds the cost of market­
ing the claims.
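The sell-or-hold condition in the preceding sentence is a simple cost comparison. The back-of-the-envelope sketch below uses hypothetical rates (the 8 percent capital ratio, funding costs, and marketing cost are illustrative assumptions, not figures from the text):

```python
# Back-of-the-envelope sketch of the sell-or-hold condition:
# securitize when the regulated equity buffer costs more than
# marketing the claims. All rates are hypothetical.

def excess_capital_cost(loan, capital_ratio, cost_equity, cost_deposits):
    """Extra annual funding cost of holding the loan on-balance-sheet,
    relative to all-deposit funding, due to the required equity buffer."""
    return loan * capital_ratio * (cost_equity - cost_deposits)

def marketing_cost(loan, marketing_rate):
    """Annualized cost of credit enhancement, underwriting, and
    distribution if the loan is moved off-balance-sheet."""
    return loan * marketing_rate

loan = 1_000_000.0
buffer = excess_capital_cost(loan, capital_ratio=0.08,
                             cost_equity=0.15, cost_deposits=0.05)
selling = marketing_cost(loan, marketing_rate=0.005)

print(f"equity-buffer cost: {buffer:,.0f}")   # 8,000
print(f"marketing cost:     {selling:,.0f}")  # 5,000
print("securitize" if buffer > selling else "hold on-balance-sheet")
```

With these numbers the regulatory tax on on-balance-sheet funding (8,000) exceeds the marketing cost (5,000), so the loan is securitized; if the capital requirement matched the loan's inherent risk, the wedge (and the incentive) would shrink.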
Two other regulatory taxes that have been cited
as potential inducements for asset-backed lending
are fractional reserve requirements and flat-rate
FDIC insurance premiums on deposit liabilities.
These assessments are viewed as raising the cost
of deposit funding, thus encouraging depository
institutions to fund loans off-balance-sheet. Yet,
securitization has continued to expand in spite of
decreases in the reserve requirements set by the
Board of Governors of the Federal Reserve Sys­
tem. In addition, to the extent that deposit in­
surance is subsidized, flat-rate deposit insurance
premiums are unlikely to be a major factor in the
growth of securitization. For example, if the premiums charged to insure the deposits funding relatively risky loans allow an institution to obtain funds more cheaply than it could elsewhere, then deposits remain a relatively cheap source of finance despite the other costs of deposit funding. Because deposit insurance
premiums are currently not risk based, they may
still have the undesirable effect of causing banks
to securitize their safest and most liquid loans.

V. Conclusion
Although market-based reasons are an impor­
tant factor driving off-balance-sheet lending,
this type of lending may still impact the risk of
lending that is funded on banks’ balance sheets.
For example, Greenbaum and Thakor’s model
predicts that the safest assets will be securitized
while the risky assets will be held on-balance-sheet. Regulations provide similar incentives for
securitizing the safest assets. Because these fac­
tors can clearly impact the exposure of the
FDIC, policymakers are understandably con­
cerned about the rapid growth of this practice.
In its role as an insurer, the government aims
to maintain the solvency of the insurance fund
by regulating deposit insurance premiums and
capital requirements. But it is precisely these as­
sessments that can affect the risks undertaken
by depository institutions, as regulatory costs
create an incentive for banks to shrink their
balance sheets by securitizing loans.
However, the trend toward asset-backed
lending should not be viewed as either a boon
for nonbank competitors or the bane of the FDIC.
Depository institutions can earn fee income for
participating in various dimensions of the secu­
ritization process. Moreover, with prudent
regulatory supervision of banks’ off-balance-sheet activities, asset-backed lending can mitigate the rising costs of the federal safety net as it
reduces the share of credit funded on the books
of depository institutions. Thus, securitization is
better viewed as an important innovation in the
financial sector— one that allows new suppliers of
credit to enter the market and existing ones to inter­
mediate credit more efficiently.

References

Benveniste, Lawrence M., and Allen N. Berger. “Securitization with Recourse: An Instrument that Offers Uninsured Bank Depositors Sequential Claims,” Journal of Banking and Finance, vol. 11, no. 3 (September 1987), pp. 403-24.

Bernanke, Ben, and Mark Gertler. “Banking and Macroeconomic Equilibrium,” in William A. Barnett and Kenneth Singleton, eds., New Approaches to Monetary Economics. New York: Cambridge University Press, 1987, pp. 89-111.

Bhattacharya, Sudipto, and Anjan V. Thakor. “Contemporary Banking Theory,” Indiana University Discussion Paper 504, November 1991.

Boyd, John H., and Bruce D. Smith. “Securitization and the Efficient Allocation of Investment Capital,” Federal Reserve Bank of Minneapolis, Working Paper 408, May 1989.

Carlstrom, Charles T., and Katherine A. Samolyk. “Loan Sales as a Response to Market-Based Capital Constraints,” Federal Reserve Bank of Cleveland, Working Paper, 1993 (forthcoming).

Diamond, Douglas W. “Financial Intermediation and Delegated Monitoring,” Review of Economic Studies, vol. 51, no. 3 (July 1984), pp. 393-414.

Gertler, Mark. “Financial Structure and Aggregate Economic Activity: An Overview,” Journal of Money, Credit, and Banking, vol. 20, no. 3 (August 1988), pp. 559-88.

Gorton, Gary B., and Joseph G. Haubrich. “The Loan Sales Market,” in George G. Kaufman, ed., Research in Financial Services: Private and Public Policy, vol. 2 (1990), pp. 85-135.

Greenbaum, Stuart I., and Anjan V. Thakor. “Bank Funding Modes: Securitization versus Deposits,” Journal of Banking and Finance, vol. 11, no. 3 (September 1987), pp. 379-401.

James, Christopher. “The Use of Loan Sales and Standby Letters of Credit by Commercial Banks,” Journal of Monetary Economics, vol. 22, no. 3 (November 1988), pp. 395-422.

McCulloch, J. Huston. “Misintermediation and Macroeconomic Fluctuations,” Journal of Monetary Economics, vol. 8, no. 1 (July 1981), pp. 103-15.

Modigliani, Franco, and Merton H. Miller. “The Cost of Capital, Corporation Finance, and the Theory of Investment,” American Economic Review, vol. 48, no. 3 (June 1958), pp. 261-97.

Mookherjee, Dilip, and Ivan Png. “Optimal Auditing, Insurance, and Redistribution,” Quarterly Journal of Economics, vol. 104, no. 2 (May 1989), pp. 399-415.

Pavel, Christine. “Securitization,” Federal Reserve Bank of Chicago, Economic Perspectives, vol. 10, no. 4 (July/August 1986), pp. 16-31.

Pennacchi, George G. “Loan Sales and the Cost of Bank Capital,” Journal of Finance, vol. 43, no. 2 (June 1988), pp. 375-96.

Ramakrishnan, Ram T.S., and Anjan V. Thakor. “Information Reliability and a Theory of Financial Intermediation,” Review of Economic Studies, vol. 51, no. 3 (July 1984), pp. 415-32.

Samolyk, Katherine A. “Portfolio Risks and Bank Asset Choice,” Federal Reserve Bank of Cleveland, Working Paper 8913, October 1989a.

______. “The Role of Banks in Influencing Regional Flows of Funds,” Federal Reserve Bank of Cleveland, Working Paper 8914, November 1989b.

Townsend, Robert M. “Optimal Contracts and Competitive Markets with Costly State Verification,” Journal of Economic Theory, vol. 21, no. 2 (October 1979), pp. 265-93.

Williamson, Stephen D. “Costly Monitoring, Financial Intermediation, and Equilibrium Credit Rationing,” Journal of Monetary Economics, vol. 18, no. 2 (September 1986), pp. 159-79.

______. “Costly Monitoring, Loan Contracts, and Equilibrium Credit Rationing,” Quarterly Journal of Economics, vol. 102, no. 1 (February 1987), pp. 135-45.




First Quarter
Working Papers
Current Working Papers of the Cleveland Federal Reserve Bank are listed in each quarterly issue of the Economic Review. Copies of specific papers may be requested by completing and mailing the attached form below.

■ 9217
Commitment as Irreversible Investment
by Joseph G. Haubrich and Joseph A. Ritter

■ 9218
The Determinants of Airport Hub Locations, Service, and Competition
by Neil Bania, Paul W. Bauer, and Thomas J. Zlatoper

■ 9219
Cross-Lender Variation in Home Mortgage Lending
by Robert B. Avery, Patricia E. Beeson, and Mark S. Sniderman

■ 9301
Sharing with a Risk-Neutral Agent
by Joseph G. Haubrich

■ 9302
HRM Policy and Increasing Inequality in a Salary Survey
by Erica L. Groshen

Single copies of individual papers will be sent free of charge to those who request them. A mailing list service for personal subscribers, however, is not available.

Institutional subscribers, such as libraries and other organizations, will be placed on a mailing list upon request and will automatically receive Working Papers as they are published.

Please complete and detach the form below and mail to:
Research Department
Federal Reserve Bank of Cleveland
P.O. Box 6387
Cleveland, Ohio 44101




Check item(s)
requested

Please send the following Working Paper(s):

□ 9217

□ 9219

□ 9218

□ 9301

□ 9302

Send to:
Please print

Name

Address

City

State

Zip

Economic Review

■ 1992 Quarter 1
Recent Behavior of Velocity: Alternative Measures of Money
by John B. Carlson and Susan M. Byrne

The Causes and Consequences of Structural Changes in U.S. Labor Markets: A Review
by Randall W. Eberts and Erica L. Groshen

An Introduction to the International Implications of U.S. Fiscal Policy
by Owen F. Humpage

■ 1992 Quarter 2
Intervention and the Bid-Ask Spread in G-3 Foreign Exchange Rates
by William P. Osterberg

An Ebbing Tide Lowers All Boats: Monetary Policy, Inflation, and Social Justice
by David Altig

Sluggish Deposit Rates: Endogenous Institutions and Aggregate Fluctuations
by Joseph G. Haubrich

■ 1992 Quarter 3
Comparing Central Banks’ Rulebooks
by E.J. Stevens

Commodity Prices and P-Star
by Jeffrey J. Hallman and Edward J. Bryden

Forbearance, Subordinated Debt, and the Cost of Capital for Insured Depository Institutions
by William P. Osterberg and James B. Thomson

■ 1992 Quarter 4
White- and Blue-Collar Jobs in the Recent Recession and Recovery: Who’s Singing the Blues?
by Erica L. Groshen and Donald R. Williams

Assessing the Impact of Income Tax, Social Security Tax, and Health Care Spending Cuts on U.S. Saving Rates
by Alan J. Auerbach, Jagadeesh Gokhale, and Laurence J. Kotlikoff

History of and Rationales for the Reconstruction Finance Corporation
by Walker F. Todd