
Opinions expressed in the Economic Review do not necessarily reflect the views of the
management of the Federal Reserve Bank of San Francisco, or of the Board of Governors of
the Federal Reserve System.
The Federal Reserve Bank of San Francisco's Economic Review is published quarterly by the Bank's
Research and Public Information Department under the supervision of John L. Scadding, Senior Vice
President and Director of Research. The publication is edited by Gregory J. Tong, with the assistance of
Karen Rusk (editorial) and William Rosenthal (graphics).
For free copies of this and other Federal Reserve publications, write or phone the Public Information
Department, Federal Reserve Bank of San Francisco, P.O. Box 7702, San Francisco, California 94120.
Phone (415) 974-3234.


I. Structure and Performance: Some Evidence from California Banking .......... 5
Randall J. Pozdena

II. Off Balance Sheet Risk in Banking: The Case of Standby Letters of Credit .......... 19
Barbara Bennett

III. Arbitrage and Efficient Markets Interpretations of Purchasing Power Parity: Theory and Evidence .......... 31
John Pippinger

I. Structure and Performance: Some Evidence from California Banking

Randall J. Pozdena*
The "Structure-Performance Hypothesis" has been the subject of
controversy for 35 years. One aspect of this controversy is the difficulty of
measuring the economic performance of firms. In this paper, data on the
rate of bank entry in California banking markets are used in a new, indirect
test of the hypothesis. The results are consistent with the idea that
increased concentration is associated with increasingly high profits.

Anti-trust policy toward the banking industry
rests partly upon the premise that increased concentration of market share causes a deterioration in
the performance of banking firms. As concentration
increases in a market, according to the premise, so
too does stable, anti-competitive conduct (such as
overt or tacit collusion). Known in the industrial
organization literature as the structure-performance
hypothesis, this premise has been debated hotly for
over thirty years on both theoretical and empirical
grounds.
This paper empirically re-examines the link
between structure and performance using an indirect
approach based on data from California banking markets. In
particular, we study the relationship between the
structure of California banking markets and the rate
of bank entry. Although entry is not per se a performance measure, its study provides some insight into
the relationship between structure and performance
without many of the conceptual and measurement
problems encountered in using direct performance
measures such as profits and prices.
Our results are consistent with the contention that
increased concentration is associated with

increasingly high profits. In addition, we find that at
any given level of concentration, entry rates are
higher in markets with a large number of suppliers.
This latter finding is consistent with the notion that
entry-limiting pricing discipline is difficult to sustain when the number of producers becomes large.
These findings, thus, reinforce the arguments that
support anti-trust policy. As we discuss below,
however, such evidence of a structure-performance
link is only one step in the logic that supports a
policy of active manipulation of market structure to
improve market efficiency.
The remainder of the paper is organized as follows. First, we discuss the origins of the formal
structure-performance hypothesis and the various
theoretical and empirical criticisms of its study and
use in anti-trust policy. Second, we discuss the
rationale of structure-entry tests as an alternative to
conventional structure-performance studies. After
discussing the data and empirical findings, the
paper concludes with a summary of the findings and
their policy implications.

I. The Structure-Performance Relationship
The notion that market structure influences performance originates from observations about the
theory of the firm. In a world characterized by pure
and perfect competition, for example, theory argues
that firms in the marketplace will perform in a
socially desirable fashion, producing where price
equals marginal cost and enjoying only "normal"
profits. One of the attributes of the perfect competition model is that production is performed by many
firms, each too small to influence market prices.
Thus, in the classic model of competition, low
concentration of market share is associated with
socially desirable performance.

* Senior Economist, Federal Reserve Bank of San Francisco.
In contrast, under circumstances of pure monopoly - where there is, by definition, only one
producer and, thus, complete concentration of market share - socially undesirable performance
results. Under such performance, price exceeds
marginal cost and leads to sub-optimal production
and "excess" profits. In this case, concentration of
market share is associated with undesirable performance.
Understandably, the implications of these two
special models of the firm - perfect competition
and pure monopoly - spawned the notion that
markets displaying an intermediate level of concentration might, therefore, perform in a manner
between these extremes. Since most markets are not
characterized by the features of the simple perfect
competition or monopoly models, such a notion is
of practical interest. Economic theory, however,
does not articulate clearly the association between
concentration of market share and performance in
imperfectly competitive models.
The notion that a monotonic relationship might
exist between market share concentration and performance is thus a purely empirical one. It was first
advanced by the economist Joe S. Bain in the late
1940s.2 He hypothesized that the ability of firms to
engage in overt or covert collusive behavior
increases as the concentration of market share
increases. In the process, the likelihood that the
firms would display anti-competitive or quasi-monopolistic performance also rises. Bain first
tested this hypothesis in 1951 using reported profits
of the firm as a measure of performance.3 He found
that increased concentration, indeed, was associated with higher profit rates, and this result started
the structure-performance controversy.

Criticisms of Structure-Performance Studies

Structure-performance studies are controversial
for a number of reasons. First, discovering an association between market concentration and performance does not establish market concentration as
the cause of the observed performance and, thus,
does not by itself provide a rational basis for a policy
of manipulating market structure to improve performance.
It has been argued, for example, that the higher
profits observed to be enjoyed by large firms in
concentrated markets are the result of economies of
scale and the consequent superior efficiency of large
firms. This claim seems particularly relevant in the
context of the early structure-performance studies,
which examined a cross-section of industries displaying different market share concentration levels.
The firms in such a sample undoubtedly faced
different technological and demand conditions that
had the potential of systematically affecting a performance measure such as profit, as well as a structural characteristic such as market share concentration.4
For studies within an industry, such as the
numerous structure-performance studies of the
banking industry, this particular criticism is less
likely to be relevant. The possibility remains,
however, that a third factor positively related to both
concentration and price or profit performance measures statistically links concentration and performance, giving the appearance of a direct, causal link
when none exists.5
A second major criticism of structure-performance studies is that the structure-performance
notion hypothesizes a relationship between structure and inefficient firm behavior, but most studies
have used performance measures that may not
unambiguously detect such inefficiency. The use of
published data on profits to proxy true economic
profits, for example, is notoriously flawed. In an
industry such as banking, where accounting relies
heavily on book valuation of assets and liabilities,
reported net income flows, rates of return on assets,
and net worth are of dubious empirical usefulness.
Moreover, expense-preference theory suggests that
firms enjoying market power may express inefficiency by indulging in objectives other than maximizing shareholder profit. Such behavior would
argue against finding a consistent relationship
between structure and measured profits.6
Similar criticisms have been leveled against the


use of price as a performance variable. In most
industries, including banking, the products offered
by firms are not homogeneous, but rather vary in
quality, attendant service characteristics and other
attributes. In banking, for example, the proximity of
branching facilities, availability of automated teller
machine services, and many other service attributes
are relative dimensions of the "price" of deposit or
loan services (indeed, prior to the elimination of
deposit rate regulation, this was the only dimension
of competition for certain types of bank liabilities).
If the non-price attributes of bank products vary
systematically with concentration because of their
mutual association with a third factor, spurious
relationships between concentration and price performance may appear when none exist, or no relationship may be observed when one, in fact, does
exist.
Finally, structure-performance studies have been
criticized because of the difficulty in properly defining the relevant variables and controlling for other
possible influences.7 Defining an appropriate "market" and identifying its constituent producers, for
example, certainly involves some arbitrariness.
Similarly, alternative measures of concentration
exist,8 with little theory to guide choosing among
them. These criticisms strike this author as somewhat nihilistic and properly could be directed at
virtually all empirical work.

Entry and Market Structure

Almost all of the more than 200 structure-performance studies of the banking industry have
employed profit or price measures of performance.8
Because of the potential problems of systematic bias
pointed out above, it is worth considering alternative
means of identifying inefficient performance. In this
paper, we examine the relationship between rates of
new entry and market share concentration.
Although the logic of this relationship is itself not
unassailable, entry can be measured more accurately than other factors required of direct structure-performance studies.
We thus will be focusing on the relationship
between entry activity and concentration. The logic
of the test is fairly straightforward. If market share
concentration allows incumbent firms to enjoy
abnormally high profits, new entry into the affected
marketplace would be expected. Indeed, at least in
simple formulations of industry behavior, it is entry
that is expected to bring discipline to the marketplace and to ensure that production is expanded
to the point where price equals marginal cost. For
this concept to be useful in examining the notion of a
link between structure and performance, however,
certain other assumptions and qualifications must
be made.
First, it must be assumed that new entrants cannot
be mobilized instantaneously. If entry could be
effected instantaneously, market structure could be
altered instantaneously, and one would not observe
variations in market structure of any importance in
markets that were otherwise identical. Thus, no
relationship between market structure and entry
rates would be observable.9 Finding a positive relationship between market concentration and entry
does not, however, identify for us the process that
permits high levels of concentration to be maintained. We can, however, structure the model to test
for the simple possibility that concentration persists
because of lagged adjustment. In particular, we define
E*(t) = E*(X(t)),
where E*(t) is the desired rate of entry if such entry
could be effected immediately in period t, and X(t)
is a vector of variables influencing that rate.
The response of actual entry E(t) to X(t) is likely
to be influenced by the regulatory time lags and
general adjustment costs that confront a new
entrant. Thus, the actual rate of entry in any given
period is likely to depend upon the past pattern of
entry in addition to variables influencing the
"desired" or target rate of entry, E*(t). The actual
entry relationship therefore might be written as
E(t) = E(X(t), E(t-1), E(t-2), E(t-3), ...).
Because of data limitations, we are unable to examine such a generalized model for the adjustment of
E(t) to conditions in previous periods. Our studies
employ only E(t-1) to model the influence of
previous economic states on current entry. Inclusion
of a lagged dependent variable in a regression equation also may serve to proxy for the influence of
variables omitted from the arguments of the equation.
Second, although finding a positive relationship
between concentration and entry in such a model
would be consistent with the notion that concentra-

tion is associated with excess profits, a converse
finding offers no information. The absence of a
relationship between concentration and entry could
arise because the incumbent firms in a concentrated
market, although they enjoy excessive profits, are
able to erect impenetrable barriers to entry. Alternatively, the firms that constitute the concentrated
market may be especially efficient and, although
they enjoy excess profits, able to maintain price at or
below the level needed to support an entrant of
average efficiency.10 Unfortunately, therefore, the
absence of an observed relationship between concentration and entry does not necessarily disprove
the existence of a relationship between concentration and profits.
Finally, it should be emphasized that finding a
positive relationship between market share concentration and entry need not imply that active
intervention to deconcentrate market structure will
improve efficiency. Improving efficiency would
require an ability to define optimal entry from the
standpoint of economic efficiency - something
that cannot be done by this, or probably any, structure-performance study. Whether entry is sub- or
supra-optimal has been argued to depend upon
specific demand and cost characteristics.11
In summary, excess profits should induce net
entry into a banking market. To the extent that
market structural factors are related to profit rates,
therefore, entry and market structure may be associated. No association will be observed, however, if
the market is in entry equilibrium at all times, that
is, when excess profits are extinguished immediately by the influence of actual or threatened entry.

The Determinants of Entry

The simple theory of the firm provides the argument that excess profits observed within an industry
may induce the net entry of new firms into a marketplace.12 However, the presence or absence of
excess profits may not be the only factor influencing
entry. We turn here to a discussion of two possibly
moderating influences on entry: growth in demand
(that is, the "scale" of the market) and entry barriers.
Growth in demand or in the scale of a marketplace
may or may not result in net new entry. If cost
conditions are such that the optimal size of a firm in
the marketplace is indeterminate (such as under
conditions of constant returns to scale) or favors
large-scale firms (in an environment of increasing
returns to scale), then current firms may meet the
amplified demand for industry output by expanding
their scale of production. If, on the other hand,
increased firm scale is associated with decreased
returns, growth in demand may be associated
positively with the rate of net new entry. In the
studies of entry rates reported below, various demographic and economic scalars are employed to isolate this effect.
A second factor influencing entry behavior is
itself a market structure characteristic, namely,
entry conditions. The ease or cost of entry can be
influenced by numerous factors, including bank
charter regulations and land use procedures affecting the location of commercial activity. To the extent
such factors dominate the entry decision process,
they will also obscure any observation of the
hypothesized links between concentration and profits
and between profits and entry.
Similarly, market share concentration may be
associated not only with the enjoyment of abnormal
profits, but also with efforts by incumbent firms to
accumulate power for the purpose of retarding new
entry. A common proposition along these lines is
that the existence of economies of scale not only
predisposes a market to display a concentrated
structure, but also confers on incumbent firms the
ability to retard entry.13 It is not necessary to replay
the debate here, but it is worth noting that if entry
conditions do deteriorate as concentration
increases, this condition also would tend to bias
studies toward the finding that concentration has no
effect on profits and entry.
One of the inherent propositions in the structure-performance hypothesis, however, is that non-atomistic market structures may permit covert or overt
coordinated pricing behavior that has the effect of
limiting entry. To the extent that entry limit pricing
(or other conduct that retards entry) is facilitated by
the lack of numerous rivals, entry rates at any given
level of industry profit should be higher in markets
with a greater number of existing rivals. Thus, in
addition to anticipating a positive relationship
between concentration and entry, a positive relationship between the number of institutions in a market
and the rate of entry also should be anticipated.

Finally, structure-performance studies using price as the performance measure often are criticized
(probably fairly) for ignoring differences in the
qualitative aspects of the products offered by different-sized firms and firms in different markets.
The advantage of studying the effect of structure on
entry rather than prices is that we need worry less

about variation in service quality as long as all firms
in the market potentially can offer the same product
or service quality. Thus, for testing the hypothesis
that high market share concentration may result in
abnormal profits that attract entry, it makes little
difference if the actual mix of products or quality of
service varies within the sample.
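The lagged-adjustment idea behind the structure-entry test, in which actual entry moves only gradually toward the desired rate E*(t) because of regulatory and adjustment lags, can be sketched numerically. The adjustment speed, starting point, and target rate below are illustrative assumptions, not estimates from this paper:

```python
# Partial-adjustment sketch: actual entry E(t) closes only a
# fraction `speed` of the gap between last period's entry and
# the desired rate E*, so high concentration (and the low entry
# associated with it) can persist for several periods.

def entry_path(e_star, e0, speed, periods):
    """E(t) = E(t-1) + speed * (E* - E(t-1))."""
    path = [e0]
    for _ in range(periods):
        path.append(path[-1] + speed * (e_star - path[-1]))
    return path

# Hypothetical market: desired two-year entry rate of 6 percent,
# starting from 1 percent, closing half the gap each period.
path = entry_path(0.06, 0.01, 0.5, 6)
print([round(e, 3) for e in path])
```

Under this process, entry rises monotonically toward E* but never overshoots it, which is the sense in which the lagged dependent variable E(t-1) carries information about current entry.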

II. California Banking Markets
The basic unit of observation in our study is a
banking market. We focused on activity in California banking and constructed measures of the rate of
entry of banking institutions and variables describing the structural and demographic characteristics
of the banking markets in the state.
Before proceeding to a more detailed description
of the data employed, it is worthwhile to review the
rationale for focusing on the California market and
the issues that arise in defining the variables. California banking operates in an environment particularly conducive to exploring the concentration-entry hypothesis. First, as mentioned above, California has long had a policy of unlimited intrastate
branching, and state banking policy has permitted
vigorous entry. In 1970, there were 203 commercial
banks; by 1980, this number had increased to 311.14
California's economic geography also provides the
variation in economic conditions and bank structure
necessary to test the structure-entry hypothesis.
Indeed, the study of California banking is, in terms
of sheer scale of banking activity, analogous to
studying the banking system of a medium-sized
western country. (California is very similar to Canada, for example, in population and growth levels of
economic activity.)
Finally, the thrift industry in California - which
must at least be considered a potential rival to the
commercial banking industry - is relatively
homogeneous. It consists almost entirely of savings
and loan associations, with no mutual savings banks
and few thrift and loan companies.
California, although an extremely large economy,
abuts rural, desert or mountain areas, the ocean, or the
country of Mexico. Thus, we need worry less about
border competition effects and interstate differences
in regulatory policy on banking in California than in
other important banking markets such as New York

and Pennsylvania, which are adjacent to still other
important banking markets.
The banking industry in California is considerably concentrated in all reasonable market geographies. In 1974, for example, the Herfindahl Index at
the state level was over 2,500 within commercial
banking.15 The deposit market share of the four
largest banks in California has hovered near 60
percent throughout the study period.
Banking also is concentrated at the local market
level. The Herfindahl Index within California
counties has exceeded 2,000 throughout the study
period. The United States Department of Justice
presently considers any Herfindahl Index in excess
of 1,800 to signify a concentrated market.
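The index used throughout is the Herfindahl (Herfindahl-Hirschman) Index: the sum of squared percentage market shares, ranging from near 0 for an atomistic market to 10,000 for a pure monopoly. A minimal sketch of the computation (the five-bank county shares below are hypothetical, not drawn from the paper's sample):

```python
# Herfindahl Index from deposit market shares. Shares are in
# percent, so the index runs from near 0 (atomistic) to 10,000
# (monopoly); the thresholds cited in the text (1,800; 2,000;
# 2,500) use this convention.

def herfindahl(deposits):
    """HHI for one market: sum of squared percentage shares."""
    total = sum(deposits)
    return sum((100.0 * d / total) ** 2 for d in deposits)

# Hypothetical five-bank county with shares 40, 30, 15, 10, 5.
hhi = herfindahl([40, 30, 15, 10, 5])
print(round(hhi))  # 2850: "concentrated" by the 1,800 cutoff
```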
Chart 1 presents additional detail concerning the
distribution of concentration in commercial banking in California counties.

Chart 1
Market Concentration in California Counties, 1980
[Figure: percent of counties (vertical axis, 0 to 45 percent) falling in each Herfindahl Index range.]

Defining Banking Markets
The preceding statistics on the geographic concentration of banking activity in California raise the
important issue of how to define the appropriate
market geography for this study. Such definition has
been widely debated both among economists and
among regulators and the judiciary.16 From a banking structure standpoint, the market geography
should be defined in such a way that the aggregate of
economic forces impinging upon the banks within
that geography dominates the forces exerted upon
them by institutions outside that geography. This, in
turn, clearly depends upon the accessibility of
various products to consumers, which, in turn,
determines the extent to which the products offered
by various institutions are close substitutes. Various
investigators therefore have used market areas
defined on the basis of commute patterns, shopping
patterns, and residential densities, and some have even
proposed complex lexicographic schemes.17
In this paper, our choice of market definition is a
practical one compelled by the availability of economic and demographic data necessary to test for
the effects of growth in market scale as discussed
above. In particular, we must employ counties (or
aggregates of counties). We do not deny the
arbitrariness of this definition, but hasten to point
out that California - like many states - implements land use regulation through county general
plans. There may be, therefore, fortuitous relationships between the county geography and the geography implied by employment, commute or residential land use patterns. Indeed, as arbitrary as the
political subdivision may be in defining banking
markets, it has survived structure-performance
studies that compared it to alternatives.18
Our approach resulted in the definition of 58
markets in California, although our markets are
large relative to the geographic market definitions
employed by investigators in Eastern states. Minor
variations on the county market definition were
explored, such as employing SMSA definitions in
metropolitan areas and eliminating extremely large
counties such as San Bernardino County from the
sample in alternative regressions. Since these variations did not yield important differences in the
findings, the following discussion is based only on
the use of county measures of market areas.

Trends in Entry and Concentration
As Chart 2 shows, there has been vigorous entry
by new institutions in California county banking
markets throughout the study period and, consistent
with this, there has been a secular decline in concentration as well. In the third panel of Chart 2, the
entry rate - defined as the net number of new
institutions entering a county market over a two-year period divided by the number of institutions in
the base year - is graphed. As the graph indicates,
the rate of net new entry of institutions has fluctuated between slightly above 1 percent and over 7
percent on an annualized basis over the study
period. Because of this significant variation, it was
important to test the hypothesis using a series of
cross-sections to ascertain the stability of the relationship, if any, between concentration and entry
rates.
Chart 3 depicts the distribution of banking
institutions among counties. Most of the counties
(over 35) have 9 or fewer banking institutions.
Conversely, only about a dozen counties have 20 or
more institutions in them. To the extent that the
number of institutions in a marketplace may affect

Chart 2
Measures of Selected Variables for California, 1974-1980
[Figure: time-series panels for 1974-1980; the third panel graphs the net entry rate described in the text.]

competition independent of the Herfindahl measure
of concentration, it is important to note the wide
disparity in bank populations by county. We address
this issue in the empirical work below.

Trends in Bank Size

There also is wide variation in the rates of growth
of individual banks within the state between 1972
and 1980. The average annual rates of growth for
banks that were in the sample in 1972 and remained
in the sample in 1980 were highly variable. Moreover, the growth rates bore no statistical relation to
bank size (measured in this context by total
deposits).21 This finding, interestingly, is consistent
with Gibrat's stochastic model of market share
concentration. Gibrat argued that if rates of growth
of firms in a marketplace were distributed randomly
(independent of firm size), this stochastic process
alone would be sufficient to generate a non-uniform
distribution of market share among firms much like
the pattern observed in most marketplaces. Namely,
most of the market would be served by a few large
firms, but many small firms would coexist.22
If Gibrat's hypothesis explains the market share
concentration observed in California banking markets, the interpretation of our study of concentration
and entry rates may be less ambiguous, since
Gibrat's hypothesis militates against the argument
that economies of scale or permanent differences in
the efficiency of individual firms explain the market
share supremacy of certain firms over others. Thus,
if we find a positive relationship between concentration and entry, it suggests that concentration per se
affords incumbent firms some protection from
profit-extinguishing competitive behavior.

Chart 3
Distribution of Number of Banks in County, 1980
[Figure: number of counties (vertical axis, 0 to 40) by number of banks in the county.]

III. Empirical Tests of the Relationship Between Entry and Concentration

We turn now to our empirical examination of the
relationship between entry and market share concentration. Data from the period 1972 to 1980 were
used to construct the variables employed in the
studies reported here. The statistics on banking
activity and market demography were available only
for the years 1972, 1974, 1976, 1978 and 1980. (We
chose not to expand the study into the 1980s to avoid
the influence of the major changes in state and
federal banking regulation that occurred at that
time.) Because of the complexity involved in constructing some of the measures employed here, we
digress momentarily to describe the construction of
dependent and independent variables.
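Gibrat's stochastic growth process, invoked above in the discussion of bank size trends, is easy to illustrate: give identical firms random growth rates that are independent of size, and market shares become skewed anyway. The following is a seeded toy simulation, not a calibration to the California data:

```python
import random

# Gibrat sketch: 50 identical banks grow at i.i.d. random rates
# (independent of size) for 20 periods; market shares end up
# skewed even though no bank has any efficiency or scale
# advantage over the others.
random.seed(1)
sizes = [1.0] * 50
for _ in range(20):
    sizes = [s * random.uniform(0.8, 1.25) for s in sizes]

shares = sorted((s / sum(sizes) for s in sizes), reverse=True)
top4 = 100 * sum(shares[:4])
print(f"four-firm share: {top4:.0f}% (vs. 8% if shares stayed equal)")
```

The point of the sketch is the qualitative one made in the text: randomness alone concentrates market share, so observed concentration need not reflect scale economies or persistent efficiency differences.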

Constructing The Variables
We measured entry by observing flows of institutions, branches and other measures of capacity in
and out of various geographic banking markets. The
entry rates were measured using two-year measurement intervals. Thus, from 1972 to 1980, we
obtained four two-year cross-sections of entry
observations. Since the basic form of the estimated
relationship is that presented in the preceding section, a lagged entry rate variable was one of the
arguments of the regression, leaving us with three
cross-sections to study.
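Structurally, each cross-section regression is ordinary least squares of the entry rate on the Herfindahl index, the market-demand scalars, and the once-lagged entry rate. A sketch on synthetic data follows; the variable names, sample values, and "true" coefficients are illustrative assumptions, not the paper's estimates:

```python
import numpy as np

# OLS sketch: entry(t) ~ const + HHI + pop. growth + entry(t-1),
# estimated by least squares on synthetic county observations.
rng = np.random.default_rng(0)
n = 58                                  # one row per county market
hhi = rng.uniform(1500, 4000, n)        # Herfindahl index
pop_growth = rng.uniform(0.0, 0.05, n)  # two-year population growth
entry_lag = rng.uniform(0.0, 0.07, n)   # lagged entry rate
# Synthetic "true" relationship, used only to generate the data.
entry = (0.00002 * hhi + 0.5 * pop_growth - 0.1 * entry_lag
         + rng.normal(0, 0.005, n))

X = np.column_stack([np.ones(n), hhi, pop_growth, entry_lag])
beta, *_ = np.linalg.lstsq(X, entry, rcond=None)
print("estimated coefficients:", np.round(beta, 5))
```

With data generated this way, the estimated HHI and population-growth coefficients come out positive, mirroring the signs the paper reports for its pooled cross-sections.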
Our main interest is in the notion of new entry,
that is, the entry of banking firms into banking


markets in which they were previously not represented. We are also interested, however, in the
possibility that high levels of concentration may
induce existing firms to expand their presence in the
marketplace, net of any withdrawal from the marketplace that may occur. The branch growth rate
was used to study this entry process. Finally, we
wish to study the extent to which entry is a phenomenon of existing banks or new banks. We therefore examined the de novo branch growth and de
novo bank entry rate as additional measures of entry
activity.
In all cases, the entry rate was defined as the
change in the entry measure occurring over a two-year period divided by the level of that measure at
the beginning of the two-year period. Therefore, in
some of the entry measures studied, we distinguished between a gross rate of entry, an exit rate
and a net rate of entry. The gross rate was computed
by counting all entry events over each two-year time
frame as a percentage of the level in the base year of the
two-year period. The exit rate was a count of all exit
events as a percentage of the level of the measure in
the base year of the two-year period, and the net entry
rate was constructed as entry events net of exit events
divided by the level of the measure in the base year
of the period.
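The gross, exit, and net rates described above reduce to simple counts over each two-year window, each divided by the base-year level. A sketch (the county counts are hypothetical):

```python
# Entry-rate construction over one two-year window, as described
# in the text: each count is divided by the base-year level.

def entry_rates(base_level, entries, exits):
    gross = entries / base_level        # all entry events / base level
    exit_rate = exits / base_level      # all exit events / base level
    net = (entries - exits) / base_level
    return gross, exit_rate, net

# Hypothetical county: 20 institutions in the base year,
# 3 entries and 1 exit over the following two years.
g, x, n = entry_rates(20, 3, 1)
print(g, x, n)  # 0.15 0.05 0.1
```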
The independent variables in the regression, if
they are level variables, are the measures relevant to
the base year of the entry measure. Those independent variables that are rate variables (such as population and income growth) are the rates that occurred
in the two-year period just prior to the base date of
the entry measure. In this way, the independent
variables may be viewed as measures that are truly
not contemporaneous with the entry activity they are
seeking to explain.
In addition to the lagged dependent variable, the
independent variables consist of the Herfindahl
index and the number of branches and/or institutions as measures of the structural characteristics of
the banking market. The rate of growth of per capita
income and the rate of growth of population were
included as scalars of market demand.
Numerous variations on these three basic entry
notions also may provide insight into the processes
that stimulate entry into California banking markets. We examine, for example, the exit of existing
firms to see if the process of elimination of banking
firms is in any way related to market share concentration or the other demographic or structural
variables. Most exit in the banking industry occurs
through merger, either voluntary or arranged, for
failing banking firms by bank regulators. The exit
concept that can be developed from available data,
therefore, differs somewhat from the exit concept in
the economic literature, which refers to the departure of productive capacity from the marketplace
altogether.

New Bank Entry
Table 1 presents regression results from a pooled
time series of cross-sections used to analyze the
effects of concentration and the other independent
variables on new entry.23 Concentration and the rate
of new entry appear to be positively related in the
sample. The size of the coefficient indicates that an
increase in the Herfindahl Index of 50 would result
in an increase in the two-year rate of entry of 2 percent.
(This is an elasticity of approximately 0.6 at the
sample means.)24 New entry also is positively
related to population and personal income growth in
the county markets, although with marginal significance.25
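The elasticity quoted above is evaluated at sample means: the regression coefficient scaled by the ratio of the mean Herfindahl index to the mean entry rate. The means and coefficient below are assumptions chosen only to reproduce an elasticity near 0.6; they are not values reported in the paper's Table 1:

```python
# Elasticity of entry with respect to concentration, evaluated
# at sample means: epsilon = b * (mean of X / mean of Y).

def elasticity_at_means(b, mean_x, mean_y):
    return b * mean_x / mean_y

# Illustrative values only: regression coefficient b, mean
# Herfindahl index, and mean two-year entry rate.
b, mean_hhi, mean_entry = 0.00002, 2400, 0.08
print(round(elasticity_at_means(b, mean_hhi, mean_entry), 2))  # 0.6
```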
The number of institutions already in the market
appears to have a significant, positive effect on
entry. This finding is consistent with the notion that
entry limit pricing discipline may be more difficult
to maintain in a market in which there are many
potential rivals. Alternatively, the positive association between the number of institutions and the rate
of entry may be the result of differences in the
minimum efficient scale in markets of different
capacity. It may be easier, for example, for a bank to
enter a market with the capacity to support a large
number of banks than a market that can support only
a few banking facilities of an efficient size. Attempts
to verify this hypothesis, however, were unsuccessful.26
Finally, it should be noted that the coefficient on the lagged value of the entry variable is small and of marginal statistical significance. This does not necessarily imply that past entry rates do not influence current rates, given the simplicity of the lagged structure permitted us by the data. The fairly consistent negative sign on this variable may indicate that stochastically high or low rates of entry in a given time period may, respectively, discourage or encourage entry activity in the two years following. This could be the consequence of information lags, the reaction of incumbent firms or simply misspecification of the model. In addition, the lagged variable may be a proxy for some omitted, contemporaneous influence on entry.27

Table 1 also presents the results of studies of the exit rate and the net bank entry rate using a regression model of the same structure containing the same variables. Analysis of our sample indicates that most banking firms "exited" the market through merger with surviving institutions. Most of the coefficients in the exit regression are not statistically significant. However, the significant, positive association of exit with population growth suggests that incumbent firms respond at least partly to the growth in the scale of the market by acquiring existing banking capacity. The net bank entry rate regression reinforces the notion, however, that new entry is responding not so much to growth in market scale as to the level of concentration in the market.

De Novo Entry and Concentration

In the preceding reported results, we studied the effects of a market's concentration on the entry of banking firms not previously serving that market. In Table 2, we focus our attention on true de novo bank entry by studying the effects of concentration on the rate at which new banking firms are created. It is important to make this distinction in the event that regulatory barriers to entry - which are presumably more important for de novo banks than for new branch facilities - are an important determinant of entry patterns.

As Table 2 indicates, however, the pattern of the relationship between de novo entry rates and concentration is similar to that observed between concentration and all forms of entry into the county market. In our sample, the entry of de novo banks explains about one-third of the total entry rate over our study period; most of the new entry into county markets was due to the geographic expansion of existing banks. Nevertheless, it appears that the market structure variables have an influence on de novo entry that is qualitatively similar in direction and magnitude to that observed for geographically expanding institutions.28

Branch Entry and Concentration

By analyzing entry only in terms of entry of banking institutions, we may be under- or overstating the responsiveness of entry to changes in
TABLE 1
Studies of Bank Entry

                               Rate of Bank       Rate of Bank       Net Rate of
                               Entry              Exit               Bank Entry
Lagged dependent variable      -0.12 (1.4)        -0.13 (1.7)        -0.13 (1.5)
Herfindahl Index               1.8 × 10⁻⁵ (3.2)   3.3 × 10⁻⁶ (1.2)   1.5 × 10⁻⁵ (2.9)
Personal Income Growth         0.03 (1.6)         0.01 (0.92)        0.02 (1.2)
Population Growth              0.55 (1.8)         0.26 (1.8)         0.31 (1.1)
Number of Institutions         4.1 × 10⁻³ (3.7)   2.0 × 10⁻³ (4.0)   2.1 × 10⁻³ (2.2)
R²                             0.39                                  0.26
n                              174                174                174

Note: Numbers in brackets are t-ratios.

concentration if the entering institutions are larger or smaller, respectively, than existing banking firms. In addition, we may be failing to measure increases in total banking capacity that are occurring because of the growth of incumbent banking institutions in a given market.

We examine this possibility in Table 3 through three different measures of changes in banking capacity. The first two regressions examine the branch growth pattern of incumbents as well as out-of-county banks before and after correction for closures and consolidation of branches. The effect of concentration on this measure, once again, is qualitatively similar to that found in all other entry measures. Branches may not, however, accurately measure the true increments to banking service capacity created by new entry or branching activity. Ideally, we would like to know the design capacity of the new facilities to study capacity increments directly. In the absence of this data, we are able only to look at the actual activity attracted to the new facilities. In the third regression presented in Table 3, the rate of deposit growth represented by new branches (of either de novo or incumbent banks) is employed as a dependent variable. Once again, we observe a positive relationship between concentration and subsequent entry.

IV. Summary and Conclusions

The vigorous growth in the number of banks and branches in California in the 1970s has provided an opportunity to test the simple notion that new entrants will be attracted to markets with high concentration because high concentration is, according to the structure-performance hypothesis, associated with abnormally high profits. We, indeed, have observed a positive relationship between entry and the ambient level of concentration in the market, a finding that is consistent with, but not necessarily proof of, the notion that concentration and profit rates are positively correlated.29 In addition, the rate of entry is enhanced, rather than retarded, by the presence of a large number of banking institutions. This finding is consistent with the argument that firms in a concentrated market not only enjoy higher profits, but are able to pursue entry-limiting pricing strategies more
TABLE 2
Studies of De Novo Bank Entry

                               Rate of De Novo     Rate of De Novo     Net Rate of De
                               Bank Entry          Bank Exit           Novo Bank Entry
Lagged dependent variable      -0.17 (2.0)         -0.15 (1.4)         -0.11 (1.5)
Herfindahl Index               1.6 × 10⁻⁵ (3.7)    3.9 × 10⁻⁶ (1.4)    1.2 × 10⁻⁵ (3.3)
Personal Income Growth         -8.3 × 10⁻⁴ (0.07)  -1.4 × 10⁻³ (0.16)  5.8 × 10⁻⁴ (0.05)
Population Growth              0.3 (1.3)           0.22 (1.4)          0.05 (0.25)
Number of Institutions         1.9 × 10⁻³ (0.84)   2.1 × 10⁻³ (1.4)    -2.5 × 10⁻⁴ (0.13)
Number of Branches             -1.1 × 10⁻⁴ (0.57)  1.2 × 10⁻⁴ (0.94)   1.8 × 10⁻⁵ (0.11)
R²                             0.28                0.15                0.15
n                              174                 174                 174

easily than in a market where there are few rivals of any size.

Although our findings provide support for those who believe structure influences performance, we are unable to extend the implications of our study to any particular prescription regarding anti-trust policy. We do not observe efficiency directly in structure-performance studies and thus are not in a position to conclude that the manipulation of market structure will necessarily make a market more efficient. Conversely, although it is tempting to interpret the findings as evidence that entry can be relied upon to repair inefficiently structured markets, we have no way of evaluating whether the observed levels of entry are sub- or supra-optimal in the sense of dynamic efficiency. Anti-trust policymakers by necessity must bring their own judgment to bear on evaluating such evidence until a time when theory and empirical evidence can be more helpful.

TABLE 3
Studies of Alternative Entry Measures

                               Total Branch        Net Total Branch    Deposit
                               Growth Rate         Growth Rate         Entry
Lagged dependent variable      -0.05 (0.69)        0.02 (0.23)         -0.14 (1.6)
Herfindahl Index               1.6 × 10⁻⁵ (3.6)    1.2 × 10⁻⁵ (2.9)    4.5 × 10⁻⁶ (2.3)
Personal Income Growth         -2.5 × 10⁻⁴ (0.02)  -8.3 × 10⁻⁴ (0.07)  -6.1 × 10⁻⁴ (0.1)
Population Growth              0.42 (1.7)          0.21 (0.99)         0.2 (1.8)
Number of Institutions         4.3 × 10⁻³ (2.0)    7.0 × 10⁻³ (2.9)    7.0 × 10⁻⁴ (2.0)
Number of Branches             -4.3 × 10⁻⁴ (2.1)   -2.7 × 10⁻⁴ (1.5)   n.a.
R²                             0.43                0.21                0.27
n                              174                 174                 174

FOOTNOTES
1. The model of perfect competition assumes that a market is characterized by unrestricted entry and exit, the
absence of scale economies, homogeneous products and
perfect information in addition to atomistic production.
2. Joe S. Bain, "Workable Competition in Oligopoly," American Economic Review, May 1950, pp. 35-47.

3. Joe S. Bain, "Relation of Profit Rate To Industry Concentration: American Manufacturing, 1936 to 1940," Quarterly Journal of Economics, August 1951, pp. 293-324.

4. Bain knew the hazards of testing the hypothesis in this manner and pointed out that an observed structure-performance relationship was of interest only if entry, technological and demand conditions were the same across the sample and uncorrelated with market share concentration (see Bain, 1951).

5. Demsetz (1973) argued that some studies finding a positive relationship between market share concentration and profits actually showed a relationship between large banking firms and profitability and that the expected higher returns by smaller firms were not found. His results, like those of most concentration-profit studies, however, were not particularly consistent and may suffer from some of the problems pointed out later in this paper.

6. Franklin R. Edwards, "Managerial Objectives in Regulated Industries," Journal of Political Economy, February 1977, pp. 147-162.

7. For a survey of the various criticisms of structure-performance models in banking, see D. Osborne and J. Wendell, "Research on Structure, Conduct and Performance in Banking: 1964-79," Oklahoma State University, July 1983, mimeo.

8. Common performance measures in these studies include profit rates, deposit rates, commercial loan rates, automobile loan rates, service charges and banking hours.

See Osborne and Wendell for an up-to-date review of the
various surveys of this extensive literature.

it may be deterred from entering the market.) For additional
debate concerning the relevance or irrelevance of
demand and cost conditions on entry, see Michael
Spence, "Entry, Capacity, Investment and Oligopolistic Pricing," Bell Journal of Economics, Autumn 1977.

9. In econometric modeling parlance, structure and entry
would be related through an identity, and structure would
be a redundant variable.

14. Our study terminates in 1980, however, to avoid any
perturbing influence of the major changes in banking
legislation that occurred in 1980 and 1982 at the federal
level and also because of lags in the availability of certain
demographic and economic variables employed in the
study.
15. The Herfindahl (or, more properly, the Herfindahl-Hirschman) Index is computed by squaring and summing
the market share, in percent terms, of all firms in the
marketplace. In our sample of California counties, this
index ranges from about 1,000 to its theoretical maximum
(10,000). Alternative measures of concentration frequently
employed are the three-, four- and five-firm concentration
ratios computed, respectively, by summing the market
shares of the largest three, four or five firms in the marketplace.
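The computations described in this footnote can be sketched directly; the four-bank market below is hypothetical, not one of the study's county markets.

```python
def herfindahl(shares_pct):
    """Herfindahl-Hirschman index: the sum of squared market shares,
    with shares expressed in percent (pure monopoly: 100**2 = 10,000)."""
    return sum(s ** 2 for s in shares_pct)

def concentration_ratio(shares_pct, k):
    """k-firm concentration ratio: summed shares of the k largest firms."""
    return sum(sorted(shares_pct, reverse=True)[:k])

shares = [40, 30, 20, 10]  # hypothetical four-bank county market
print(herfindahl(shares))              # 3000: "concentrated" by the 1800 criterion
print(concentration_ratio(shares, 3))  # 90
```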

10. A firm employing an entry-limit pricing strategy adjusts
price to maximize the present value of long-term profits
taking into account the fact that the flow of new entrants is
positively related to the profits enjoyed by incumbent firms.
For a discussion of limit pricing in the context of banking,
see Timothy H. Hannan, "The Theory of Limit Pricing: Some
Applications for the Banking Industry," Journal of Banking
and Finance, October 1979, pp. 221-234. See also, D.
Hay, "Sequential Entry and Entry Deterring Strategies in
Spatial Competition," Oxford Economic Papers, July 1976,
pp. 240-257.
11. See Weisacker (1983). One example of a potential
cause of sub-optimal entry is the existence of positive
externalities of the activities of one firm on another.
Weisacker also argues that the existence of economies of
scale in a game-theoretic oligopoly pricing context could
lead either to sub- or supra-optimal entry from an efficiency
standpoint.

We examined the use of the three-firm concentration ratio
in lieu of the Herfindahl Index. For California, at least, there
appears to be no important difference and we have chosen
to report only the Herfindahl results in the tabulations that
follow.

12. In the traditional theory of the firm, there are no impediments to the exit of firms from the marketplace. Exit normally is viewed as occurring because of random processes related to the allocation of management skills, cash
flow problems and other situations specific to the firm.
There is no a priori reason to expect a relationship to exist
between exit rates and concentration. In the banking
industry, most firms exit by way of merger with another firm.
True exit of capacity is observed, however, in the case of
individual bank branches. Entry and exit processes for
both banking firms and branches are studied below.

The Herfindahl Index presently is employed by the United
States Department of Justice in formulating its merger
guidelines. Presently, the Department of Justice considers
any market with a Herfindahl Index in excess of 1800 to be
concentrated. By this criterion, most of the county markets
defined in this paper are concentrated.
16. See Osborne and Wendell (1983), Section V, for a
discussion of this issue.

13. There has been considerable debate over the years
concerning which, if any, demand or cost conditions confronting a firm can result in the erection of "barriers to
entry." Bain (1951) argued that entry could be impeded if
(1) incumbent firms enjoyed cost advantages not available
to new entrants, (2) economies of scale existed or (3)
products were differentiable. Stigler (1973) dismissed the
second and third factors as barriers to entry and argued
only for the case of incumbent cost advantages. Subsequent authors have argued that both views are inappropriate because they focus on entry conditions rather than the
consequences on efficiency of the factors enumerated.

17. See Stolz (1976) and Osborne and Wendell (1983).
18. See the evaluation of the work of Stolz (1976) in
Osborne and Wendell (1983), Section V.
19. Hannan (1983), for example, used local labor market
and employment pattern data to define nearly this many
markets for the (much smaller) state of Pennsylvania. He
did find, however, that his results were relatively insensitive
to variation in the definitions of geographic market areas.
Given the comparatively benign weather and high quality
road system enjoyed by Californians, however, the relatively large size of the individual counties may not be
inappropriate.
20. The rate stated in the chart is the rate occurring
between the noted date and the two previous years. Thus,
our data, while spanning 1972 to 1980, are able to report
entry rates only from 1974 to 1980.
21. A simple, linear regression of deposit size in 1972
(SIZE) and a percentage change in deposits over the
period (GROWTH) resulted in the following coefficient
estimates and associated t-statistics:

In particular, we are not concerned with barriers to entry
per se, but rather whether certain cost or demand conditions can lead to sub-optimal entry and attendant efficiency losses. When viewed from this perspective, Bain's original list of factors has the potential to lead to sub-optimal entry and inefficient production, but the precise
outcome depends upon numerous other assumptions (see
Weisacker 1983).
In addition, factors other than those enumerated by Bain
can lead to sub-optimal entry including positive externalities to production. (For example, a firm may have to
spend a considerable amount of money to design a successful product or a marketing strategy. If the firm recognizes that it will be unable to keep its potential rivals from
obtaining the same information subsequently without cost,

GROWTH = 319.27 - 1.9 × 10⁻⁵ × SIZE,  R² = 0.01, n = 95
         (6.8)    (1.0)

Performing the same regression on a county-by-county
basis yields less consistent results, with occasional significant positive or negative coefficients on the SIZE variable.
The absence of a consistent pattern and the small samples
involved prohibit us from drawing any conclusions about
these findings contrary to the general implication that the
growth rates are independent of size.

25. Personal income growth and population growth are
computed by county from data provided by the California
Department of Finance.
26. Attempts were made to test this notion by inclusion of
non-bank measures of the capacity of the county markets.
In particular, the level of personal income and population in
the county markets was included in the regression formulation. In every case, however, these variables proved statistically insignificant and had inconsistent signs.

22. Gibrat's Law demonstrates that, if a firm's growth rate
per period is a random, normally distributed variable, the
firm's size distribution ultimately will become skewed even
if the initial firm size distribution is uniform. Thus, industries
and markets can become concentrated in the absence of
economies of scale or entry barriers by virtue of randomness in the outcomes of management selection processes,
marketing decisions and other internal decisions affecting
growth of a firm.
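The mechanism footnote 22 describes - identical firms subjected to independent, proportional random growth - can be illustrated with a small simulation; all parameters below are invented for illustration.

```python
import random
import statistics

random.seed(1)
sizes = [100.0] * 500  # start from a degenerate (equal-size) distribution

# Fifty periods of independent, proportional ("Gibrat") growth shocks.
for _ in range(50):
    sizes = [s * (1.0 + random.gauss(0.03, 0.10)) for s in sizes]

mean, median = statistics.mean(sizes), statistics.median(sizes)
print(mean > median)  # True: the size distribution is now right-skewed
```

The mean pulls ahead of the median because repeated proportional shocks compound multiplicatively, producing a skewed (approximately lognormal) distribution without any scale economies or entry barriers.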

27. The relatively poor statistical performance of the lagged entry rate variables in the regressions reported in this
paper may suggest, in fact, that the lagged adjustment
formulation simply is inappropriate. We have not eliminated the lagged variable from the regressions, however,
because at the very least this variable may perform some
modest role in controlling for cross-sectional variation in
entry rates that is not explained by the variables included
as arguments of our regressions.

Such a process of concentration would not in itself be
expected to affect entry since each firm is, by definition,
confronted with the same distribution of "luck" in every
period and thus this aspect of entry conditions remains
unchanged over time. Indeed, the finding of growth rates
unrelated to size militates somewhat against explanations
of concentration based on economies of scale since these
are permanent features of size not independently drawn
each period. Gibrat's 1931 work was articulated by
Michael Kalecki, "On the Gibrat Distribution,"
Econometrica, April 1945.

28. In our sample, the mean rate of entry of new institutions
is 15.2 percent (biennially). The mean rate of de novo bank
entry (as a percent of total banks in the market) is 4.2
percent. Most de novo entry, however, involved one
branch only. When compared with the total entry rate
measured as the rate of change in the number of branches,
the de novo entry figure appears somewhat larger, since
branching growth is only 12.3 percent biennially in our
sample period.

23. The database used in this study was constructed
using data from the period 1972 to 1980. Several of the
variables used in this study are rate variables, such as the
entry rate, and the rates of population and income growth,
and these are constructed using level measures of these
variables at the beginning and end of two-year periods
because of data availability.

It is useful to note, however, that, for our sample at least, the
rate of gross entry of banks exceeds the rate of gross entry
of branches. Although it would be desirable to measure
entry rates in terms of some meaningful measure of banking capacity, we were unable to do so and could only
weight each form of entry similarly in these computations.

The results reported here were run separately on each
cross section as well as in the pooled variant presented.
The results are qualitatively unchanged in the sense that
the signs of variables significant in individual cross sections remained the same in the pooled sample although the
enlarged sample results in improved standard errors for
the estimates. In the reported regressions, growth rates
are in decimal form. The Herfindahl Index is measured with
a maximum value of 10,000 and all other variables are in
level form.

29. We also studied the relationship between concentration and profits directly in our work. However, because
profit data are available only for the banking enterprise as a
whole, whereas market concentration is measured at a
local market level, we were forced to construct a concentration measure for the bank as a whole using depositweighted individual county concentration measures.
Whether because of this construction or because of the
many problems with profit measures cited above, we were
unable to find a consistent relationship between profitability and any of our structural or demographic variables.

24. Presently, the Department of Justice considers an
increase in the Herfindahl of 200 points or more to be
significant.


Barbara Bennett*
Bank regulators and other analysts worry that the recent rapid growth
in standby letters of credit (SLCs) outstanding is a response to more
stringent capital regulation and has increased bank risk. This analysis
traces the growth of such instruments primarily to the growth of direct-finance markets in a setting of increased overall economic risk. It also
finds that SLCs are at least potentially riskier than loans. Although banks
may be applying higher credit evaluation standards in partial compensation, the issuance of SLCs nevertheless may warrant some form of capital-related regulation.
The off balance sheet activities of commercial banks have attracted a lot of attention lately. Regulators, securities analysts and the financial press all have voiced concerns about the rapid growth in such contingent obligations as loan commitments, financial futures and options contracts, letters of credit, and foreign exchange contracts. Although they are not recognized as assets or liabilities on bank balance sheets (hence the term, "off balance sheet activities," or OBS), these contingent claims involve interest rate, credit, and/or liquidity risks. Moreover, because they provide the opportunity for substantially greater leverage than is the case for banks' lending and investment activities, OBS have the potential to increase banks' overall risk.

Ironically, bank regulators' efforts to control risk-taking through more stringent capital regulation may be partly responsible for the growth in OBS over the last few years. Because regulatory definitions of capital adequacy currently do not include OBS, banks may have an incentive to shift risk-taking towards these relatively less-regulated activities. To correct this problem, the federal bank regulatory agencies are considering ways to factor OBS exposure into their formal evaluation of a bank's capital adequacy. Consequently, regulators need to analyze the nature and degree of risk involved in each type of OBS as compared to banks' other activities.

This article examines one off balance sheet activity that has grown quite rapidly over the last several years: standby letters of credit. The first section discusses the uses for standby letters of credit and the reasons for their growth. In the second section, a framework for analyzing the risks associated with standby letters of credit is developed. Unfortunately, data limitations make impossible any definitive statements about the impact of standby letters of credit on overall bank risk. Finally, the paper concludes with some observations about the regulatory treatment of standby letters of credit.

* Economist, Federal Reserve Bank of San Francisco. Research Assistance was provided by Kimya
Moghadam and Julia Santiago.


I. The Market for Standby Letters of Credit
Of all the off balance sheet activities in which
U.S. banks engage, the issuance of standby letters
of credit (SLCs) has attracted the most attention
lately. Many observers point to the rapid growth in
SLCs outstanding over the last few years as well as
the prominent role such instruments played in several recent bank failures - most notably, Penn
Square National Bank in 1982 - as evidence that
SLCs may be increasing bank risk significantly.
SLCs outstanding grew from $80.8 billion in June
1982 to $153.2 billion in June 1985 - a 90 percent
increase over the period. Moreover, most of that
growth occurred at the 25 largest banks, which
recorded more than a $40 billion increase in SLCs
outstanding.
A letter of credit (LC) is a contractual arrangement involving three parties - the "issuer" (the
bank), the "account party" (the bank's customer)
and the "beneficiary." Typically, the account party
and the beneficiary have entered into a contract
requiring the former to make payment(s) or perform
some other obligation to the latter. At the same time,
the account party has contracted with its bank to
issue a letter of credit which, in effect, guarantees
that by substituting the bank's liability for that of the
account party, the account party will perform
according to the terms of the original contract with
the beneficiary. Initially, the bank's obligation under
the LC is a contingent one because no funds are
advanced to the beneficiary until that party presents
the documents that are stipulated in the LC contract.
There are two types of LCs: the more traditional
commercial letter of credit which generally is used
to finance the shipment and storage of goods, and
the standby letter of credit which is being used in
connection with a growing variety of transactions,
including debt issuance and construction contracts.
Unlike the commercial LC, which is payable upon
presentation of title to the goods that have been
shipped, the SLC is payable only upon presentation
of evidence of default or nonperformance on the part
of the account party. As such, SLCs typically expire
unused, in contrast to commercial letters of credit.
Because SLCs are payable only upon nonperformance on the part of the account party, they are a guarantee of either financial or economic performance on the underlying contract.1 The issuer of the SLC promises to advance funds to make the beneficiary whole in the event of the account party's failure to perform according to the terms of the contract with the beneficiary. An SLC involving a financial guarantee requires the issuing bank to pay any principal or interest on debt owed the beneficiary by the account party should the latter default. According to a recent survey, just over half of banks' SLCs outstanding backs some form of debt obligation.2 An SLC backing a construction contract, in contrast, represents a performance guarantee and requires the bank to make a payment to the beneficiary if the contractor does not complete the project satisfactorily.
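The difference in payment triggers between the two kinds of letter of credit can be summarized in a small sketch; the function and document names are hypothetical, not an actual settlement convention.

```python
def lc_pays(lc_type, documents):
    """Return True when the issuing bank must advance funds, given the
    documents the beneficiary presents (hypothetical sketch)."""
    if lc_type == "commercial":
        # Commercial LC: payable upon presentation of title to shipped goods.
        return "title_to_goods" in documents
    if lc_type == "standby":
        # SLC: payable only upon documented default or nonperformance
        # by the account party.
        return "evidence_of_default" in documents
    raise ValueError("unknown letter-of-credit type")

print(lc_pays("commercial", {"title_to_goods"}))  # True
print(lc_pays("standby", {"title_to_goods"}))     # False: SLCs typically expire unused
```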
By issuing an SLC, the bank is assuming the risk
that normally would have been borne by the beneficiary. However, it is the account party that arranges
the SLC and compensates the bank for the risk. In
return for paying the bank's fee and reducing the
beneficiary's risk, the account party expects to
obtain a higher price for the debt issued to or the
services performed for the beneficiary.
In general, the account party will choose to
arrange a standby letter of credit whenever the cost
of the transaction (that is, the bank's fee) is less than
the value of the guarantee to the beneficiary (as
measured by the premium the beneficiary is willing
to pay for the account party's debt or services with
the SLC backing). The size of this differential
between the bank's fee and the beneficiary's willingness to pay for the guarantee depends upon two
factors.
First, the value of the guarantee to the beneficiary
will depend on the creditworthiness of the issuing
bank as compared to that of the account party and
the relative costs of obtaining information about the
creditworthiness of each. An SLC issued by a bank
with a poor credit rating is not likely to be worth
much to the beneficiary since the probability of that
bank's default on its obligation may be high. Likewise, an SLC issued by a small, unknown bank may
have little value since the cost to the beneficiary of
obtaining information to evaluate the bank may be
greater than the cost of evaluating the account party
and underwriting the risk itself.


These observations are consistent with the data
presented in Tables 1 and 2, which show that most
SLC issuance occurs at the largest banks and that
the higher rated banks tend to do relatively more
SLC business.
Second, the size of the differential will depend on
the extent of the issuing bank's comparative advantage in underwriting the risk of default on the part of
the account party. (Of course, the extent to which the
bank's comparative advantage will be reflected in
the fees the bank charges depends on the level of
competition among issuers of SLCs). With respect

to most beneficiaries, the issuing bank's underwriting costs are likely to be substantially lower because
the bank is better able to diversify the risk associated
with SLCs and because the bank enjoys certain
economies in credit evaluation. For example, the
marginal cost of performing an evaluation of the
account party is lower for the bank than for the
beneficiary because the bank frequently has an
ongoing relationship with the account party; this
makes the cost of obtaining information much lower
for the bank.
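The account party's decision rule described earlier reduces to a comparison of the bank's fee with the beneficiary's willingness to pay for the guarantee; a stylized sketch follows, with invented basis-point figures.

```python
def arrange_slc(bank_fee, beneficiary_premium):
    """The account party arranges the SLC only when the bank's fee is less
    than the premium the beneficiary would pay for the guaranteed claim."""
    return bank_fee < beneficiary_premium

# Hypothetical: a 50-basis-point fee against a 75-basis-point improvement
# in the account party's borrowing terms.
print(arrange_slc(0.0050, 0.0075))  # True: the guarantee is worth arranging
```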

TABLE 1
SLC Issuance by Size of Bank
(Billions of dollars)

                                                                        June 1985
Year-End:                          1979  1980  1981  1982   1983   1984  (Percent share)
Banks with Assets of Over $100 MM  34.1  45.7  69.9  98.3  117.4  144.3  153.2 (100)
25 Largest Banks                   27.2  36.5  55.5  77.6   91.5  111.2  117.9 ( 77)
10 Largest Banks                   24.3  32.0  47.9  65.0   77.1   92.4   96.3 ( 63)
15 Other Large Banks                2.9   4.5   7.6  12.6   14.4   18.8   21.6 ( 14)
All Other Banks                     6.9   9.2  14.4  20.7   25.9   33.1   35.3 ( 23)

Source: Quarterly Reports of Condition

TABLE 2
SLC Issuance of 25 Largest Banks by Bank Rating*
(Billions of dollars)

                                              Dec 1982  June 1985  Percent Change
Large Banks (with assets over $50 billion)        41.6       63.1            51.7
  Aaa-Aa (4 banks)                                33.4       51.7            54.8
  A or less (1 bank)                               8.2       11.4            39.0
Medium Banks (with assets of $10-50 billion)      35.9       54.6            52.1
  Aaa-Aa (11 banks)                               21.9       37.4            70.8
  A or less (8 banks)                             14.0       17.2            22.9
Small Banks (with assets under $10 billion)        0.1        0.2           100.0
  A or less (1 bank)                               0.1        0.2           100.0

*Ratings of banks based on latest evaluation in Moody's Corporate Credit Reports.


mediaries such as banks. However, this decline in
financial intermediation has also meant that the
undiversified investors in such markets must bear
more credit risk than if they were to invest in the
deposit liabilities of commercial banks. Apparently,
such an increase in credit-risk exposure is unpalatable to at least some portion of these investors
because 15 percent of all dealer-placed taxable commercial paper is supported by some sort of legally
binding guarantee and nearly all rated commercial
paper also is backed by a bank loan commitment. 6
The second reason that financial guarantees have
grown rapidly over the last several years is that
overall economic risk has increased over the same
period. The rampant inflation of the late 1970s, the
increased volatility of interest rates and business
activity of the early 1980s, and the unexpected
sharp deceleration in the rate of inflation in the
middle 1980s have caused wide swings in asset
prices and returns on investment. Consequently, the
demand for instruments like SLCs and other guarantees that reduce the risk to the beneficiary has
increased tremendously.
Banks' involvement in this market is at once an
extension of their traditional lending business and,
because SLCs are not funded, a significant departure from it. Like their lending business, banks'
issuance of SLCs entails the underwriting of credit
risk. In this area, banks enjoy certain economies of
specialization that make them lower-cost issuers of
financial guarantees. They can easily (that is, without cost) diversitY the risk associated with SLCs.
Also, banks typically have other lending and
deposit relationships with their SLC customers. As a
result, the marginal cost to banks of obtaining
information to perform a credit evaluation for the
purposes of issuing an SLC is very low. Moreover, in
contrast to insurance companies, banks do not generally secure their guarantees with a formal collateral arrangement with the account party since they
usually have the right to debit the account party's
deposit accounts. This lack of a formal collateral
arrangement makes banks' SLCs more attractive,
but it also increases the bank's risk somewhat. (See
the next section for a discussion of SLC risk.)
Given the enormous increase in the demand for
guarantees, the fact that banks are low-cost issuers
may be sufficient explanation for the rapid growth of

Chart 1
Standby Letters of Credit
of U.S. Commercial Banks
Billions of Dollars

160

140
120
100
80
60
40
20

o 1976

1978

1980

1982

19841985

The Growth of SLCs
The almost exponential growth in SLCs outstanding since the late 1970s (see Chart 1) is just one
manifestation of a rapidly growing general market
for guarantee-type products. In addition to the SLCs
that banks offer, surety and insurance companies are
now offering such guarantees as credit-risk
coverages (which guarantee repayment of principal
and interest on debt obligations) and asset-risk
coverages, such as residual value insurance and
systems performance guarantees. This expansion in
the types of coverages offered has given insurance
companies a rapidly growing source of premium
income. Between 1980 and 1984, the insurance
industry's net premiums from such surety operations 3 nearly doubled, rising from $900 million to
$1.6 billion. 4 Financial guarantees offered by other,
specialized providers have grown rapidly as well.
Municipal bond insurance, for example, was virtual!y nonexistent prior to 1981, but now supports
an estimated 29 percent, or $6.4 billion, of new
issues of long-term municipal bonds. s
Two factors account for this growth in the market
for financial guarantees in general, and SLCs in
particular. First, the growth over the last ten to 15
years of direct-finance markets has increased the
credit-risk exposure of investors who may prefer not
to bear such risk. Such direct-finance markets as the
commercial paper market have grown rapidly since
the late 1960s because borrowers are able to obtain
funds more cheaply from them than through inter-


bank-issued SLCs over the last several years. However, banks also may have an incentive to respond to this demand since they can overcome binding regulatory constraints on their lending activities by doing so. For example, at current levels of interest rates, reserve requirements add an estimated 25 to 30 basis points to banks' cost of funds, making bank credit considerably less attractive than other sources of credit.7 Because SLCs are not funded and are therefore unaffected by reserve requirements, they represent a less costly way of assuming a given level of credit risk.

A more important regulatory constraint that undoubtedly has given banks incentive to issue SLCs is the move towards tougher capital regulation in recent years. Regulators began to express serious concern about bank capital adequacy in the late 1970s as the aggregate capital-to-assets ratio drifted to historically low levels. Then, in December 1981, the Federal Reserve Board (FRB) and the Office of the Comptroller of the Currency (OCC) issued "Capital Adequacy Guidelines" to pressure large banks into improving their capital-to-asset ratios. More formal standards for large banks were imposed in June 1983, and even more stringent standards were imposed on the industry as a whole in March 1985.

Economic theory suggests that the imposition of tighter capital regulations depresses the return on capital, causing a decline in the price of the regulated firm's capital unless the firm can somehow compensate either by reducing its asset base or by increasing the riskiness of its portfolio. Because nonbank competitors are not similarly regulated, a move to shrink assets will not necessarily increase the return on bank capital. Thus, in the absence of other forms of portfolio regulation, capital regulation may induce banks to take on more risk.

Much of bank portfolio regulation is crafted to prevent banks from responding to this incentive, but regulators are concerned that banks' off balance sheet activities may not be adequately covered. The current capital adequacy standards do not formally account for banks' off balance sheet exposure. Consequently, when faced with capital-related limitations on asset growth, banks may have an incentive to shift risk-taking toward SLCs and other off balance sheet activities that do not "use up" capital.

In sum, the growth in banks' SLC issuance is a reflection of an increased demand for financial guarantees both as a result of increased reliance on direct finance as a source of funds and as a result of an increase in overall risk. Banks have been willing to respond to this demand by issuing SLCs because they enjoy certain cost advantages in doing so and because regulatory constraints on their lending activities make the issuance of SLCs more attractive. The next section presents a framework for analyzing the impact of SLC growth on bank risk, as well as an evaluation of the available evidence.

II. The Risk of Standby Letters of Credit
With the deregulation of many aspects of the
banking business, banks have received expanded
opportunities for risk-taking. Regulators worry that
increasingly risky bank practices could bankrupt the
deposit insurance system, which underwrites at
least a portion of any increase in bank risk. If banks
did not have deposit insurance or if that insurance
were priced correctly, the cost of bank liabilities and
the price of shareholder equity would fully reflect
any increase in bank risk. However, since all banks
currently are charged the same premium for deposit
insurance regardless of riskiness, and since bank
regulators apparently have been reluctant to close
large, troubled banks, at least large banks have an

incentive to undertake more risk than they otherwise
would. 8
Consequently, bank regulators have attempted to
reduce banks' opportunities (if not incentives) for
risk-taking by adopting more stringent capital
requirements for the industry. However, because
such regulation may induce banks to try to take on
more risk, bank regulators worry that the rapid
growth in SLCs outstanding in recent years may be
increasing overall bank risk, particularly since SLCs
now equal 100 percent of aggregate bank capital.
(See Chart 2.) Moreover, for the 25 largest banks,
the average ratio of SLCs to capital is even higher: 165.4 percent. As a result, each of the three federal


bank regulatory agencies (FRB, OCC, and FDIC, the Federal Deposit Insurance Corporation) recently
proposed that the current capital adequacy regulation be supplemented by risk-based capital
guidelines that would explicitly take into account
the relative riskiness of broad categories of bank
assets and certain off balance sheet items, including
SLCs.9
Ideally, risk-based measures of capital adequacy
ought to reflect the effect of a bank's SLC exposure
on overall risk, taking into account the extent to
which SLC risk is correlated with other risk
exposures. Unfortunately, such a measure is difficult
to develop given currently available data and book-value accounting conventions. Neither can the markets for bank debt and equity provide more than an
approximation for this measure since the existence
of deposit insurance causes these markets to underprice bank risk. As a result, bank regulators can
develop only crude measures of SLC risk based on a
comparison with the riskiness of banks' loan portfolios.
Loans are the logical "benchmark" for rating the
riskiness of SLCs because both instruments involve
credit risk. At the same time, however, a comparison
of the two is impeded by some of the differences in
their risk characteristics. For example, unlike loans,
SLCs generally do not entail interest rate risk and
liquidity risk. If the issuing bank must advance
funds under the terms of the SLC contract, the
interest rate on the resulting loan to the account
party typically varies with market rates (plus some
mark-up). Moreover, because SLCs generally do not
require a commitment of the bank's funds, the risk
of loss associated with meeting related cash flow
obligations is very small. On the other hand,
because SLCs are not funded, they provide the
opportunity for a much higher degree of leverage
risk than is the case for loans.

[Chart 2: Standby Letters of Credit Outstanding as a Percent of Capital, 1976-1985]

An Options Framework

Options theory can be used to evaluate the relative riskiness of loans and SLCs. However, because the development of an econometric model to evaluate these two instruments is beyond the scope of this paper (and the available data), the discussion that follows is intended only to suggest how this framework might be useful to regulators.

Virtually any financial instrument can be modelled as an option or a series of options. In this case, because the borrower/account party can default on its obligation to the bank, a loan and an SLC both implicitly contain a put option on the assets of the borrower/account party. In other words, the borrower (or the account party) has the right to sell ("put") its assets to the bank at an exercise price equal to the par value of its obligation to the bank. This option will be exercised if the par value of the obligation exceeds the market value of the underlying assets securing the obligation.10

Several factors determine the risk of exercise and, hence, the value of this option. First, the option's value increases with increases in the exercise price, other things equal. As the par value of the loan or SLC obligation increases, so does the bank's risk. Second, the value of this put option varies inversely with the value of the underlying assets. As the value of the underlying assets securing the obligation falls, the cost of exercising the option also falls, increasing the bank's risk. Finally, the option's value rises with increases in the riskiness of those assets (that is, the variance of their price). The greater the chance that the value of the underlying assets will fall substantially, the greater is the risk to the bank.11

A comparison of the risk associated with SLCs and loans, then, requires an evaluation of all these dimensions of the two portfolios. Moreover, an evaluation of the impact of SLC risk on bank risk also requires an understanding of the extent to which the returns on the two portfolios are correlated. Unfortunately, data on these aspects of banks' SLC and loan portfolios are not available.
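The three comparative statics above can be checked numerically. The sketch below prices the implicit default put with the Black-Scholes formula (the Black and Scholes result is cited in footnote 11); the particular parameter values are mine and purely illustrative:

```python
# Black-Scholes value of a European put: a stand-in for the default
# option the borrower/account party implicitly holds against the bank.
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def put_value(assets, par, sigma, r=0.05, term=1.0):
    """Value of a put on `assets` struck at `par` (the obligation's par value)."""
    d1 = (log(assets / par) + (r + 0.5 * sigma ** 2) * term) / (sigma * sqrt(term))
    d2 = d1 - sigma * sqrt(term)
    return par * exp(-r * term) * norm_cdf(-d2) - assets * norm_cdf(-d1)

base = put_value(assets=100.0, par=100.0, sigma=0.25)

# 1. Raising the par value (exercise price) raises the option's value.
# 2. Raising the value of the underlying assets lowers it.
# 3. Raising the riskiness (price variance) of the assets raises it.
print(base)
print(put_value(100.0, 110.0, 0.25))   # higher par value  -> more valuable
print(put_value(110.0, 100.0, 0.25))   # richer borrower   -> less valuable
print(put_value(100.0, 100.0, 0.40))   # riskier borrower  -> more valuable
```

Each of the last three lines confirms one of the three factors in the text: the bank, being short this put, bears more risk as the obligation grows, as the underlying assets shrink, or as their variance rises.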
Nonetheless, it still is possible to use an options
framework at least to suggest how banks' SLC
issuance may be affecting bank risk. To do so,
assume that the characteristics of banks' loan and
SLC portfolios that are most under management
control are identical. In other words, for every given
SLC there is also a loan to the same customer with
the same term-to-maturity and par value. The essential difference between these two portfolios, then,
lies in the relative strength of their collateral
arrangements. The loans, for the most part, are
formally secured by the borrowers' assets, while the
SLCs are not.
In an options framework, this difference amounts
to a difference in the relative costs of exercising the
put options contained in the loan and SLC portfolios. Because the cost of exercising the SLC-related options is lower, other things equal, the
likelihood that they will be exercised is greater,
making the SLC portfolio riskier than the loan
portfolio. Moreover, this lower cost of exercise
means that the value of the SLC portfolio is more
sensitive to changes in the variance of the prices of
the underlying assets (that is, changes in the financial condition of the banks' customers). For this
reason as well, the SLC portfolio is riskier.
In practice, of course, banks' SLC and loan portfolios are not identical. Thus, while SLCs may be
riskier than loans in this one respect, banks probably manage the other aspects of the two portfolios in
a manner that mitigates some of the greater risk
arising from differences in the contractual terms of
the loan and SLC instruments. Specifically, the
creditworthiness of banks' loan and SLC customers
may be very different. Bankers have indicated that,
as a matter of policy, they try to reject SLC business
from customers for whom default is even a remote
possibility. This is in admitted contrast to lending
policy, where the standards are somewhat more
relaxed. 12 (For a discussion of the other ways banks
manage SLC risk, see the Appendix.)

Evidence

The rather limited data on fees and loss experience suggest that banks do, in fact, manage the risk of the two portfolios differently. First, banks' SLC fees apparently are lower than the implicit fees they charge on loans. The fees for SLCs for short-term, high quality credits range from 25 to 50 basis points and from 125 to 150 basis points or more for longer term and/or lower quality credits.13 By contrast, the implicit loan premium for large denomination, variable rate loans is approximately 240 basis points for both short- and longer term credits.14 This disparity in the fee structures of the two portfolios suggests that the creditworthiness of banks' SLC customers is higher than that of its loan customers.

This evidence on the relative riskiness of SLC and loan portfolios should be interpreted cautiously, however. Fees do not provide a measure of the expected return on equity. After netting out the higher administrative and other expenses associated with loans, it is likely that the expected return on and the risk of SLCs is at least as high as that for loans.

Similarly, the available evidence on the loss experience of loans and SLCs provides some evidence that the creditworthiness of banks' loan and SLC customers is different. Of course, loss experience technically does not measure credit risk because it is an ex post measure; however, there should be some correlation over time between risk and observed losses.

Data on SLC losses were last collected in 1978, when a special survey on SLCs was conducted by the staff of the Board of Governors.15 That survey found that the initial default rate on SLCs averaged 2.03 percent. But because more than 98 percent was recovered, the loss rate on SLCs was extremely low - only 0.03 percent. This low figure compares very favorably to banks' loan loss rate of 0.16 percent in 1979. According to bankers in the Twelfth Federal Reserve District, the loss rate on SLCs has increased somewhat since then, but, compared to loan losses now hovering around 0.65 percent, losses on SLCs still are very low.16 Once again, however, this evidence should not be interpreted as proof that the risk to bank capital from banks' SLC exposure is less than that from loans.

Finally, evidence from capital markets may provide some insights into the riskiness of banks' SLC portfolios. Of course, this evidence may be biased since prices will reflect the value of any perceived deposit insurance subsidy. Nonetheless,

as long as investors believe that they are not fully protected against loss, they will respond to perceived increases in bank risk by demanding a higher risk premium. Consequently, an evaluation of the market's reaction to the growth in SLCs outstanding over time should indicate whether bank risk also has increased.

In a study of the determinants of large banks' CD rates, Goldberg and Lloyd-Davies found that the market had not penalized banks for increasing SLC exposure between 1976 and early 1982.17 Their model explains the level of the CD rate as a function of the general level of interest rates and of various bank risk characteristics. The effect of banks' SLC exposure on CD rates is treated as having two components: a leverage risk effect (the ratio of bank capital to risky assets, including loans and SLCs) and a credit quality effect (the ratio of SLCs to risky assets - to allow for differences in the credit quality of the loan and SLC portfolios). Based on this model, they found that CD rates rose with increasing leverage and fell with increases in SLCs as a proportion of total risky assets. Since these two factors tended to cancel each other, the net effect on bank risk of an increase in banks' SLC exposure apparently was negligible.

Such a result is perhaps not surprising for two reasons. First, the level of SLCs outstanding was low in relation to other risky assets and to capital for most of this period. Thus, the effects of rapid SLC growth (in percentage terms) may have been swamped by larger (absolute) increases in loan volume. Second, the regression covers a period when bank capital ratios generally were falling. Because banks were not constrained by capital regulation (at least not until the end of this period), they may have had less incentive to increase overall risk through SLC issuance. Moreover, it is significant that Goldberg and Lloyd-Davies found that, despite higher credit quality, increasing SLC exposure did not reduce bank risk.

III. Regulating Standby Letters of Credit

Currently, bank regulators place only rather limited restrictions on banks' SLC activities. They require only that banks (1) include SLCs with loans for the purposes of calculating loan concentrations to any one borrower (the limit is 10 percent of capital) and (2) apply the same credit evaluation standards for SLCs as for loans. However, because of the greater riskiness of the SLC instrument as well as the greater potential for capital leverage with SLCs than with loans, some form of capital-related regulation of SLCs may be justified.

Capital adequacy regulation with respect to SLC exposure ought to do two things. First, from a bookkeeping perspective, it should ensure that institutions that are likely to experience larger losses also have a larger capital buffer to absorb those losses. Second, ideally, it should provide a structure that penalizes banks for attempting to increase overall risk through increases in SLC risk or leverage.

Accordingly, one can evaluate the risk-based capital adequacy concept that is under consideration at the federal bank regulatory agencies. Under this approach, SLCs outstanding would be added to assets for the purpose of calculating a new, risk-based capital ratio. Moreover, because it is thought that at least certain types of SLCs may entail less risk than loans, those SLCs would be accorded a lower weight in the calculation of that ratio. For example, the FRB's proposed guidelines assign a weight of 1.0 to most types of SLCs, but a weight of only 0.6 to a few types, such as performance-related SLCs.

The advantage of this basic approach is that it is easy to administer. Also, it provides a means of ensuring that as banks' SLC exposure grows, so too will their capital buffer. The disadvantage is that it treats all SLC portfolios (and all loan portfolios, for that matter) as having the same level of credit risk. Clearly, this approach will impose a higher capital cost on the banks that have higher quality SLC portfolios than is the case for banks with lower quality portfolios. As a result, the former may have an incentive to compensate for this implicit penalty by taking on more credit risk in their SLC portfolios.

To overcome this problem, the regulators could, in theory, adopt a more sophisticated measure of SLC risk along the lines of the options model outlined in this paper. Such a measure would enable regulators to take variations in the credit quality of individual portfolios into account when assigning risk weights. However, it would be difficult to administer since considerably more data on the characteristics of individual portfolios would be needed. Instead, the regulators have chosen simply to recognize the inherent weaknesses in any capital adequacy ratio and to emphasize that such ratios - even those that attempt to adjust for risk - are meant only to supplement the bank examiner's judgement. Ultimately, they argue, the bank examiner must decide whether an institution's capital is adequate based on such qualitative considerations as the quality of earnings and management and overall asset quality as measured by the level and severity of examiner-classified assets.

Conclusion

Bank regulators are concerned that the rapid growth in SLCs outstanding over the last several years is an indication that banks are attempting to take on more risk, in part, as a result of increasingly stringent capital regulation. This paper has suggested that while capital regulation may have played a modest role in the growth of SLCs, the primary reason for such growth has been an increase in the demand for financial guarantees generally. Whether this growth has increased bank risk is still open to question.

In some respects, SLCs are (potentially at least) more risky than loans, but the available evidence suggests that banks may be applying higher credit evaluation standards for SLCs than for loans to compensate for the riskier features of the SLC instrument. At the same time, however, this paper has suggested that it would be a mistake to infer from this evidence that SLCs necessarily pose less risk to capital than do loans. It is hard to believe that, with the implicit subsidy to risk-taking provided by the deposit insurance system, banks actually would conduct their SLC business in a manner that entails less risk than lending.

APPENDIX

Banks seek to manage SLC risk in several ways.
First, through the fees they charge, banks require
compensation in proportion to the risks they
assume. Consequently, SLC fees vary with the term
of the SLC and the credit rating of the account party.
For short-term, high-quality credits, fees currently
range from 25 to 50 basis points on the outstanding
amount, while fees on longer term and lower quality
credits range from 125 to 150 basis points or more.
Second, banks attempt to reduce credit risk on
longer term commitments by requiring periodic
(usually annual) renegotiation of the terms of the
agreement. For example, SLCs backing the commercial paper of nuclear fuel trusts typically have a
three-to four-year term, but are renewable each year
at the bank's option. This arrangement helps protect
the bank against deterioration in the creditworthiness of the account party over the term of the
SLC.
However, such arrangements are not always adequate. One large bank that issues SLCs to back
industrial development bonds analyzes its risk
exposure in terms of the life of the bonds (usually 20
years). It has chosen this measure instead of the life
of the SLC (typically five years) because at the

expiration of the SLC, if the account party's financial condition has deteriorated such that it cannot
obtain another SLC, the bondholders can declare
the borrower in default under the terms of the bond
indenture and thus require the bank to cover any
losses. * In this case, the shorter term of the SLC
does not necessarily limit the bank's exposure. Likewise, a bank may be liable for the repayment of
commercial paper debt if it is unwilling to renew its
SLC since the bank's unwillingness most likely
would result in the account party's inability to
refund its debt.
Third, although SLCs frequently are unsecured,
the terms of the bank's contract with the account
party provide another measure of protection against
loss. Typically, the bank's agreement with the
account party stipulates that the bank may: 1)
require the account party to deposit funds to cover
any anticipated disbursements the bank must make
under the SLC, 2) debit the account party's account
to cover disbursements, 3) call for collateral during
the term of the SLC, and 4) book any unreimbursed
balance as a loan at an interest rate and on terms set
by the bank. ** In the event of the account party's
bankruptcy, such conditions, of course, do not pro-


tect the bank against loss in the same way that a
formal collateral agreement would. Under most
circumstances, however, they do provide sufficient
incentive for the account party to satisfy the terms of
the underlying contract.
A fourth way that banks can manage the credit
risk involved in SLC issuance is through portfolio
diversification. (This approach, of course, cannot
reduce systematic risk.) Banks that specialize in
issuing certain types of SLCs - backing commercial paper issued by nuclear fuel trusts, for example
- still can diversify by buying and selling participations in SLCs. By selling a participation in an SLC it
has issued, a bank in effect reinsures some of the
risk. If payment must be made to the beneficiary and
the account party is unable to make reimbursement,
the issuing bank and the bank that purchased a share
of the SLC will share in the resulting losses. Under a
participation arrangement, the issuing bank will be
liable for the full amount of the SLC only if the
participating bank were to fail. Participations of
SLCs accounted for 11 percent of the $149.2 billion
in SLCs outstanding as of March 1985.
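The pro rata loss sharing under a participation can be sketched as follows. The dollar amounts, recovery, and participation share below are hypothetical illustrations, not figures from the survey data in the text:

```python
# Hypothetical loss sharing on a participated SLC. If the beneficiary
# draws and the account party cannot reimburse, the issuing bank and
# the participating bank bear the loss pro rata.

slc_amount = 10.0          # face amount drawn, $ millions (hypothetical)
participation_sold = 0.40  # share sold to the participating bank (hypothetical)
recovery = 2.0             # eventual recovery from the account party (hypothetical)

loss = slc_amount - recovery
issuer_loss = loss * (1.0 - participation_sold)
participant_loss = loss * participation_sold

print(f"total loss {loss:.1f}: issuer bears {issuer_loss:.1f}, "
      f"participant bears {participant_loss:.1f}")

# Only if the participating bank itself failed would the issuer
# again be liable for the full amount:
issuer_loss_if_participant_fails = loss
```

Selling the participation thus works like reinsurance: the issuer's worst case shrinks from the full loss to its retained share, except in the event the participant fails.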

Finally, in response to growing regulatory concern over banks' SLC exposure, banks are beginning
to manage risk by placing limitations on SLC
growth. A number of large banks have established
some multiple of capital (for example, 150 percent)
as a limit on the amount of their SLCs outstanding.
In addition to administratively imposed limitations,
the commercial paper market tends to limit SLC
growth as well. Since SLC-backed commercial
paper trades as an obligation of the SLC issuer,
excessive SLC issuance will reduce the value of the
issuing bank's guarantee as well as the price of its
own commercial paper.
* Based on information from an informal survey of
large banks in the Twelfth Federal Reserve District
conducted in August 1985.
**See Lloyd-Davies' article on standby letters of
credit in Below the Bottom Line, a staff study of the
Board of Governors of the Federal Reserve System,
January 1982, for a more detailed discussion of the
contractual terms of the LC agreement.

FOOTNOTES
1. Historically, banking laws have prohibited banks from offering financial and performance guarantees in order to preserve the traditional separation between banking and commerce in this country. Standby letters of credit (and commercial letters of credit, for that matter) are not technically guarantees, however, since the issuing bank's obligation under an SLC is to advance funds upon presentation of certain documents regardless of whether the underlying contract between the beneficiary and the account party has been performed to both parties' satisfaction.

2. Senior Loan Officer Opinion Survey conducted by the Federal Reserve System in August 1985.

3. Insurers traditionally have issued surety bonds which are, technically, performance guarantees. Lately, they have become active issuers of financial guarantees. Revenue from these two lines of business are reported together as revenues from surety operations.

4. Eric Gelman, et al., "Insurance: Now It's a Risky Business," Newsweek, November 4, 1985.

5. Senior Loan Officer Opinion Survey, August 1985.

6. Senior Loan Officer Opinion Survey, August 1985.

7. This estimate is based on the opportunity cost, at current interest rates, of the 3 percent marginal reserve requirement on large CDs.

8. For a more detailed discussion of the deposit insurance system and the risk-taking incentives it creates, please see the articles by Barbara Bennett and David Pyle in the Spring 1984 issue of the Federal Reserve Bank of San Francisco's Economic Review.

9. The Federal Reserve Board's proposed rules on risk-based capital guidelines were set forth in Federal Register, January 31, 1986, p. 3976. The comment period for this proposal extends until April 25, 1986.

10. For unsecured debt and SLCs, the relevant price is the value of the bank's prorated share of the firm's assets in a bankruptcy proceeding.

11. Black and Scholes have shown that an option's value is determined by the riskiness of the underlying asset (that is, variance of return on the asset), the option's term to maturity, and the level of the risk-free interest rate, as well as the level of the exercise price and the market value of the underlying asset.

12. Based on information from an informal survey of large banks in the Twelfth Federal Reserve District conducted in August 1985.

13. Ibid.

14. Survey of Terms of Lending at Commercial Banks, May 1985, conducted by the Federal Reserve System.

15. Peter Lloyd-Davies, "Survey of Standby Letters of Credit," Federal Reserve Bulletin, December 1979, pp. 716-719.

16. August 1985 survey of large 12th District banks.

17. Michael Goldberg and Peter Lloyd-Davies, "Standby Letters of Credit: Are Banks Overextending Themselves?," Journal of Bank Research, Spring 1985, pp. 28-39.


REFERENCES
Board of Governors of the Federal Reserve System, "Senior Loan Officer Opinion Survey on Bank Lending
Practices," August 1985.
Brenner, Lynn. "Booming Financial Guarantees Market
Generates Profits and Some Questions," American
Banker, June 24, 1985.
----------. "The Illusory World of Guarantees," American
Banker, June 25, 1985.
----------. "Regulators Worry About Guarantees," American
Banker, June 26, 1985.
----------. "How Much Risk is Too Much?," American
Banker, June 28, 1985.
Comptroller of the Currency, Federal Deposit Insurance
Corporation and Federal Reserve Board, Joint News
Release, January 15, 1986.
Copeland, Thomas E. and J. Fred Weston. Financial Theory and Corporate Policy. Reading: Addison-Wesley
Publishing Co., 2nd Edition, 1983.
"Draft of Fed Proposed Rules on Risk-Based Capital
Guidelines," Washington Financial Reports, January
20, 1986.

Forbes, Daniel, "Financial Guarantees: Providing New
Hope to Insurers," Risk Management, October 1984.
Gelman, Eric, et al "Insurance: Now It's a Risky Business,"
Newsweek, November 4,1985.
Goldberg, Michael and Peter Lloyd-Davies. "Standby Letters of Credit: Are Banks Overextending Themselves?," Journal of Bank Research, Spring 1985.
Judd, John. "Competition Between the Commercial Paper
Market and Commercial Banks," Economic Review,
Federal Reserve Bank of San Francisco, Winter 1979.
Lloyd-Davies, Peter. "Standby Letters of Credit of Commercial Banks" in Below the Bottom Line, a staff study
of the Board of Governors of the Federal Reserve
System, January 1982.
----------. "Survey of Standby Letters of Credit," Federal
Reserve Bulletin, December 1979.
Lyons, Lois J. "Surety Industry At a Low Point," National
Underwriter, May 17, 1985.
Verkuil, Paul R. "Bank Solvency and Guaranty Letters of
Credit," Stanford Law Review, May 1973.


John Pippenger*
The theory of Purchasing Power Parity was the first well-developed
theory of exchange rate determination. Although the efficient market
approach is an important theoretical advance over the conventional
arbitrage interpretation of purchasing power parity, many of the empirical implications of the two approaches are similar. As a result, at this
time, the empirical evidence supports both views.
The adoption of more flexible exchange rates in
the early 1970s spurred both theoretical and empirical research on purchasing power parity (PPP). The
theoretical work refined existing ideas about the
theory and led to a new version of PPP based on
efficient commodity markets. The empirical
research created an impressive body of evidence.
This article reviews the theory behind two major
approaches to purchasing power parity, the arbitrage
and efficient markets approaches, and discusses the
evidence relevant to each.
The arbitrage approach is discussed first. In
spite of a widespread belief that arbitrage has

failed, particularly during the current float, the
evidence provides substantial support for an
arbitrage interpretation of purchasing power parity. The efficient commodity market approach to
purchasing power parity initially proposed by
Richard Roll (1979) is the newest version of PPP,
and it is discussed more thoroughly. Although the
efficient market approach is an important theoretical advance over the conventional arbitrage interpretation of purchasing power parity, many of the
empirical implications of the two approaches are
similar. As a result, at this time, the empirical
evidence supports both views.

I. Arbitrage
Theory
The arbitrage version of purchasing power parity was the first well-developed theory of the
determination of exchange rates. Although the
roots of the theory go back at least to the period
when gold from the New World began to influence
prices in Europe, Gustav Cassel (1916) is gener-

ally credited with the first formal statement of the
theory. The name, purchasing power parity, comes
from Cassel's basic idea that exchange rates
should, in time, adjust so that a given amount of
currency buys the same bundle of goods in all
countries. In other words, exchange rates tend to
settle at the point where the purchasing power of a
currency is the same, or at parity, in all countries.1
As an example, start with a single commodity. It
might be a quart of milk, a Sony Walkman®, a
gallon of gasoline or a bushel of number 2 red
wheat. Ignoring information and transaction
costs, with effective arbitrage, the cost of buying

*Professor, University of California, Santa Barbara,
and Visiting Scholar, Federal Reserve Bank of San
Francisco. I have received helpful comments from
several members of the FRBSF. I am also indebted
to Nurhan Davutyan and John Mussachia.

the good in the United States at time t, p(H,t),
should equal the cost of the good in Great Britain
at time t, p(F,t), converted to dollars using the
dollar price of the pound at time t, S(t). That is,
p(H,t) should equal S(t)p(F,t). This is commonly
referred to as the law of one price. The law of one
price implies that the domestic price of foreign
exchange S(t) equals the domestic price of the
product p(H,t) divided by the foreign price p(F,t).
If the product were wheat and the countries the
United States and Great Britain, then the dollar
price of pound sterling should equal the dollar
price of wheat divided by the pound price of
wheat.
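With hypothetical prices, the law of one price pins down the implied exchange rate as a simple ratio; a minimal sketch:

```python
# Law of one price, ignoring information and transaction costs:
# p(H,t) = S(t) * p(F,t), so the implied dollar price of the pound
# is the ratio of the two commodity prices. Prices are hypothetical.
p_home = 4.50     # dollar price of a bushel of wheat in the U.S.
p_foreign = 3.00  # pound price of the same bushel in the U.K.

implied_rate = p_home / p_foreign  # dollars per pound
print(implied_rate)
```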
The arbitrage interpretation of purchasing
power parity rests on a weaker version of the law
of one price that does not require zero information
and transaction costs. For some goods, p(H,t)
may be less than S(t)p(F,t). For actual or potential
exports by the United States, the price differential
would reflect the information and transaction
costs associated with shipping goods to Great
Britain. For actual or potential imports, the excess
of p(H,t) over S(t)p(F,t) reflects the cost of moving
the goods from the U.K. to the U.S. If the information and transaction costs are roughly the same
in both directions, then the price in the U.S. of a
broadly based bundle of goods, P(H,t), should
tend to equal the price of that bundle in the U.K.,
P(F,t), converted into dollars at the going
exchange rate, S(t). If there are goods for which
the information and transaction costs exclude any
possibility of international trade, then this version
of PPP implicitly assumes that there is no systematic difference in their relative prices between
any two countries.
S(t) = P(H,t)/P(F,t)    (1)

bundles of goods in different countries is difficult
to locate. Third, for many purposes, it is the
change in exchange rates that is important, not the
level.
For these reasons, almost all empirical work on
PPP has concentrated on the relative version of the
theory, which explains changes in the exchange
rate. Let S(0) be the exchange rate in some base
period, and P(H,0) and P(F,0) be the domestic and
foreign price of the broadly based bundle of goods
in the base period. The relative version of PPP
says that the change in the exchange rate from the
base period to some later period t equals the
relative change in the price of the bundle of goods
in the two countries.

S(t)/S(0) = [P(H,t)/P(F,t)] / [P(H,0)/P(F,0)]    (2)

The right hand side of this equation can be
rearranged into a more familiar form: a ratio of
price indices. With a little manipulation, the right
hand side of equation 2 becomes [P(H,t)/P(H,0)]/[P(F,t)/P(F,0)]. The numerator of this
ratio is simply a price index for the United States,
P^H, and the denominator a price index for the
foreign country, P^F. Both indices have the same
base period and use identical weights. Equation 3
uses these price indices to describe the relative
version of purchasing power parity.2
S(t)/S(0) = P^H/P^F    (3)

Most empirical research on PPP involves regressing the log of the ratio of exchange rates on the log
of a ratio of price indices:

ln(S) = α + β ln(P/P*) + z    (4)

where z is an error term; ln(x) is the natural log of x;
S equals S(t)/S(0); P is a domestic price index; P* a
foreign price index; and the price indexes usually
are consumer or wholesale indexes, or GNP deflators not based on identical bundles of goods.3 The
usual interpretation of equation 4 is that it supports
PPP when estimates of α are not different from zero,
estimates of β are not significantly different from
one, and the R² is high.4
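Equation 4 can be illustrated with a small simulation: generate data in which relative PPP holds up to a small error, then run the regression. This is only a sketch with made-up numbers, not a replication of any cited study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data in which relative PPP holds up to a small error z,
# so equation 4 should return alpha near 0 and beta near 1.
log_price_ratio = np.cumsum(rng.normal(0.01, 0.02, 200))  # ln(P/P*)
log_s = log_price_ratio + rng.normal(0.0, 0.005, 200)     # ln(S)

# OLS estimates of equation 4 (polyfit returns slope first).
beta, alpha = np.polyfit(log_price_ratio, log_s, 1)
r_squared = np.corrcoef(log_price_ratio, log_s)[0, 1] ** 2
print(alpha, beta, r_squared)
```

Under the usual criteria, these simulated estimates support PPP by construction; the interesting cases discussed below are the ways real data can fail this test even when the theory holds.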

Equation 1 describes absolute purchasing
power parity. That is, it describes the relation
between the level of exchange rates and relative
price levels. This version of the theory is not
widely used for at least three reasons. First, in
spite of relatively little research, there is a general
consensus that it is not very accurate. Second,
while price indices are easy to find for almost all
countries, information about the price of identical

Evidence
Most of the evidence concerning the arbitrage
version of purchasing power parity has come
either from estimating equations like 4 or analyzing the behavior of real exchange rates (which are
actual exchange rates divided by the rates implied
by PPP). This section concentrates on regression
results. The behavior of real exchange rates is
covered in the section dealing with the evidence
for efficient commodity markets. For an extensive
review of the results of regression analysis, see
Officer (1976). Dornbusch (1985) provides a
briefer review that covers most of the relevant
research through 1984.
The general consensus on this empirical
research is that, while regression results may
provide some support for PPP during the 1920s,
they provide almost no support for the theory
during the 1970s.5 However, this conclusion is too
negative for two reasons. First, recent evidence
not available to Officer or Dornbusch supports
PPP. Second, in many cases, the rejection of PPP
is based on a misinterpretation of the regression
results.
As an example of some of the evidence not
available to Officer or Dornbusch, Mark Rush and
Steven Husted (1985) report long-run support for
PPP between the U.S. and several countries. For
other combinations of countries, their results are
mixed. In addition, Craig Hakkio (1984) combines time series and cross section analysis, and
obtains results that provide strong support for PPP.
Although Tahmoures Parsai's (1982) research
indicates that other factors influence exchange
rates, his estimates of the relationship between
price levels and exchange rates also support PPP
and are not sensitive to the inclusion of other
variables. As Paul Krugman (1978) points out, "...
one must be cautious in determining the extent of
and the reasons for failure of PPP to hold, for the
world has laid statistical traps for the unwary."
The following sections use the arbitrage
approach to PPP to examine why PPP might
appear to fail and to show how these apparent
failures can be statistical traps. They also review
the evidence concerning the relative importance
of the various sources for failure. The last section

provides some examples of how regressions can
be misinterpreted.
Different Weights
From an arbitrage point of view, the weights in
price indices must be the same. Using consumer
or wholesale indices or GNP deflators violates
this requirement. The following example illustrates the problem. Suppose the United States
produces only wheat and Great Britain produces
only cloth. Some real shock causes the price of
wheat to rise ten percent in both countries and the
price of cloth to fall ten percent. If the law of one
price holds, then PPP holds and the exchange rate
should not change.
But consider what happens if one tests PPP
using equation 4 and GNP deflators. The GNP
deflator in the U. S. rises ten percent because it
contains only wheat. The GNP deflator for the
U.K. falls ten percent because it contains only
cloth. The exchange rate is constant, but the ratio
of the price indices rises. Because the indices do
not have identical weights, estimates of equation 4
can reject purchasing power parity even though
the theory holds exactly.
From an arbitrage perspective, different
weights introduce a form of measurement error
into relative price levels. As an example, suppose
the variance in the ratio of price indices, σ²,
comes from two independent sources: pure monetary shocks for which PPP holds exactly, σ²_M, and
movements in the ratio of price indices that come
from changes in relative prices with unequal
weights, σ²_W.

σ² = σ²_M + σ²_W

Under these conditions, ordinary least squares
yields the following estimate for β:

plim β̂ = 1.0 / (1.0 + σ²_W/σ²_M)    (5)

As inflationary shocks dominate measurement
error, estimates of β and the R² approach unity.
But as monetary shocks decline relative to the
measurement error, R² declines and estimates of β
approach zero even though PPP in the form of
equation 3 holds exactly regardless of the relative


sources for bias appear to be more important for two
reasons. If the conventional arbitrage version of
PPP were correct and simultaneous equations bias
per se were the problem with the regressions, then
the real exchange rate would not behave as though it
were very close to a random walk. In addition, when
the test equation for PPP is reformulated so as to
reduce the bias from these other sources, two stage
and ordinary least squares yield essentially the same
results.7

importance of monetary shocks and measurement
error.
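The attenuation described by equation 5 shows up readily in a simulation; all parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

def estimated_beta(sigma_m, sigma_w):
    # Monetary shocks, for which PPP holds exactly, move both the
    # exchange rate and the measured ratio of price indices ...
    monetary = rng.normal(0.0, sigma_m, n)
    # ... while relative-price movements with unequal index weights
    # enter only the measured price ratio, acting as measurement error.
    weights = rng.normal(0.0, sigma_w, n)
    log_s = monetary
    log_p_ratio = monetary + weights
    return np.polyfit(log_p_ratio, log_s, 1)[0]

# plim beta-hat = 1 / (1 + sigma_w^2 / sigma_m^2), as in equation 5.
print(estimated_beta(1.0, 0.1))  # monetary shocks dominate: near 1
print(estimated_beta(0.1, 1.0))  # measurement error dominates: near 0
```

PPP holds exactly for the monetary component in both runs; only the mix of shocks changes, yet the estimated β swings from nearly one to nearly zero.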
For a number of years, the Federal Statistical
Office of Germany has used identical bundles of
goods to calculate absolute purchasing power parities for several countries.6 John Mussachia (1984)
compares the results of testing PPP with this data
and conventional price indices. The results suggest that, except perhaps for very stable relative
price levels, different weights are not a major
source for the observed errors in purchasing power
parity.

Information and Transaction Costs
Tradables. In discussions of PPP, it is customary
to divide goods into two categories: tradables, for
which information and transaction costs as well as
other impediments are zero, and nontradables, for
which these impediments effectively prevent trade.
The assumption of no impediments for tradables is
analytically convenient, but not very accurate.
Transaction costs and tariffs introduce errors into
the law of one price even for widely traded goods
such as wheat and oil. Although these impediments
can introduce errors into PPP, the errors are
bounded. Once the pound price of wheat converted
into dollars at the going exchange rate exceeds the
dollar price of wheat by the cost of shipping wheat
plus any tariff, arbitrage presumably prevents the
next shock from widening that gap. (See Aizenman,
1984a and 1984b, for a detailed discussion of how
transaction costs introduce errors into PPP and how
these errors can bias the estimate of β toward zero.)
As a result, if the errors in PPP were primarily the
result of the effects of information and transaction
costs for tradables, then real exchange rates should
not behave like random walks.
Work by Richard Roll (1979), Michael Darby
(1980), John Pippenger (1982), and Michael Adler
and Bruce Lehman (1983) indicates that real
exchange rates behave randomly, which implies that
the predictive error in PPP is unbounded. Although
some new evidence presented below indicates that
the errors are bounded, the boundaries appear to be
very wide and/or very weak. The behavior of real
exchange rates, therefore, suggests that the errors in
purchasing power parity are not primarily due to the
effects of trade impediments on tradables.
Dynamics. Purchasing power parity is usually
viewed as primarily a theory of the long-run deter-

Simultaneity
Even if purchasing power parity held exactly and
there were no problems with price indices, tests of
equation 4 still could yield a low R² and estimates of
β close to zero. Under the arbitrage version of PPP,
neither price levels nor exchange rates are
exogenous variables. As a result, there is the possibility of bias due to simultaneous equations. Krugman (1978) provides a simple example of simultaneous equations bias in PPP. In his model, the
central bank attempts to stabilize the exchange rate
by expanding the domestic money supply as the
domestic price of foreign exchange falls. This stabilization policy biases the estimate of β toward
zero because it causes the error term in equation 4 to
be correlated with the ratio of price levels, violating
one of the assumptions of ordinary least squares
(OLS) regression.
Two stage least squares (2SLS) is the standard
way to deal with this problem. The first stage of
2SLS develops a proxy variable. If this variable is a
good proxy for the original explanatory variable,
e.g., P^H/P^F, and it is also independent of the error
term in the original regression, then substituting the
proxy for the original explanatory variable in the
second stage regression eliminates the correlation
with the error term and eliminates the bias.
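The two-stage logic can be sketched on simulated data. The feedback structure and instrument below are hypothetical, chosen only to exhibit the bias and its removal, not to reproduce Krugman's model:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

# Stylized simultaneity: the disturbance u feeds back into the
# regressor x, so OLS on y = x + u is inconsistent. The instrument
# z moves x but is independent of u.
z = rng.normal(0.0, 1.0, n)
u = rng.normal(0.0, 1.0, n)
x = z + 0.8 * u          # endogenous regressor, correlated with u
y = x + u                # true coefficient is 1.0

beta_ols = np.polyfit(x, y, 1)[0]       # inconsistent (here biased up)

# Stage 1: project x on the instrument. Stage 2: regress y on the
# projection, which is uncorrelated with u.
stage1 = np.polyfit(z, x, 1)
x_hat = stage1[0] * z + stage1[1]
beta_2sls = np.polyfit(x_hat, y, 1)[0]  # consistent

print(beta_ols, beta_2sls)
```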
Although OLS estimates of equation 4 are subject
to bias due to simultaneous equations, this bias does
not appear to be a major reason that regressions
often fail to support PPP. Measurement error due to
unequal weights and some of the other sources for
errors in PPP described below also introduce bias
and cause the error term in equation 4 to be correlated with the ratio of price levels. These other


Nontradables. As mentioned earlier in discussing PPP, it is convenient to divide goods into two
groups: tradables with no impediments and nontradables where transaction costs or trade restrictions
effectively prohibit trade. For tradables, the law of
one price holds and so does equation 3 as long as the
bundle contains only tradables. When price indices
contain nontradables, real shocks can cause PPP to
fail.
Take concrete as an example of a nontradable.
Suppose some shock raises the price of concrete in
the U.S. and lowers the price of concrete in the
U. K., but all other prices in both countries are
unchanged. With no change in the prices of traded
goods, the exchange rate is unchanged. But a price
index including concrete rises in the U.S. and falls
in the U.K. Purchasing power parity fails because
the change in relative prices between tradables and
nontradables is different in the two countries.
The distinction between the structure of the errors
for tradables and nontradables is important. If the
errors in PPP are due primarily to shocks that affect
tradables, then the errors are bounded. If the errors
are due primarily to changes in relative prices for
nontradables, no such restriction applies. A given
shock might raise the relative price of concrete in
the U. S., but the next shock might either accentuate
or offset the effect of the first shock.9
From an arbitrage perspective, changes in capital
flows, tastes or technology can introduce large
persistent errors into PPP by causing relative prices
between tradables and nontradables to change differently in different countries. This interpretation of
the effects of such shocks helps explain why it is so
difficult to find any empirical regularity between a
given type of shock and the error in PPP. Under
some circumstances a larger capital flow might
cause the relative price of concrete to rise in a
country; under others, the relative price might fall.
Changes in relative prices for nontradables not
only introduce errors into PPP, they also bias the
estimate of β toward zero. Suppose the variance in
the ratio of price indices is σ² and part of this
variance comes from purely monetary shocks, σ²_M,
for which PPP holds perfectly. In addition, there is
another element, σ²_R, that comes from real shocks.
If these different sources for the variance in the ratio
of price indices are uncorrelated, then σ² equals σ²_M

mination of exchange rates. Actual and parity rates
can diverge in the short-run, but in the long-run they
tend to converge.8 Almost every asset model of the
exchange rate implies this kind of behavior. Indeed,
many asset models assume PPP fails completely in
the short-run but holds exactly in the long-run.
A dynamic interpretation of PPP implies that
equation 4 is misspecified. In a dynamic framework, the current exchange rate depends on both
current and lagged relative price levels and, perhaps, lagged exchange rates. See Hodgson and
Phelps (1975) for an attempt to estimate a dynamic
version of equation 4.
If market forces tend to bring actual and parity
rates into equality in the long-run, then changes in
the deviation from PPP must be correlated. Suppose
the actual rate is above the rate implied by PPP. If
the error is random, then that gap is as likely to
increase as decrease. Any move above parity is as
likely to be followed by a further move away from as
a move toward parity, and the changes in the error
are uncorrelated. But if there are market forces at
work bringing actual and parity rates together, then
the gap is more likely to decrease than increase.
Beyond some point, any move above parity eventually is followed by a movement back toward parity,
and there is negative serial correlation in the
changes in the error. Since, as mentioned earlier, the
predictive errors for PPP behave almost like random
walks, a dynamic version of equation 4 does not
appear to be appropriate.
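This serial-correlation argument can be checked on simulated series: the changes of a random walk are uncorrelated, while the changes of a mean-reverting deviation are negatively correlated. The AR(1) coefficient below is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
shocks = rng.normal(0.0, 1.0, n)

# A pure random walk: every deviation from parity persists in full.
random_walk = np.cumsum(shocks)

# A mean-reverting deviation: market forces pull the rate back
# toward parity each period.
reverting = np.empty(n)
reverting[0] = shocks[0]
for t in range(1, n):
    reverting[t] = 0.8 * reverting[t - 1] + shocks[t]

def change_autocorr(series):
    # First-order autocorrelation of the period-to-period changes.
    d = np.diff(series)
    return np.corrcoef(d[:-1], d[1:])[0, 1]

print(change_autocorr(random_walk))  # near zero
print(change_autocorr(reverting))    # negative
```

The near-zero autocorrelation of changes observed in actual real exchange rates thus looks like the first series, not the second.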
The evidence concerning the behavior of real
exchange rates raises serious questions about the
view that purchasing power parity is essentially a
long-run theory. Although there is evidence that real
rates do not behave exactly like random walks, the
deviation from a random walk is so slight that it does
not indicate any strong tendency for actual and
parity rates to converge in the long-run. Opponents
of PPP will be tempted to interpret this pattern as
evidence that the theory does not hold much better
in the long-run than in the short-run. However, the
efficient commodity market model of purchasing
power parity discussed below suggests a different
interpretation. From that perspective, the observed
behavior of real exchange rates suggests that commodity markets influence exchange rates in both the
long-run and short-run.


plus σ²_R, and a variation of equation 5, equation 5', describes the
estimate of β.10

wholesale, they tend to reflect posted prices rather
than the actual prices at which trade takes place.
When market prices for individual products such as
Malaysian rubber are used, the results provide more
support for arbitrage. See, for example, Liliane
Crouhy-Veyrac, Michel Crouhy and Jacques Melitz
(1980) and Aris Protopapadakis and Hans Stoll
(1984).
Another problem with this interpretation of the
errors is that almost everything is tradable. Concrete
is traded internationally and tourists get haircuts. If
almost everything is tradable, but the boundaries
generated by impediments are very wide and not
very rigid for many commodities, then the boundaries for real exchange rates could be quite wide and
not very rigid. In that case, real exchange rates
would behave like a random walk with wide and
flexible boundaries, which is consistent with evidence discussed later. Errors of this type would not
eliminate the kind of bias described in equation 5';
they would just make the problem more complex.
The efficient commodity market model discussed
below provides still another possible interpretation
of the observed errors in PPP. In that context,
efficient international speculation in commodities
in the absence of trade generates a random walk in
real exchange rates.

plim β̂ = 1.0 / (1.0 + σ²_R/σ²_M)    (5')

As inflationary shocks dominate real shocks, the R²
and estimate of β approach unity. As monetary
shocks disappear, the R² and estimate of β approach
zero even though PPP holds perfectly for monetary
shocks and real shocks have not increased. In other
words, under these conditions, regression results do
not depend on just the effectiveness of arbitrage and
PPP, they also depend on the degree of monetary
coordination in the two countries. On the one hand,
the real shocks can be relatively large, but if the
differences in the rates of inflation are also very
large, then the R² and β are close to unity. On the
other hand, even if the errors in PPP due to real
shocks are very small, a sufficient degree of monetary coordination can make the ratio σ²_R/σ²_M such
that the R² and β are not statistically different from
zero. As a result, PPP can appear to fail when the
errors are relatively small, and to succeed even
though the errors are relatively large.
Since the behavior of real exchange rates is very
close to a random walk, from an arbitrage perspective, the errors in purchasing power parity appear to
be dominated by changes in relative prices for
nontradables. Some shock raises the relative price
of haircuts or concrete in the United States, but not
in Great Britain. If the price of traded goods remains
constant, the U.S. price level rises relative to the
price level in the U.K., but the exchange rate does
not change. If the next shock is as likely to reinforce
as reverse the first, then the errors in PPP behave like
a random walk.
This interpretation of the error structure must be
taken as tentative for several reasons. First, direct
tests of the effectiveness of arbitrage for traded
goods suggest that the law of one price does not hold
as a reasonable approximation even for traded
goods. See, for example, Peter Isard (1977) and J.
David Richardson (1978). These results, however,
are suspect because they are based on subcategories
such as leather products in price indices in different
countries that do not refer to identical, or even very
similar, products. In addition, when the indices are

Examples
Paul De Grauwe, Marc Janssens and Hilde
Leliaert (1982), De Grauwe and Marc Rosiers
(1984) and Davutyan and Pippenger (1985) show
that the predictive errors for PPP tend to be relatively large when there are large differences in the
rates of inflation. If, as seems likely, these errors are
the result of changes in the relative prices for nontradables, then regression estimates of equation 4
will give the best results when, in terms of the
predictive error, PPP works the worst. The reason is
that, even though monetary instability tends to
increase σ²_R, it also makes σ²_R/σ²_M very small.11
The dependence of regression results on the
degree of monetary coordination not only leads to a
misinterpretation of the evidence, it also invites
specification search. Advocates of PPP can find
episodes where regressions appear to support the
theory, and those who oppose it can find situations
in which the same regressions appear to reject PPP.


because there is more variability in both exchange
rates and relative price levels during the later period.
In other words, σ²_M is larger in the second period.
The fact that the standard errors are identical in the
two periods means that the amount of variation in
the exchange rate that cannot be explained by PPP is
identical in the two cases, which indicates that σ²_R is
the same in both periods. PPP worked just as well in
the earlier period as in the later. The difference
between the two periods is primarily that σ²_R/σ²_M is
smaller in the second period because σ²_M is larger.
The bottom half of Table 1 shows Frenkel's results
for France in the 1920s and 1970s using two stage
least squares. Based on the estimates of R² and β,
the results for the 1920s appear to support purchasing power parity while those for the 1970s reject the
theory. The widespread belief that PPP worked in
the 1920s but failed in the 1970s is based on similar
results for a number of countries.12
However, if one interprets the standard errors of
the regression as an index of the effects of real
shocks, the evidence does not support the conclusion that purchasing power parity worked in the
1920s and failed in the 1970s. Indeed, those errors
suggest just the opposite. The standard error for
France in the 1920s is 0.054, but it falls to 0.029 in
the 1970s.13 The large R² and β during the 1920s is
simply a reflection of the fact that a very large
proportion of the variability in the exchange rate can

Estimates of equation 4 for the United States and
Canada during the 1970s and early 1980s in the first
half of Table 1 provide an example of the importance
of relative monetary stability, and illustrate how
specification search can influence regression
results. An examination of the regression errors for
France from the 1920s and 1970s illustrates why it is
incorrect to conclude that PPP worked during the
1920s but failed during the 1970s.
The first half of Table 1 shows estimates of
equation 4 using monthly data from January 1972 to
December 1977, and January 1978 to February
1984. During the first period, price levels in the two
countries moved together very closely. Wholesale
prices in Canada rose only five percent more than in
the United States. For that period, both β and the R²
are effectively zero. During the later period, the
Canadian price level rose 15 percent more than the
price level in the United States. For that period, the
R² is respectable and the estimate of β is not
statistically different from unity. Using the usual
criteria of R² and β, anyone wishing to reject PPP
could use the earlier period and anyone wishing to
support PPP could use the later period.
Although estimates for the earlier period appear
to reject PPP and estimates for the later period
support the theory, this interpretation of the evidence is misleading. Although the R² and β are
closer to unity for the later period, this is primarily

TABLE 1
Monthly Estimates of Equation 4 Using Wholesale Indices

Country  Period               α (s.e.)       β (s.e.)       R²/Standard Error  Durbin-Watson  ρ
Canada   Jan. 1972-Dec. 1977  -0.02 (0.01)   0.25 (0.16)    0.03/0.010         1.07           0.82
Canada   Jan. 1978-Feb. 1984  -0.15 (0.00)   0.82 (0.12)    0.37/0.010         1.62           0.73
France   Feb. 1921-May 1925   1.183 (0.157)  1.091 (0.109)  n.a./0.054         1.70           0.58
France   June 1973-July 1979  -1.52 (0.03)   -0.18 (0.37)   n.a./0.029         2.26           0.86

Sources: Canada, Davutyan and Pippenger (1985), Table 5. France, Frenkel (1981), Tables 1 and 2.
Note: Canadian estimates use SAS autoreg corrected for one-period serial correlation. French estimates use two stage least squares. Standard errors in parentheses. No base period.


estimate of β for that period are low because σ²_M is
low, not because the errors due to real shocks, σ²_R,
are large. If purchasing power parity was a success
in the 1920s, it did not collapse in the 1970s.
The widespread belief that PPP collapsed in the
last decade is based on a serious misinterpretation of
the evidence that ignores the econometric traps
involved in estimating purchasing power parity.

be explained by monetary shocks. In other words,
the R² and estimates of β are close to one for the
1920s because σ²_M is large, not because σ²_R is small.
The fact that the absolute size of the standard error is
smaller during the 1970s means that the amount of the
variability in the exchange rate that cannot be
explained by PPP is smaller during the 1970s. Since
there was much more monetary coordination during
the 1970s, this result indicates that the R² and

II. Efficient Commodity Markets
A number of studies referred to earlier indicate
that real exchange rates behave like a random walk.
To explain these random walks, Roll (1979)
developed a theory based on speculation in efficient
international commodity markets. Roll's theory
expands the traditional view of purchasing power
parity in two ways. It uses speculation rather than
arbitrage and stresses intertemporal transactions.
Since most international trade involves time and
some element of speculation, this approach is a
significant advance in terms of realism over the
traditional arbitrage approach to purchasing power
parity.14
Under the arbitrage approach, a trader buys a
good this month at home and sells it this month in
another country. Since the presence of risk is never
mentioned in such an analysis, there is an implicit
assumption that all prices are known with certainty.
In Roll's model, there is no physical transfer of
commodities. Instead, speculators in one country
speculate on changes in exchange rates and changes
in commodity prices in the other country.

of the return over the cost is approximately the
percentage difference between the two, the gross
rate of return from this transaction is

Intertemporal Speculation without Trade

If international commodity speculation is efficient, then, based on the information available in
period t-1, the expected net return should be
zero.15

ln[S(t)p(F,t) / (S(t-1)p(F,t-1))]    (6)

Whether or not the speculator engages in such a
transaction depends on the net return, which is the
difference between the return from foreign speculation and a similar domestic transaction. Let p(H,t-1) be the domestic price of the good in t-1 and
p(H,t) the price in t. Under these conditions, equation 7 describes the net return r_s from intertemporal
international speculation.

As an example of Roll's approach, consider a
speculator who buys a commodity in a foreign
country in month t-1 for sale in that country the next
month t. If p(F,t-1) is the cost of the good in the
foreign country in t-1 and S(t-1) is the domestic
price of foreign exchange that month, then the
domestic price of the foreign good in t-1 is S(t-1)p(F,t-1). The return from the sale of the commodity is S(t)p(F,t), where S(t) is the exchange rate
in t and p(F,t) is the price the speculator receives for
the good in the foreign country. Since the natural log
r_s = ln[S(t)p(F,t) / (S(t-1)p(F,t-1))] - ln[p(H,t)/p(H,t-1)]
    = ln[S(t)p(F,t)/p(H,t)] - ln[S(t-1)p(F,t-1)/p(H,t-1)]    (7)

E[r_s | I(t-1)] = 0    (8)

where E is the expectations operator and I(t-1) is
the information available in t-1.
Equations 7 and 8 imply equation 9, where u_t
is an uncorrelated random variable with zero mean.


ln[S(t)p(F,t)/p(H,t)] = ln[S(t-1)p(F,t-1)/p(H,t-1)] + u_t    (9)

Using the earlier notation, the gross return from this
transaction is ln{[S(t)p(F,t)]/p(H,t-1)}. The net
return, which is the incentive for such activity,
depends on the return from similar domestic transactions. If the speculator buys the good at home one
month and sells it at home next month, the return is
ln[p(H,t)/p(H,t-1)]. The net return from speculation with trade, r_T, is the difference between these
two gross returns.


In the terminology of efficient markets, equation 9
means all the information relevant for determining
the real exchange rate next period is already fully
reflected in the current real exchange rate.
Consider the following implication of equation
9. Suppose the price of wheat in Canada this
month times the current price of the Canadian dollar
does not equal the current price of wheat in the
United States. According to equation 9, that difference is as likely to increase as to decrease in the
next month. Given efficient international speculation without trade, market forces do not work toward
restoring the law of one price. Since this is true for
every commodity, it holds for arbitrary bundles of
commodities. 16 As a result, there are no market
forces at work restoring long-run equality between
actual exchange rates and the rates implied by
purchasing power parity. Real exchange rates perfonn a random walk because, no matter what the gap
between the actual and parity rate is in one period,
the gap is as likely to grow as to shrink in the next
period. 17

rt

- I •••[ •. S(t)p(F,t)
- In [... P(H,t)].
n.. p(H,t-1) ]
p(H,t-l)

= In

.[S(t)P(F,t)]
p(H,t)

(10)

The net return is the percentage error in the law of
one price. If S(t)p(F,t)/p(H,t) is unity, the law of one
price holds and the return from additional intertemporal international trade is zero.
The arbitrage version of the law of one price is
based on international trade at known prices within
a given time period where, ignoring transaction
costs, arbitrage eliminates any net return. An efficient market version involves intertemporaltrade
with expected prices where the expected net return is
zero given all currently available infonnation.
(11)

Intertemporal SpeCUlation with Trade
Without trade, speculators can only guess
whether an expected change in a price at home
p(H,t)/p(H,t-l) will equal the domestic value of
the change in the price of the same good in a foreign
country - S(t)p(F,t)/S(t-1)p(F,t-I). In this type
of speculation, the level of the exchange rate is
irrelevant. Halving or doubling S(t) and S(t-I)
does not alter S(t)p(F,t)/S(t-l)P(F,t-I). When
speculation involves trade, the returns depend on the
level of exchange rates. If the price of pound sterling
rises with no change in the product price in the U. S.
or U.K., it becomes relatively more profitable to
buy in the U.S. "tllis" period for sale in the U.K. in
the "next"period.
Consider an exporter who buys a good at home
this period, ships it, and sells it abroad next period.

Equations 10 and II imply the conventional law
of one price with an error tenn that reflects the
uncertainty about future prices.
In[p(H,t)] = In [S(t)p(F,t)] + e t
(12)
where et is an uncorrelated random variable with
zero mean.
Although the argument has been developed in
tennsof a single commodity, exactly the same
reasoning applies to any arbitrary bundle of commodities. In an efficient market without transaction
costs, the expected return from buying anybundle at
home this period and selling it abroad next period
cannot exceed the expected return from buying at
hOme and selling at home. 18 Efficient commodity
markets with trade imply the absolute version of
purchasing power parity with an error tenn.
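The net returns in equations 7 and 10 are straightforward to compute. The following sketch, with made-up prices and exchange rates chosen purely for illustration, evaluates both returns and checks that the two forms of r_T in equation 10 agree.

```python
import math

# Hypothetical prices and exchange rates (illustrative only).
S = {0: 2.00, 1: 2.10}      # domestic price of foreign currency in t-1 and t
pF = {0: 5.00, 1: 5.20}     # foreign-currency price of the good
pH = {0: 10.00, 1: 10.30}   # domestic price of the good

# Equation 7: net return from intertemporal speculation without trade.
r_s = math.log(S[1] * pF[1] / (S[0] * pF[0])) - math.log(pH[1] / pH[0])

# Equation 10: net return from speculation with trade,
# r_T = ln[S(t)p(F,t)/p(H,t-1)] - ln[p(H,t)/p(H,t-1)] = ln[S(t)p(F,t)/p(H,t)].
r_T = math.log(S[1] * pF[1] / pH[0]) - math.log(pH[1] / pH[0])

# The two forms in equation 10 are algebraically identical.
assert abs(r_T - math.log(S[1] * pF[1] / pH[1])) < 1e-12

print(f"r_s = {r_s:.4f}, r_T = {r_T:.4f}")
```

A positive r_T here signals that buying at home and selling abroad beats the purely domestic transaction, which is exactly the incentive the text describes.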


    ln[S(t)] = ln[P(H,t)/P(F,t)] + γ(t)    (13)

where P(H,t) and P(F,t) are the home and foreign price of an identical bundle of goods, and γ(t) is an uncorrelated random variable with zero mean.

Since the discussion has ignored the transaction costs associated with trade, the source of the error term γ in equation 13 is the same as the source for the error u in equation 9. They both come from imperfect information. In equation 9, imperfect information generates a random walk in real exchange rates because expected returns depend on expected changes in prices and exchange rates. With trade, expected returns depend on the level of prices and exchange rates, and so deviations of the actual rate from the rate implied by parity are uncorrelated. If they were correlated, expected net returns from trade would not be zero and trade in international commodity markets would not be efficient.

Recognizing the information and transaction costs associated with trade provides a link between the arbitrage version of PPP and the efficient market interpretation with trade. In the conventional arbitrage version of PPP, these costs introduce errors that are larger in the short-run than in the long-run. An efficient market interpretation of PPP with trade essentially adds an error term like γ to the arbitrage version.19 Equation 14 describes the relative version of an efficient market interpretation of PPP with trade and transaction costs.

    ln[S(t+1)/S(t)] = ln(pH/pF) + v(t) + g(t)    (14)

where both v and g are error terms with negative serial correlation.20 The term g has negative serial correlation because it represents the temporary deviations of the actual rate from parity generated by imperfect information.21 The term v has negative serial correlation because it is due to transaction costs that allow only limited deviations between actual exchange rates and those implied by PPP. This negative serial correlation is reinforced when these costs are effectively zero in the long-run. In the case where transaction costs are zero in the long-run, deviations of the actual rate from parity are not only bounded, they also tend to disappear in the long-run.

To see the relation between an efficient market interpretation of PPP with trade and Roll's interpretation without trade, consider the following example. Suppose Roll's speculation in wheat between the U.S. and Canada generates a random walk for the real wheat exchange rate between the two countries. If $/$C is the U.S. price of the Canadian dollar, W is wheat in the U.S. and WC is wheat in Canada, then speculation without trade causes ($/$C)/[($/W)/($C/WC)] to perform a random walk. As a result, in the absence of any other influences, the real wheat exchange rate will drift off toward plus or minus infinity in time.

But long before that happens, trade takes place. Suppose this morning the price of wheat in Winnipeg converted to U.S. dollars is less than the price expected next week in Chicago. If the price difference exceeds the transportation costs, there is an incentive to buy wheat in Winnipeg, load it on a train and ship it to Chicago for sale next week. From that point on, real exchange rates no longer behave like a random walk. Any further downward movement in the real wheat exchange rate is resisted by wheat moving from Canada to the United States. The shipments of wheat put upward pressure on Canadian wheat prices, downward pressure on wheat prices in the United States, and increase the demand for Canadian dollars. Since the same argument holds for every commodity, efficient international commodity markets with trade imply that changes in real exchange rates should show evidence of negative serial correlation and should not be random walks.

Evidence
Roll (1979), Darby (1980) and Mussachia (1984) analyze monthly real exchange rates for many countries during the 1970s while Pippenger (1982) and Adler and Lehman (1983) use annual data over long periods.22 The tests include regressions, autocorrelations and spectral analysis, and in each case real exchange rates appear to behave as though they were random walks. Although a random walk is consistent with efficient international commodity markets without trade, trade should impose boundaries on real exchange rates. The evidence presented next suggests that such boundaries exist.
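The distinction can be illustrated with a small simulation (the barrier width and shock size are arbitrary illustrative choices, not estimates from the article's data): changes in a pure random walk are serially uncorrelated, while a series confined by reflecting barriers shows negative serial correlation in its changes.

```python
import random

random.seed(0)

def lag1_autocorr(x):
    """Lag-1 autocorrelation of a series."""
    n = len(x)
    m = sum(x) / n
    num = sum((x[i] - m) * (x[i + 1] - m) for i in range(n - 1))
    den = sum((xi - m) ** 2 for xi in x)
    return num / den

T = 20000
walk, bounded = [0.0], [0.0]
for _ in range(T):
    u = random.gauss(0, 1)
    walk.append(walk[-1] + u)
    # Barriers at +/-2: trade resists any movement beyond them.
    bounded.append(max(-2.0, min(2.0, bounded[-1] + u)))

d_walk = [walk[i + 1] - walk[i] for i in range(T)]
d_bounded = [bounded[i + 1] - bounded[i] for i in range(T)]

# Changes in the pure walk are serially uncorrelated; changes in the
# bounded series are pushed back toward the interior, so their lag-1
# autocorrelation is negative.
print(f"random walk:   {lag1_autocorr(d_walk):+.3f}")
print(f"with barriers: {lag1_autocorr(d_bounded):+.3f}")
```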
These tests combine autocorrelation and spectral analysis with a technique used by Roll (1979). Roll tests his model by calculating the means of regression coefficients for many pairs of countries. The advantage of this approach is that it can reveal patterns that are so weak that they are not observable for any given pair. A regression coefficient might be statistically insignificant for 20 different pairs of countries, but if it is positive for all of them then it is almost certainly positive. Unfortunately, Roll's regressions were not designed to test for the presence of the kind of barriers that exist with trade. Since autocorrelation and spectral analysis are natural ways to test for such barriers, Tables 2 and 3 apply Roll's technique to autocorrelation and spectral estimates respectively.

Autocorrelation
As pointed out earlier, one implication of both the arbitrage view of PPP and efficient markets with trade is that real exchange rates are bounded by "reflecting barriers" and changes in real exchange rates have negative serial correlation. Although Roll (1979), Darby (1980) and Mussachia (1984) all find no evidence of negative serial correlation for monthly data in the 1970s, combining the results from several countries suggests that reflecting barriers do exist.

The technique is simple: obtain the autocorrelation estimates for 13 lags for 24 real exchange rates using wholesale indices and end-of-month exchange rates from the International Financial Statistics tape for 1976.7 to 1983.12.23 Compute the average autocorrelation estimate at each lag using the 24 pairs of countries and, in addition, take the mean of these averages. The reason for computing the mean of the averages at the various lags is that reflecting barriers are probably not identical for the various countries; their differences would lead to different lag structures. If the series are true random walks, there should be no evidence of either negative or positive correlation. If there are reflecting barriers, then there should be some evidence of negative serial correlation.

Table 2 shows the average autocorrelation estimates. For these countries the real exchange rate is not a random walk. Five of the lags are significant at the one percent level,24 but there is no clear pattern of negative serial correlation because two of these estimates are positive. The mean of the 13 autocorrelation estimates, however, is negative and significant at the ten percent level. The average autocorrelations are not consistent with a random walk, but they provide only weak support for the existence of reflecting barriers. Spectral analysis yields stronger results.

TABLE 2
Average Autocorrelation Estimates

 Lag    Estimate    t-Statistic
  1      -0.065       -3.29***
  2      -0.017       -0.92
  3       0.028        1.24
  4      -0.021       -1.29
  5       0.017        0.82
  6      -0.063       -3.80***
  7       0.058        2.83***
  8       0.019        1.56*
  9      -0.018       -1.52*
 10       0.052        2.59***
 11       0.028        1.12
 12       0.012        0.59
 13      -0.100       -4.36***
Mean     -0.007       -1.41*

*Significant at ten percent level, single-tailed.
**Significant at five percent level, single-tailed.
***Significant at one percent level, single-tailed.
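The averaging technique behind Table 2 can be sketched as follows. The 24 series here are simulated white noise rather than the actual IFS real exchange rate data, so the numbers are purely illustrative; the point is the mechanics of averaging across pairs and, following footnote 24, basing the t-statistic on the observed cross-pair standard deviation.

```python
import math
import random

random.seed(1)

def autocorr(x, lag):
    """Sample autocorrelation of x at the given lag."""
    n = len(x)
    m = sum(x) / n
    num = sum((x[i] - m) * (x[i + lag] - m) for i in range(n - lag))
    den = sum((xi - m) ** 2 for xi in x)
    return num / den

# 24 hypothetical series of monthly real exchange rate changes
# (white noise here; the article uses 24 actual country pairs).
n_series, n_obs = 24, 90
estimates = [autocorr([random.gauss(0, 0.03) for _ in range(n_obs)], 1)
             for _ in range(n_series)]

# Average the lag-1 estimates across series. The t-statistic uses the
# observed standard deviation of the 24 estimates, not a theoretical
# value that assumes independence.
mean = sum(estimates) / n_series
sd = math.sqrt(sum((e - mean) ** 2 for e in estimates) / (n_series - 1))
t_stat = mean / (sd / math.sqrt(n_series))
print(f"average lag-1 autocorrelation: {mean:+.3f}  (t = {t_stat:+.2f})")
```

An individually insignificant estimate can still contribute to a significant average, which is the whole appeal of Roll's approach.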

Spectral Analysis
One natural interpretation of the concept of the short-run is that it refers to short cycles. A similar relationship between cycle length and the length of the run holds for the intermediate and long-run. For income and employment, the short-run might refer to cycles of up to two years and the long-run to cycles longer than the business cycle. In the context of highly organized markets such as the foreign exchange market, the short-run is more likely to refer to a period of a few days or a few months at most. Cycles as long as a couple of years almost certainly would correspond to the long-run, and the concept of the intermediate-run would apply to cycles from a few months up to perhaps a year. Given this association between the length of the run and the length of cycles, spectral analysis allows us to see how much of the variance in a variable, such as the change in the real exchange rate, comes from the short-run, intermediate-run, and long-run.25 If changes in real exchange rates are uncorrelated, as implied by a random walk, then the short-run, intermediate-run and long-run all contribute equally to the variance. In Chart 1, which shows average estimates for spectral density, that implication of a random walk is shown by the solid horizontal line at 1/π, or 0.318.26

If there are barriers that restrict long-run movements in real exchange rates, they would reduce the long-run component of the variance for changes in real exchange rates. As an example, suppose the traditional dynamic view of PPP is correct. In the short-run, a variety of shocks drive actual rates away from PPP, but in the long-run, market forces bring actual and parity rates back into equality. In that case, there are short-run changes in the real exchange rate, but no long-run changes because in the long-run the real exchange rate is constant at 1.0. In other words, none of the variance in changes in real exchange rates comes from the long-run. A dynamic interpretation of PPP implies that the spectral density estimates in the figure are above 1/π for short cycles and below 1/π at long cycles.

Pippenger (1982) shows that spectral density estimates for annual changes in real exchange rates are essentially constant regardless of the length of the cycle. Since Mussachia (1984) obtains similar results for monthly data during the 1970s, an approach like the one used for autocorrelations is applied to the spectral estimates. That is, the estimate for the two-month cycle is the mean of the spectral density estimates for the twenty-four real exchange rates at that cycle.

The broken line in the figure shows the average spectral density estimates for the countries used earlier. These estimates and their deviations from 1/π are given in Table 3. If there were no reflecting barriers and real exchange rates performed a random walk, then the spectral density estimates should not be significantly different from 1/π. If there were reflecting barriers, the estimates should be above 1/π at the shorter cycles and below 1/π for long cycles.

TABLE 3
Average Spectral Density Estimates for 24 Countries

Cycle Length
in Months    Estimate    Estimate - 1/π    t-Statistic
   2.00       0.286         -0.032           -1.24
   2.17       0.319          0.001            0.04
   2.36       0.394          0.076            4.21***
   2.60       0.337          0.019            1.03
   2.89       0.326          0.008            0.81
   3.25       0.339          0.021            1.16
   3.72       0.364          0.046            2.01**
   4.33       0.310         -0.008           -0.40
   5.21       0.287         -0.031           -1.61*
   6.49       0.280         -0.038           -2.17**
   8.69       0.303         -0.015           -0.76
  12.99       0.279         -0.039           -2.77***
  26.31       0.269         -0.049           -2.75***
    ∞         0.280         -0.038           -1.57*

*Significant at ten percent level, single-tailed.
**Significant at five percent level, single-tailed.
***Significant at one percent level, single-tailed.

Chart 1
Spectral Density Estimates
[Figure: the broken line plots the average spectral density estimates against cycle length in months, from 2.00 to 26.30; the solid horizontal line at 1/π, or 0.318, marks the level implied by a random walk.]

The pattern for the spectral estimates in the figure allows one to reject the idea that the real exchange rate performs a random walk. Instead, it supports a dynamic interpretation of PPP. There is a clear tendency for the estimates to lie below 1/π for the longest cycles. Table 3 shows that, although the estimate at the shortest cycle is below 1/π (although not significantly so at even the ten percent level), the next six estimates all are above 1/π. At the seven longest cycles, all estimates are below 1/π, with two significant at the ten percent level, one at the five percent level, and two estimates significantly below 1/π at the one percent level. For these countries as a group and for this time period, real exchange rates do not behave as a random walk. The spectral density estimates strongly support the existence of elastic reflecting barriers that restrain long-run movements in real exchange rates. These barriers may be quite wide and very elastic, but they do appear to exist. The pattern shown in the figure and Table 3 does not refute Roll's basic idea of efficient international commodity markets; it simply indicates that, beyond some point, trade limits the movement in real exchange rates.

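The spectral calculation can be sketched with simulated rather than actual data. The lag-window estimator below is an illustrative simplification (13 lags, matching the autocorrelation tests); for an uncorrelated series the one-sided normalized density stays near 1/π at every cycle length, which is the benchmark the tables compare against.

```python
import math
import random

random.seed(2)

def autocorr(x, lag):
    """Sample autocorrelation of x at the given lag."""
    n = len(x)
    m = sum(x) / n
    num = sum((x[i] - m) * (x[i + lag] - m) for i in range(n - lag))
    den = sum((xi - m) ** 2 for xi in x)
    return num / den

def spectral_density(x, w, n_lags=13):
    """One-sided normalized spectral density of x at frequency w (radians,
    0 to pi), estimated from the first n_lags autocorrelations. For an
    uncorrelated series it is flat at 1/pi, about 0.318 (see footnote 26)."""
    s = 1.0 + 2.0 * sum(autocorr(x, k) * math.cos(w * k)
                        for k in range(1, n_lags + 1))
    return s / math.pi

# Changes in a pure random walk are uncorrelated, so every cycle length
# contributes about equally and the density stays near 1/pi.
changes = [random.gauss(0, 1) for _ in range(4000)]
for cycle in (2.00, 4.33, 26.31):      # cycle length in months
    w = 2 * math.pi / cycle            # corresponding frequency in radians
    print(f"{cycle:6.2f}-month cycle: {spectral_density(changes, w):.3f}")
```

A bounded series would instead show estimates below 1/π at the longest cycles, the pattern Table 3 reports.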
III. Accept or Reject?

In most people's minds, the decision to accept or reject a theory involves two closely related, but different, issues. The first is whether the theory is the best available and the second is whether it is accurate. There is a good deal of support for the arbitrage and efficient market interpretations of purchasing power parity. After allowing for the economic and econometric effects of information and transaction costs, the evidence supports the basic implication of purchasing power parity: that substantial and prolonged changes in relative price levels are associated with roughly proportional changes in exchange rates.

Even more important, no theory can explain either the level or change in exchange rates over time and across space as well as purchasing power parity. The only serious contender is the asset approach to exchange rates and, at this time, that approach has failed.27 There is no choice. In the strict sense, we must accept purchasing power parity because it yields the best predictions.

Most of the objections to PPP are related to the accuracy of the theory. Even if it is the best available, many people are unwilling to accept a theory unless it achieves some minimal level of accuracy. Performing only slightly better than chance is not good enough. The problem with this aspect of acceptance is that it is almost entirely subjective. Is the glass half full or half empty? Is an error of ten percent large or small?

Table 4 illustrates the problem. It shows the "real" German mark price of the United States dollar, French franc, British pound and Canadian dollar from 1975 to 1985 using identical bundles.28 At one extreme, from 1975 to 1985, the actual mark price of the French franc rose only four percent more than implied by PPP. At the other, the mark price of the United States dollar rose 56 percent more than implied by PPP.29 For these countries on average, the actual rate rose 28 percent more than predicted by PPP. Relative PPP as an explanation of exchange rates certainly is not impressive for this time period and these countries.

The errors for absolute PPP in Table 4 range from a minus 22 percent for Great Britain in January 1977 to 59 percent for the U.S. in January 1985. That is, in January 1985, the actual mark price of the dollar was 59 percent higher than predicted by purchasing power parity based on the bundle of goods used by the German Federal Statistical Office. Although individual errors are quite large, the average error for each of the four countries over the 10 years is much smaller. They range from -2 percent for France to 10 percent for Canada. The average error for all the countries combined over the 10 years is only 2 percent. Deviations from absolute PPP can be very large, but, on average, the theory is amazingly accurate.30

Whether or not the occasionally large errors justify rejecting purchasing power parity, or the small average error warrants acceptance, is up to each individual to decide. The way one uses PPP will play an important role in that decision. For policymakers, the potential for large errors means potentially serious mistakes when policy is based primarily on PPP. For scientific purposes, the occasionally large errors are challenges for future research rather than potential disasters.

TABLE 4
Real German Exchange Rates Using Identical Bundles

                    Country Pairs
Period        DM/US   DM/FF   DM/UK   DM/CAN   AVERAGE

Relative
Jan 1975 to
Jan 1985       0.56    0.04    0.19    0.35     0.28

Absolute
Jan 1975       0.92    0.93    0.70    1.00     0.89
Jan 1976       1.02    1.05    0.88    1.15     1.03
Jan 1977       0.97    0.92    0.78    1.08     0.94
Jan 1978       0.88    0.89    0.82    0.91     0.88
Jan 1979       0.81    0.94    0.79    0.79     0.83
Jan 1980       0.81    0.99    0.94    0.79     0.88
Jan 1981       1.02    1.06    1.28    0.99     1.09
Jan 1982       1.11    1.03    1.15    1.13     1.11
Jan 1983       1.19    0.98    1.00    1.21     1.09
Jan 1984       1.40    0.96    1.08    1.41     1.21
Jan 1985       1.59    1.01    1.01    1.52     1.28
Average        1.06    0.98    0.95    1.10     1.02

Data: Absolute PPP, German Federal Statistical Office. Actual exchange rates, end of month from IFS tape.
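Each absolute-PPP entry in Table 4 can be read as the actual exchange rate divided by the parity rate implied by the cost of an identical bundle at home and abroad. A minimal sketch of that calculation, using made-up bundle costs and exchange rates rather than the German data:

```python
def ppp_ratio(actual_rate, p_home, p_foreign):
    """Actual exchange rate relative to the absolute-PPP rate P(H)/P(F).
    1.00 means parity holds exactly; 1.59 means the actual rate is
    59 percent above parity."""
    return actual_rate / (p_home / p_foreign)

# Illustrative only: a bundle costing 2400 DM at home and 1000 dollars
# abroad implies a parity rate of 2.40 DM per dollar.
bundle_dm, bundle_usd = 2400.0, 1000.0
actual = {"Jan 1975": 2.21, "Jan 1985": 3.82}   # hypothetical actual rates
for period, s in actual.items():
    print(f"{period}: real rate = {ppp_ratio(s, bundle_dm, bundle_usd):.2f}")
```

With these invented numbers the ratios come out near 0.92 and 1.59, echoing the shape of the DM/US column; the real table, of course, uses the Statistical Office's bundle prices.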


IV. Summary
The evidence supporting the arbitrage version of purchasing power parity is stronger than generally realized. Rejection of the theory often rests on a misinterpretation of the evidence. Regressions can yield low coefficients and R²s even though the predictive errors are relatively small. In addition, in the absence of rapid inflation, the average predictive error for absolute PPP appears to be quite small.

Those who insist on a high degree of accuracy might reject the theory because individual predictive errors are sometimes very large. In terms of relative predictive power, however, one must choose between the arbitrage and efficient commodity markets versions of PPP. Over time and space, no other theory about exchange rates is as consistent with the evidence. The only other serious contender, the asset approach, has failed so far.

Accepting either an arbitrage or efficient market version of purchasing power parity implies nothing about the direction of causation. In addition, acceptance is not an assertion that other influences are not important. The exceptionally strong dollar in the 1980s suggests that other factors are indeed important. One of the advantages of the arbitrage approach is that it provides a way of thinking about how real shocks, such as changes in capital flows or technology, drive actual rates away from the rates implied by PPP.

Whether an arbitrage or efficient markets approach to purchasing power parity is the right choice is less clear. Standard interpretations of the arbitrage version imply that the errors in PPP should be primarily short-run in nature. The evidence, however, indicates that the predictive errors are almost as large in the long-run as in the short-run. The fact that the predictive error behaves in the fashion of a random walk, with wide elastic reflecting barriers, tends to favor the efficient market interpretation. But Roll developed the efficient commodity market model in order to explain random walks in exchange rates, so random behavior does not constitute a true test of the theory. Until some new implications of the efficient commodity market model are derived and tested, the evidence appears to support both the arbitrage and efficient market approaches.

The choice between the two models is important. The arbitrage version is consistent with the attempt to build asset models to explain the behavior of exchange rates. Since the conventional arbitrage version of PPP is essentially a theory about the long-run behavior of exchange rates, and the asset approach concentrates on the short-run, there is no inherent conflict between the two. The efficient commodity markets model, however, implies that commodity markets play a key role in the short-run determination of exchange rates. This approach is inconsistent with most existing asset models of the exchange rate because they exclude any role for efficient commodity markets in the short-run determination of exchange rates.


FOOTNOTES
1. For a more thorough review of the theory underlying PPP, see Lawrence Officer (1976) and Rudiger Dornbusch (1985).

2. Although the relative version of PPP in general requires weaker assumptions than the absolute version, it does involve at least one important assumption that the absolute form does not require. Relative PPP implicitly assumes that the base period describes an equilibrium or normal situation.

3. Although equation 4 is the basic test equation, several studies include lags, e.g., John Hodgson and Patricia Phelps (1975), or other explanatory variables, e.g., Richard Dino (1977).

4. In many cases, the left hand side of the equation is simply ln[S(t)] and a is an estimate of the log of the base period exchange rate. In that case, a nonzero estimate for a does not reject PPP.

5. See in particular Jacob Frenkel (1981).

6. For a description of this data, see W. Kohlhammer (1970).

7. See Nurhan Davutyan and John Pippenger (1984).

8. This dynamic view of PPP implicitly assumes that transaction costs decline with the length of the run. The discussion of the nature of costs by Armen Alchian (1959) suggests a number of reasons for this decline.

9. Since the difference between tradables and nontradables is one of degree, not kind, this argument overstates the case. The basic point, however, is valid. The structure of the error terms should be substantially different depending on whether it is related to tradables or nontradables.

10. Since real and monetary shocks can be, and apparently are, correlated, the problem is more complex than in this simple example.

11. The effects of transaction costs on tradables, which is what Aizenman (1984a and b), De Grauwe, Janssens and Leliaert (1982) and De Grauwe and Rosiers (1984) stress, and different weights, reinforce the bias from changes in relative prices for nontradables.

12. Although similar results hold for a number of countries, they do not hold for all. Price levels in Canada and the United States moved together very closely in both the 1920s and early 1970s, and estimates of R² and β reject PPP in both periods. In addition, estimates for inflationary countries in the 1970s such as Israel, Argentina and Brazil yield results that are similar to the results for France in the 1920s. See Davutyan and Pippenger (1985).

13. This result is not particular to France. The average standard error for the regressions that Frenkel reports for the 1920s is 0.102, but it falls to 0.029 for the 1970s.

14. See Alan Shapiro (1983) for a discussion of efficient commodity markets and purchasing power parity.

15. If there is no risk premium and futures prices equal expected prices, then a similar argument holds for a form of international arbitrage without trade.

16. Instead of the arbitrage approach to PPP used here, Roll (1979, p. 142) uses a welfare approach. "When relative prices are not assumed to be constant, the continuously compounded rate of inflation must be measured by another log price change, that of the price index relevant to the speculator's purchasing power."

17. Technically, the error is a martingale. But because it is more widely recognized, the term random walk is used throughout instead of the more accurate martingale.

18. With transaction costs, the expected net return would have to at least cover those costs before goods would be shipped.

19. When international trade involves buying either at home or abroad in t-1 for sale at home in t, there is no international uncertainty and γ disappears.

20. Since the errors are correlated with both sides of equation 14, from an econometric perspective it would be more accurate to write this equation as ln{[S(t+1)/S(t)]/(pH/pF)} = v(t) + g(t).

21. More formally, g has first order negative serial correlation because it is the first difference of an uncorrelated random variable γ.

22. Roll's data cover more than the 1970s. They run from 1957 to 1976.

23. The countries are Argentina, Australia, Brazil, Canada, Germany, Italy, Israel, Japan, U.K., and U.S. To avoid any undue weight on inflationary episodes, for Argentina, Brazil, and Israel, only real rates with the U.S. are used. The time period, number of lags and countries were selected before the tests were conducted.

24. The t-tests are based on the observed standard deviation, not the theoretical standard deviation which would assume independence.

25. See Jenkins and Watts (1968) for a detailed discussion of spectral analysis.

26. Spectral density is the normalized spectrum. It has the same relation to the spectrum that autocorrelation has to autocovariance. When frequency is measured in radians, the observed frequencies run from 0 to π. Since the estimates of the spectral density must sum to unity, the estimates must equal 1/π to be constant across frequency.

27. See, for example, Graham Hacche and John Townend (1983) and Waseem Khan and Thomas Willett (1984).

28. The series start in 1975 because there is a break in the German data in 1974.

29. For the U.S. dollar, relative PPP even gets the direction wrong. It predicts a 21 percent fall in the mark value of the dollar when the value of the dollar actually rises 35 percent.

30. With rapid inflation, the average predictive errors are much larger. See Davutyan and Pippenger (1985), Table 2.


REFERENCES
Adler, Michael and Bruce Lehmann. "Deviations from Purchasing Power Parity in the Long Run," Journal of
Finance, Vol. 38, 1983, 1471-88.
Aizenman, Joshua. "Modeling Deviations from Purchasing
Power Parity," International Economic Review, Vol. 25,
1984a, 175-91.
----------. "Testing Deviations from Purchasing Power Parity." National Bureau of Economic Research, Working
Paper No. 1475, October 1984b.
Alchian, Armen. "Costs and Outputs" in Moses Abramovitz, ed., The Allocation of Economic Resources.
Stanford: Stanford University Press,1959.
Cassel, Gustav. "The Present Situation of the Foreign
Exchanges," Economic Journal, Vol. 26, 1916.
Crouhy-Veyrac, Liliane, Michel Crouhy and Jacques Melitz. "More about the Law of One Price," Institut National de la Statistique et d'Etudes Economiques, Working Paper No. 8002, March 1980.
Darby, Michael. "Does Purchasing Power Parity Work?"
National Bureau of Economic Research, Working
Paper No. 607, 1980.
Davutyan, Nurhan and John Pippenger. "Testing Purchasing Power Parity." Manuscript, University of California
at Santa Barbara, September 1984.
----------. "Purchasing Power Parity Did not Fail in the
1970s," American Economic Review, Vol. 75, 1985.
De Grauwe, Paul, M. Janssens and H. Leliaert. "Real
Exchange Rate Variability during 1920-26 and
1973-82." Manuscript, University of Louvain, June
1982.
---------- and Marc Rosiers. "Real Exchange Rate Variability
and Monetary Disturbances." Manuscript, University
of Louvain, October 1984.
Dino, Richard. An Econometric Test of the Purchasing
Power Parity Theory: Canada 1870-1975. PhD. dissertation, State University of New York at Buffalo, 1977.
Dornbusch, Rudiger. "Purchasing Power Parity," National Bureau of Economic Research, Working Paper No. 1591, March 1985.
Frenkel, Jacob. "The Collapse of Purchasing Power Parity
during the 1970s," European Economic Review, Vol.
16,1981,145-65.
Hacche, Graham and John Townend. "Some Problems in
Exchange Rate Modelling: The Case of Sterling," in L.
Klein and W. Krelle, eds., Capital Flows and Exchange
Rate Determination, Zeitschrift fur Nationalokonomie,
Supplementum 3,1983.
Hakkio, Craig. "A Re-examination of Purchasing Power
Parity: A Multi-country and Multi-period Study," Journal of International Economics, Vol. 17, 1984.

Hodgson, John and Patricia Phelps. "The Distributed
Impact of Price-level Variation on Floating Exchange
Rates," The Review of Economics and Statistics, Vol.
57,1975.
Isard, Peter. "How Far Can We Push the Law of One Price?" American Economic Review, Vol. 67, 1977.
Jenkins, Gwilym and Donald Watts. Spectral Analysis and
Its Applications. San Francisco: Holden-Day, 1968.
Khan, Waseem and Thomas D. Willett. "The Monetary Approach to Exchange Rates: A Review of Recent Empirical Studies," Kredit und Kapital, Vol. 17, 1984.
Kohlhammer, W. International Comparison of Consumer
Prices, Studies on Statistics, Federal Statistical Office
of the Federal Republic of Germany. Stuttgart and
Mainz, January 1970.
Krugman, Paul. "Purchasing Power Parity and Exchange
Rates: Another Look at the Evidence," Journal of
International Economics, Vol. 8, 1978.
Mussachia, John. "A Reexamination of the Purchasing
Power Parity Theory During the Recent Floating Rate
Period." Ph. D. dissertation, University of California,
Santa Barbara, 1984.
Officer, Lawrence. "The Purchasing-Power-Parity Theory
of Exchange Rates: A Review Article," IMF Staff
Papers, Vol. 23, 1976.
Parsai, Tahmoures. "A Sensitivity Test of Purchasing
Power Parity," Ph. D. dissertation, University of California, Santa Barbara, 1982.
Pippenger, John. "Purchasing Power Parity: An Analysis of Predictive Error," Canadian Journal of Economics, Vol. 15, 1982.
Protopapadakis, Aris and Hans Stoll. "The Law of One Price in International Commodity Markets: A Reformulation and Some Formal Tests," Federal Reserve Bank of Philadelphia, Working Paper No. 84-5, October 1984.
Richardson, J. David. "Some Empirical Evidence on Commodity Arbitrage and the Law of One Price," Journal of International Economics, Vol. 8, 1978.
Roll, Richard. "Violations of Purchasing Power Parity and
Their Implications for Efficient International Commodity Markets," in M. Sarnat and G. Szego, eds.,
International Finance and Trade: Volume I.
Cambridge, Mass., Ballinger Publishing Company,
1979.
Rush, Mark and Steven Husted. "Purchasing Power Parity in the Long Run," Canadian Journal of Economics, Vol. 18, 1985.
Shapiro, Alan. "What Does Purchasing Power Parity Mean?" Journal of International Money and Finance, Vol. 2, 1983.
