

Investment, GNP, and
real exchange rates ........................................................................................... 2
Paula Worthington

A new study shows that industry investment
rates and exchange rates are correlated,
suggesting that changes in the value of the
dollar affect the international competitiveness
of U.S. firms.

Productive efficiency in banking ............................................................... 11
Douglas D. Evanoff and
Philip R. Israilevich

Studies show that banks are inefficient.
The authors discuss why, and what this
means for the future of the industry.

Karl A. Scheld, Senior Vice President and
Director of Research
Editorial direction

Carolyn McMullen, editor, David R. Allardice, regional
studies, Herbert Baer, financial structure and regulation,
Steven Strongin, monetary policy,
Anne Weaver, administration

Nancy Ahlstrom, typesetting coordinator,
Rita Molloy, Yvonne Peeples, typesetters,
Kathleen Solotroff, graphics coordinator
Roger Thryselius, Thomas O’Connell,
Lynn Busby-Ward, John Dixon, graphics
Kathryn Moran, assistant editor

July/August 1991, Volume XV, Issue 4

Economic Perspectives is published by
the Research Department of the Federal Reserve
Bank of Chicago. The views expressed are the
authors' and do not necessarily reflect the views
of the management of the Federal Reserve Bank.
Single-copy subscriptions are available free of
charge. Please send requests for single- and
multiple-copy subscriptions, back issues, and
address changes to Public Information Center,
Federal Reserve Bank of Chicago, P.O. Box 834,
Chicago, Illinois 60690-0834, or telephone (312)
Articles may be reprinted provided source is
credited and The Public Information Center is
provided with a copy of the published material.
ISSN 0164-0682

Investment, GNP, and
real exchange rates

Paula R. Worthington

The value of the U.S. dollar
varied widely over the 1963-1986
time period. Those same
years witnessed several
cyclical expansions and
contractions and even wider swings in aggregate
fixed investment rates. One explanation for
some of the investment rate swings is the
dramatic movements in exchange rates over this
period. In this article, I use newly constructed
capital stock and investment series for 270 U.S.
manufacturing industries to examine investment
responsiveness to changes in real exchange rates
for 1963-1986. My research shows that invest­
ment rates are sensitive to real exchange rate
movements and that appreciation of the U.S.
dollar is associated with a decrease in industry
investment rates—particularly in durable goods
industries. Analysis of industries for which
imports-sales data are available further suggests
that investment is more responsive in industries
with greater exposure to foreign competition.
Finally, I document the existence of substantial
interindustry variation in the influence of real
exchange rates on investment. My results are
broadly consistent with international trade
models in which changes in real exchange rates
drive changes in the relative competitiveness of
domestic and foreign industries.
Changes in real exchange rates are often
thought to reflect changes in the international
competitiveness of domestic and foreign
industries. For example, the depreciation of the
dollar is said to be correlated with improved
competitiveness of U.S. firms, because U.S. and
foreign consumers find it relatively cheap to buy


U.S. goods. In the long run, being competitive
in international markets requires investing in
capital equipment that will be used to satisfy
current and future market demand. This
suggests that real exchange rate movements are
correlated with changes in international competi­
tiveness now and will continue to be in the
future. By analyzing the extent to which
investment spending of U.S. manufacturing
industries has historically varied with changes in
the value of the dollar, I indirectly examine how
internationally competitive the U.S. manufactur­
ing sector will be in the future.
The article is organized as follows. The
next section outlines the expected effects of
changes in the value of the dollar on output and
input demands of U.S. manufacturing industries.
The third section describes the data used in the
article, and the fourth reports the results.
Conclusions are in the final section.
Why should real exchange rates matter?

Movements in the value of the dollar will
affect the input and output choices of U.S.
manufacturing firms as long as the goods
produced are tradeable, that is, as long as output
demand is sensitive to the relative price of
domestic and foreign goods. Simply put, an
The author is visiting assistant professor in the
Department of Economics, Northwestern University,
and consultant in the Economic Research Depart­
ment, Federal Reserve Bank of Chicago. The author
would like to thank Hesna Genay, Jack Hervey,
Prakash Loungani, Steve Strongin, and the editor for
useful comments on earlier drafts and Jack Hervey
and William Strauss for assistance with some of
the data.


appreciation of the dollar lowers the relative
price of foreign goods to U.S. goods. This
causes demand for domestically produced goods
to fall and, as a consequence, reduces input
demands in the affected sectors.
The appropriate measure of the relative
price of home and foreign goods is the real
exchange rate, which depends on the nominal
exchange rate and home and foreign prices. To
illustrate this relationship, consider Equation (1),
where E is the nominal exchange rate, expressed
in terms of units of foreign currency per U.S.
dollar, and P_US (P_F) is the price level of the
United States (foreign country). Equation (1)
shows that the real exchange rate, e, is defined as

(1)  e = E * (P_US / P_F).

The idea behind many theories of international
trade is that increases in e (appreciation of
the dollar) cause decreases in domestic output
and derived input demands. According to this
view, the size of the output response in any
given sector or industry will depend on the
relevant demand elasticities and the expected
persistence of the exchange rate shock. In turn,
technologically determined elasticities of
substitution and adjustment costs will determine
the size of the input demand response.1 Shocks
that are expected to be permanent may be met
with changes in inputs that are relatively costly
to adjust, such as capital, while more transitory
shocks may be met with changes in more easily
altered inputs, such as labor. Furthermore, firms
may alter prices instead of outputs and inputs, so
that price-cost margins may also be affected
when real exchange rates change.
In this article, I do not seek to directly
develop and test a model of real effects of
exchange rate movements. Instead, I focus on
the correlation between changes in the demand
for one particular input, capital, and changes in
an index of the real value of the dollar.2
Changes in the demand for capital, as measured
by investment spending, are of interest because
of the strong empirical evidence that investment
spending is a large and cyclically sensitive
component of U.S. total aggregate spending.
Because industries differ widely with respect to
their output and input demand elasticities as well
as in their exposure to international markets, I
expect to observe substantial cross-sectional
variation in the relationship between exchange
rates and investment rates. My analysis relies on
the assumption that changes in e are exogenous
at the individual industry level, that is, that the
exchange rate is not affected by the actions of
individual industries. This exogeneity assumption
has been exploited by other researchers
interested in measuring the impact of exchange
rate movements on industry outputs and inputs.3
The present analysis is only a first step
towards understanding the relationship among
investment spending, exchange rate movements,
and international competitiveness. The evidence
for the patterns documented here is suggestive,
not conclusive, about the nature of this relationship,
and this article lays the groundwork for
future analysis.

A review of the data

The industry data used in this article are
annual figures for a subset of U.S. four-digit
Standard Industrial Classification (SIC) manu­
facturing industries during the years 1963-1986.4
After elimination of industries with missing
data, 270 industries remain in the data set. The
data are derived from the Census of Manufac­
tures and the Annual Survey of Manufactures
and were originally assembled by Domowitz,
Hubbard, and Petersen (DHP) (1987). Data on
capital stocks and investment, as well as other
variables, are included in the data base, and the
original data were used to construct several
series used in this article. Capital stock series
were computed by applying standard recursion
formulas to benchmark stocks. See the box for details.
Table 1 gives the reader some background
information on the industries studied here. The
Table reports the full sample means and standard
deviations for the gross investment rate, the sales
to capital ratio, and the price-cost margin, and it
also presents the same statistics for durable
goods and nondurable goods industries sepa­
rately.5 The mean gross investment rate in the
sample was .132, and the average sales to capital
ratio was 5.11, implying a .20 capital-sales ratio.
Durable goods industries are characterized by
higher levels of capital intensity, higher invest­
ment rates, and higher price-cost margins than
nondurable goods industries.6
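As a quick arithmetic check, the capital-sales figure quoted above is just the reciprocal of the reported sales-capital ratio; the snippet below uses only the two numbers reported in the text:

```python
# Reciprocal check of the summary statistics quoted in the text:
# a mean sales-to-capital ratio of 5.11 implies a capital-sales
# ratio of roughly .20, as stated.
sales_to_capital = 5.11
capital_to_sales = 1.0 / sales_to_capital   # about 0.196
```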
Because investment spending is highly
procyclical, I need to control for the level of
macroeconomic activity in the analysis below. I
use the ratio of actual to potential gross national
product (GNP) for each year in the sample as my



Data sources and construction
Data for the four-digit SIC industries are obtained
from the data of Domowitz, Hubbard, and Petersen
(DHP) (1987), who assembled the set from various
years of the Census of Manufactures and Annual
Survey of Manufactures. DHP's original data set was
updated and expanded at the Federal Reserve Bank
of Chicago. Macroeconomic data are obtained from
the National Income and Product Accounts (NIPA).
Specifically, the following definitions and proce­
dures were used in constructing the data set used in
this article. Unless otherwise noted, the annual data
cover the 1963-1986 time period.
Investment

The Census reports total gross investment
(dollars spent on new capital goods) in current
(nominal) dollars.
Capital stock

The Census contains gross stock figures, but
these data are not good measures of capital for at
least two reasons. First, the data embody an
assumption of "one-horse-shay" depreciation.*
Second, because stocks purchased at different times
are added together, it is difficult to correct for
changes in the price of capital goods. Consequently,
I construct a current (nominal) dollar capital stock
series for each industry by applying a standard
capital accumulation relationship to a benchmark
capital stock. I use an annual geometric depreciation
rate (δ) for the total capital stock of .0926, computed
by the Bureau of Economic Analysis (BEA) and
cited in Shapiro (1986). The capital accumulation
equation embodies the “time-to-build” assumption
and applies depreciation only to the current stock, not
to the current year’s investment:

measure of aggregate economic activity. The
mean of this ratio over the 1963-1986 time
period is 1.00. The real exchange rate measure
used in this article is the real, trade-weighted
index of the U.S. dollar developed at the Federal
Reserve Bank of Chicago. This index, which is
described in detail by Hervey and Strauss
(1987a, 1987b, 1987c), was originally developed
to measure exchange rate movements over the
1971-1986 time period and has recently been
extended as far back as 1960. The index
includes 16 countries, uses current consumer
price indexes to convert nominal to real ex­
change rates, and is based to equal 1.0 in the first


K_it = (1 - δ) * (p^k_t / p^k_{t-1}) * K_{i,t-1} + I_it,

where K is the capital stock and p^k is the implicit
price deflator for capital goods, taken from NIPA. I
use the 1958 gross stocks as benchmarks.
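The recursion above can be sketched in a few lines of Python. This is an illustration of the accumulation rule as described in the box, not DHP's or the author's actual code; the function name and the toy inputs are mine:

```python
DELTA = 0.0926  # BEA geometric depreciation rate cited in Shapiro (1986)

def capital_stock_series(k_benchmark, investment, deflator):
    """Carry a nominal benchmark stock forward: last year's stock is
    depreciated geometrically and restated into current dollars with
    the capital-goods deflator, then the current year's investment is
    added undepreciated (the "time-to-build" treatment)."""
    stocks = [k_benchmark]
    for t in range(1, len(investment)):
        carried = (1 - DELTA) * stocks[-1] * deflator[t] / deflator[t - 1]
        stocks.append(carried + investment[t])
    return stocks
```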
Gross investment rate

The gross investment rate is defined as the ratio
of gross investment expenditures to the previous
year's capital stock: I_K_it = I_it / K_{i,t-1}, where
both I_it and K_{i,t-1} are measured in current dollars.
Nominal sales

Nominal sales is defined as output minus the
value of final goods inventory changes. Specifically,
S_it = VAD_it + CM_it - TINTY_it + TINTY_{i,t-1},
where VAD is value added, CM is cost of materials,
and TINTY is final goods inventories, all taken
directly from the Census. The sales-capital ratio,
S_K_it, is defined as S_K_it = S_it / K_{i,t-1}.
Price-cost margin

The price-cost margin (PCM_it) is defined as
(VAD_it - PAY_it) / (VAD_it + CM_it), where PAY is total
payroll, which is reported directly by the Census.
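The box's derived series can be illustrated together with a single hypothetical industry-year (the values below are invented, not Census data; variable names follow the box):

```python
# Hypothetical one-industry, one-year illustration of the derived
# series defined in this box.
VAD, CM, PAY = 50.0, 40.0, 30.0     # value added, materials cost, payroll
TINTY_t, TINTY_prev = 5.0, 3.0      # final goods inventories, t and t-1
I_t, K_prev = 12.0, 100.0           # gross investment, lagged capital stock

S = VAD + CM - TINTY_t + TINTY_prev   # nominal sales
I_K = I_t / K_prev                    # gross investment rate
S_K = S / K_prev                      # sales-capital ratio
PCM = (VAD - PAY) / (VAD + CM)        # price-cost margin
```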
Macroeconomic measures

I used the actual and potential gross national
product (GNP) figures reported in NIPA, and I
defined A_PGNP_t as the ratio of actual to potential
GNP in year t. This measure is identical to the one
used by Petersen and Strauss (1989, 1991).
*See Hulten and Wykoff (1981) for evidence that
depreciation patterns tend to be geometric.

quarter of 1973.7 The index series is quarterly;
I use the four-quarter average index for each
year in the sample.9
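The four-quarter averaging just described is a simple mean; a one-line sketch with invented quarterly index values:

```python
# Annualizing the quarterly index by a simple four-quarter average
# (the quarterly values here are invented, not the actual R7GMA series).
quarterly_index = [0.98, 1.00, 1.02, 1.04]
annual_index = sum(quarterly_index) / len(quarterly_index)   # about 1.01
```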

Before analyzing industry-level investment
sensitivity to GNP and real exchange rate
movements, it is instructive to consider
investment behavior in the aggregate. Let I_K_t
be defined as the simple cross-sectional average
investment rate in year t. Figure 1 plots the
ratio of I_K_t to its mean (.132), the ratio of
actual to potential GNP (A_PGNP), and the real,
trade-weighted dollar index, R7GMA_t, over the
1963-1986 time period.

[Figure 1: Investment rates, GNP, and the real
value of the dollar (R7GMA), 1963-1986.]

[Table 1: Summary statistics, 1963-1986.
Means and standard deviations (in parentheses)
of the gross investment rate, the sales-capital
ratio, and the price-cost margin, with the number
of industries, reported for the full sample and for
durable and nondurable goods industries
separately. NOTE: Standard deviations are in
parentheses.]
The investment rate clearly varies procyclically,
and investment’s variability appears to exceed
that of output. The relationship between
investment and the value of the dollar appears to
be negative, at least after 1971 or so. This
Figure suggests that, in the aggregate, invest­
ment does indeed vary procyclically and does
increase when the dollar depreciates. The
remainder of the article examines the data at the
four-digit level.
Table 2 presents the results of estimating
the relationship between investment rates
(I_K_it), actual to potential GNP (A_PGNP_t),
and the real trade-weighted dollar index
(R7GMA_t):

(2)  I_K_it = β_0 + β_1*A_PGNP_t + β_2*R7GMA_t + ε_it,

where i denotes industry, t denotes year, and
ε_it is an econometric error term. Because
preliminary analysis suggested that the error
term was serially correlated, I present both
ordinary least squares (OLS) estimates and least
squares estimates corrected for first order serial
correlation, which are denoted as PW, for
Prais-Winsten.10 I present results for the
full sample, for durable and nondurable goods
industries separately, and for producer and
consumer goods industries separately.
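The Prais-Winsten correction rests on quasi-differencing each variable by the estimated autocorrelation ρ so that the transformed error is serially uncorrelated. The helper below is a minimal sketch of that transformation (my illustration, not the estimation code used in the article):

```python
def quasi_difference(series, rho):
    """AR(1) quasi-differencing: replace x_t with x_t - rho*x_{t-1};
    the first observation is rescaled by sqrt(1 - rho**2), which is
    the Prais-Winsten (rather than Cochrane-Orcutt) treatment."""
    out = [series[0] * (1.0 - rho ** 2) ** 0.5]
    out += [series[t] - rho * series[t - 1] for t in range(1, len(series))]
    return out
```

Regressing the transformed dependent variable on the transformed regressors (with ρ estimated from OLS residuals) yields the PW estimates; as footnote 10 describes, the article allows ρ to differ across four-digit industries.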
The OLS and PW results are qualitatively
similar; I will discuss only the PW results.11 The
positive and significant coefficients on APG NP
are interpreted as measuring the sensitivity of
investment rates to changes in the strength of the
macroeconomy. For the sample as a whole, a 1
percent increase in A PGNP from its mean of
1.00 implies an increase of .00407 in the
investment rate, or a 3.1 percent increase relative
to the rate’s mean of .132. These results
conform with previous work by Petersen and


Strauss (1989, 1991), which concludes that
investment is more cyclical, relative to its
mean, than output.

[Table 2: Investment rates, GNP, and real
exchange rates. Dependent variable: I_K_it,
1963-1986. OLS and PW estimates for all
industries (270), durable goods industries (140),
nondurable goods industries (130), producer
goods industries (196), and consumer goods
industries (74). Surviving entries include -.111a
(durable goods), -.033a (nondurable goods), and
-.184a (producer goods). NOTES: I_K_it is the
investment rate for industry i in year t, A_PGNP_t
is the ratio of actual to potential GNP at time t,
and R7GMA_t is the real trade-weighted dollar
index at time t. OLS refers to the ordinary least
squares estimates, and PW refers to the
Prais-Winsten estimates, which correct for first
order serial correlation. Standard errors are in
parentheses under coefficient estimates.
Superscripts a, b, and c denote statistical
significance at the 1 percent, 5 percent, and 10
percent level, respectively.]

The coefficients on R7GMA are significant
and have the expected negative signs. Thus
increases in the value of the dollar are associated
with declines in investment rates in U.S.
manufacturing. The magnitude of the effect
suggests that investment is fairly responsive to
changes in the value of the dollar. For the full
sample, a 1 percent increase in R7GMA is
associated with a decrease in the investment rate
of .00075, or a .57 percent decrease relative to
its mean.
Table 2 also confirms that investment
patterns in durable goods industries differ from
patterns in nondurable goods industries.
Durable goods investment is more cyclical and
more responsive to changes in real exchange
rates than is nondurable goods investment.
This difference is significant at the 1 percent
level.
An alternative method of distinguishing
broad groups of industries is to group them on
the basis of the buyer's identity rather than the
good type. Table 2 reports the results of
estimating Equation (2) separately for producer
goods and consumer goods industries.12
Although the coefficient estimates do differ
between the groups, an F test at conventional
significance levels fails to reject the hypothesis
that the coefficients do not differ. It appears,
then, that the type of good produced (the
durable goods/nondurable goods distinction)
matters more than the identity of the customer
(the producer goods/consumer goods distinction)
in explaining investment patterns over this time
period.
Previous researchers have documented
substantial variation in output and input demand
behavior at the two-digit SIC level, so in Table 3
I present the results of reestimating Equation (2)
while allowing all coefficients to vary across
two-digit groups.13 The Table's results confirm
that investment rates vary negatively with the
value of the dollar and that this effect varies
across two-digit groups. Consider first the
coefficients on A_PGNP; most are positive, as
expected. Thus investment is procyclical, and
the degree of procyclicality varies across
industries. Of the five two-digit groups with
negative coefficients, only textiles (SIC 22) and
rubber (SIC 30) have significant coefficients.
The coefficients on R7GMA are a bit more




[Table 3: Industry investment rates, GNP, and
real exchange rates. Dependent variable: I_K_it,
1963-1986. Prais-Winsten estimates for each
two-digit SIC group: 20 Food, 21 Tobacco,
22 Textiles, 23 Clothing, 24 Lumber,
25 Furniture, 26 Paper, 27 Publishing,
28 Chemicals, 29 Petroleum refining, 30 Rubber,
31 Leather, 32 Stone, clay, glass, 33 Primary
metals, 34 Metal products, 35 Industrial
equipment, 36 Electronic equipment,
37 Transportation, 38 Instruments,
39 Miscellaneous. NOTES: I_K_it is the
investment rate for industry i in year t,
A_PGNP_t is the ratio of actual to potential GNP
at time t, and R7GMA_t is the real trade-weighted
dollar index at time t. The table reports the
results of the PW regression of I_K on A_PGNP
and R7GMA, while permitting all coefficients to
vary over two-digit SIC groups. Reported
coefficients are the total effect for the given
two-digit group. Standard errors are in
parentheses under coefficient estimates.
Superscripts a, b, and c denote statistical
significance at the 1 percent, 5 percent, and 10
percent level, respectively.]


varied in sign and magnitude. For
16 of the 20 two-digit groups,
R7GMA enters with a negative
sign, as expected; 9 of these 16
coefficients differ significantly
from 0. The coefficients are
largest for two-digit groups 29
(petroleum), 21 (tobacco), 25
(furniture), 32 (stone, clay, and
glass), and 34 (metal products).14
These results appear generally
consistent with those of Branson
and Love (1988), who find that the
real exchange rate has its greatest
effects on employment in the
two-digit groups 33 (primary metals),
35 (industrial equipment), 34, 29, 32,
and 39 (miscellaneous).15 Again,
textiles and rubber are the only
groups whose coefficients are
significant and the wrong sign.
The textiles industry enjoyed
substantial import protection
during the time period covered by
this study, so the industry’s
investment spending may not have
been likely to respond in the
expected way to the appreciation
of the dollar.
Finally, as indicated earlier, it
is likely that an industry’s exposure to international markets
influences the size of its investment responsiveness to exchange
rate changes. One measure of that
exposure, the industry import-sales
(IMS) ratio, is available for 173 of
the sample’s 270 industries over
the 1965-1980 time period.
Because of this limited availability,
I computed each industry’s
average IMS over the available
time period and then grouped
industries into high IMS and low
IMS industries, comparing industry
averages to the overall average. I
then re-estimated Equation (2)
over the 173-industry sample and
separately over the high and low
IMS industries, respectively.16 The
results appear in Table 4. In brief,
the coefficient on R7GMA is larger
in the high IMS industries, and an


F test rejects the null hypothesis of pooling of
high and low IMS industries, showing that this
difference is statistically significant. So, higher
IMS ratios are associated with larger investment
responses to exchange rate fluctuations. This is
reasonable, because industries experiencing
substantial foreign competition at home are
likely to be sensitive to exchange rate
fluctuations.

[Table 4: Investment rates and real exchange
rates. Dependent variable: I_K_it, 1963-1986.
OLS and PW estimates for all industries with
IMS data (173), high IMS industries (49), and
low IMS industries (124). Surviving entries
include a coefficient of -.119a. NOTES: I_K_it
is the investment rate for industry i in year t,
IMS is the import-sales ratio, A_PGNP_t is the
ratio of actual to potential GNP at time t, and
R7GMA_t is the real trade-weighted dollar index
at time t. OLS refers to the ordinary least
squares estimates, and PW refers to the
Prais-Winsten estimates, which correct for first
order serial correlation. Standard errors are in
parentheses under coefficient estimates.
Superscripts a, b, and c denote statistical
significance at the 1 percent, 5 percent, and 10
percent level, respectively.]

Summary and conclusions

In this article, I presented evidence that
fixed investment rates are sensitive to changes
in the value of the dollar. Investment responds
more in durable goods industries than in
nondurable goods industries, but there appears
to be little difference between consumer goods
and producer goods industries. Further,
investment is more sensitive to exchange rate
fluctuations for industries experiencing
substantial foreign competition.
Some readers may be surprised at
investment's responsiveness to relative price
changes, given the limited role for relative
factor prices in much recent research on
investment spending. To what extent might
industries absorb exchange rate fluctuations into
their price-cost margins (PCMs) instead of their
input demands? In fact, in a related, unpublished
analysis of industry PCMs, I found that this
price adjustment effect is present in the data:
when the dollar appreciates, domestic PCMs
fall, especially so in durable goods industries.
So it appears that as the relative price of
domestic goods changes, U.S. industries respond
by changing both the price and quantity of
output (hence inputs like capital). Developing
structural models that can distinguish these two
sets of exchange rate effects is an important
area for future research.
Finally, although my results should be
viewed as suggestive only, they do indicate the
potential importance of exchange rate
movements for the future international
competitiveness of U.S. manufacturing
industries. The highly valued dollar of the
1980s may have led to some long run
deterioration in the ability of U.S. industries to
compete in international markets. Between 1978
and 1985, the dollar index rose from .856 to
1.149, an appreciation of 34 percent. Table 2's
coefficient estimates imply that the average
industry's investment rate was .021 lower in
1985 than it would have been in the absence of
the dollar's appreciation. The raw investment
data for the sample show that total investment
spending in 1985 was $61.8 billion. Combining
this figure with appropriate capital stock figures
and Table 2's estimates, I estimate that
investment spending in 1985 was $11.3 billion
less than it would have been had the dollar not
appreciated. The decline in the dollar in recent
years, though not examined directly in this
article, may have reversed this trend, thus
enabling U.S. industries to effectively compete
at home and abroad.
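The back-of-the-envelope numbers in the conclusion can be roughly checked against figures quoted earlier in the article. The full-sample R7GMA coefficient of about -.075 per unit of the index used below is my inference from the ".00075 per 1 percent" statement, not a reported estimate:

```python
# Rough consistency check of the 1978-1985 counterfactual, using
# only numbers quoted in the article; beta is inferred, hypothetical.
idx_1978, idx_1985 = 0.856, 1.149
appreciation = idx_1985 / idx_1978 - 1.0         # about .34
beta_r7gma = -0.075                              # inferred coefficient
delta_i_k = beta_r7gma * (idx_1985 - idx_1978)   # about -.022
```

The implied decline of roughly .022 in the average investment rate is close to the .021 the article reports from its industry-level calculation.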

1. Note that strict application of the "purchasing power
parity" argument implies that e = 1, that is, that changes in
E are simultaneously offset by changes in relative prices.
Consequently, for these arguments to be correct, some sort
of price stickiness must prevent parity from being reached.

2. In other words, I estimate a "reduced form" relationship
between investment and real exchange rates. For an
example of analysis of a structural relationship between
input demands and real exchange rates, see Krieger (1989),
who argues that real exchange rate changes affect factor
demands through two channels. The first is the one
discussed above: an increase in the value of the dollar
causes an increase in the relative price of U.S. goods, thus a
decrease in aggregate derived factor demands. The second
channel involves the sectoral reallocation of resources that
follows an exchange rate shock, regardless of whether the
shock is positive or negative. The two channels are not
mutually exclusive, and distinguishing between the two
requires a structural model.

3. For example, see Branson and Love (1988) and Krieger
(1989).

4. The SIC system assigns all manufacturing establishments
into categories based on the primary activities at the
establishments, and its most often used categories are the
two- and four-digit groupings. Two-digit numbers are used
to denote major groups, such as SIC 20, which is Food and
Kindred Products, while four-digit numbers correspond to
more narrowly defined categories, such as SIC 2011, which
is Meat Packing Plants.

5. Industries in two-digit SIC groups 24, 25, or 32-38 were
labeled durable goods industries; others were placed in the
nondurables group. See Petersen and Strauss (1991).

6. For each of the three variables reported in Table 1, I can
reject at the 5 percent level the hypothesis that the mean is
the same for durable and nondurable goods industries.

7. Using versions of the index based on lagged (as opposed
to current) prices led to results similar to those reported
here.

8. See Hervey and Strauss (1987a) for a discussion of the
appropriate price index to use when constructing a real
exchange rate. Branson and Love (1988) report that using
producer price indexes or more general price indexes made
little difference to their ranking of industries in terms of
their output and employment elasticities with respect to the
real exchange rate.

9. Using fourth quarter values made no qualitative and only
minor quantitative difference to the analysis.

10. See Judge et al. (1985), p. 286. Ordinary least squares
(OLS) estimation is appropriate under the following
assumptions:
(3a) E(ε_it) = 0,
(3b) E(ε_it^2) = σ^2,
(3c) E(ε_it ε_jt) = 0 for i ≠ j, and
(3d) E(ε_it ε_is) = 0 for t ≠ s.
Preliminary analysis suggested that first order serial
correlation was significant and that the autoregressive
parameter differed across four-digit industries, so I
permitted the parameter to vary in the estimation. This
amounts to replacing assumption (3d) by
E(ε_it ε_{i,t-1}) = ρ_i, so that ε is assumed to follow the
autoregressive process
ε_it = ρ_i ε_{i,t-1} + u_it,
where I assume that u is a mean zero, variance σ_u^2 random
variable with no serial or contemporaneous correlation.

11. The reader will notice the low R^2 values in the Table.
Low R^2s are common in pooled time-series cross-sectional
analyses. Estimating a pure time series version of (2), so
that the dependent variable is I_K_t, yields an R^2 of .47,
with coefficient estimates identical to those in the first line
of Table 2. The Durbin-Watson statistic is .98.

12. The classification is taken from DHP (1987).

13. Only the Prais-Winsten estimates are presented.
Specifications that restricted all slope coefficients to be
equal across two-digit groups were rejected by F tests at the
1 percent level. Further, specifications that restricted the
coefficients on A_PGNP (R7GMA) while permitting those
on R7GMA (A_PGNP) to vary were also rejected.

14. It is possible to compute an elasticity of the investment
rate with respect to the dollar index, but the measure is
difficult to interpret. I choose to focus on the absolute
coefficient estimates themselves.

15. Branson and Love (1988) obtain similar results when
analyzing industrial production's response to real exchange
rate changes.

16. This procedure is strictly appropriate only if the variable
used to group industries, here the IMS ratio, is exogenous.



Branson, W.H., and J.P. Love, “The real
exchange rate, employment, and output in
manufacturing in the U.S. and Japan,” National
Bureau of Economic Research, Working Paper
2491, January 1988.
Domowitz, I., R.G. Hubbard, and B.C.
Petersen, “Oligopoly supergames: some
empirical evidence on prices and margins,”
Journal of Industrial Economics, Vol. 35, No. 4,
June 1987, pp. 379-398.
Hervey, J.L., and W.A. Strauss, "The international
value of the dollar: an inflation-adjusted
index," Economic Perspectives, Vol. 11, No. 1,
January/February 1987a, pp. 17-28.
Hervey, J.L., and W.A. Strauss, "Technical
correction: the inflation-adjusted index of the
dollar," Economic Perspectives, Vol. 11, No. 2,
March/April 1987b, pp. 29-31.
Hervey, J.L., and W.A. Strauss, "The new
dollar indexes are no different from the old
ones," Economic Perspectives, Vol. 11, No. 4,
July/August 1987c, pp. 3-22.
Hulten, C.R., and F.C. Wykoff, "The measurement
of economic depreciation," in Hulten,
C.R., ed., Depreciation, Inflation, and the
Taxation of Income from Capital, Washington,
D.C., Urban Institute Press, 1981.


Judge, G.G., W.E. Griffiths, R.C. Hill, H.
Lutkepohl, and T.-C. Lee, The theory and
practice of econometrics, 2nd ed, New York,
John Wiley and Sons, 1985.
Krieger, R., “Real exchange rates, sectoral
shifts, and aggregate unemployment,” Federal
Reserve Board of Governors, Finance and
Economics Discussion Series, Working Paper
92, September 1989.
Petersen, B.C., and W.A. Strauss, “Investment
cyclicality in manufacturing industries,”
Economic Perspectives, Vol. 13, No. 6, November/December 1989, pp. 19-28.
Petersen, B.C., and W.A. Strauss, “The
cyclicality of cash flow and investment in U.S.
manufacturing,” Economic Perspectives, Vol.
15, No. 1, January/February 1991, pp. 9-19.
Shapiro, M.D., "The dynamic demand for
capital and labor," Quarterly Journal of Economics,
Vol. 101, No. 3, 1986, pp. 513-542.
U.S. Bureau of the Census, Annual Survey of
Manufactures, various years.
U.S. Bureau of the Census, Census of Manu­
factures, various years.


Productive efficiency
in banking

Douglas D. Evanoff and
Philip R. Israilevich

Then a new CEO came in who
asked, ... “What do we have to
produce by way of results?"
Every one of his store manag­
ers knew the answer, “We
have to increase the amount each shopper
spends per visit.” Then he asked, “Do any of
our stores actually do this?" Three or four—
out of 30 or so—did it. “Will you then tell us,”
the new CEO asked, “what your people do that
gives you the desired results?”'
In tro d u c tio n

In the above epigraph the managers are
attempting to identify, in a particular context,
the firms which are doing the best job of ac­
complishing the company objectives. Such
firms are known as the best practice firms.
Economists typically make similar inquiries
concerning the production process. They
address the issue by theoretically defining the
best practice firm, empirically identifying it,
determining its resource utilization, and then
evaluating how others compare to it. More
generally, economists, like the new CEO, are
concerned with productive efficiency.
Because of changes taking place in the
banking industry, the importance of efficiency
has increased substantially. As geographic and
product deregulation occurs, the resulting
increase in competition should place banks in a
situation where their success will depend on
their ability to adapt and operate efficiently in
the new environment. Banks unable to do so
will have difficulty surviving.


Most studies of bank efficiency have
concentrated on cost advantages resulting from
the scale of production. In fact, this is probably
one of the most researched topics in banking.2
There are, however, other aspects of efficiency
which students of the industry have just begun
to evaluate. For example, do the producers of
banking services effectively combine their
productive inputs? Once employed, do they
use the inputs effectively? If not, how ineffi­
cient are they? What allows them to continue
to do this and stay in business? Given its importance in the deregulated environment, it is imperative that the various aspects of bank efficiency be understood and empirically measured.

In this article we discuss the concept of
efficiency in production, define its various
aspects and the means to measure it, and review
the relevant literature concerning inefficiency
in the banking industry. Our major conclusion
is that there appears to be significant inefficien­
cy in banking. Inefficiency resulting from
operating at an inappropriate scale of operation
is probably in the range of 10-20 percent of
costs. However, by emphasizing the role of
scale, researchers have essentially overlooked a
major portion of bank inefficiency. The evidence suggests that inefficiencies resulting from the suboptimal utilization of inputs are larger than those resulting from other factors. According to a majority of studies, banks operate relatively efficiently with respect to the optimal combination of inputs, yet many are very inefficient in converting these inputs into outputs. This inefficient utilization of inputs accounts for an additional 20-30 percent of costs. This is particularly interesting because it implies that, to a great extent, the future viability of an individual bank is under its own control. To the extent that bank inefficiency can be accurately measured, it appears that the largest inefficiencies are not the result of regulation or technology, but result directly from an underutilization of factor inputs by bank management. This inefficiency will most likely decline in the future as bankers respond to increased competitive pressures and strive to become more efficient. Failing this, the inefficient firms will become prime merger candidates to be acquired and restructured.

The authors are economists at the Federal Reserve Bank of Chicago. Helpful comments on earlier drafts by Herb Baer, Paul Bauer, Allen Berger, Dave Humphrey, Curt Hunter, Carl Pasurka, and Sherrill Shaffer are gratefully acknowledged. The views expressed, however, are those of the authors and are not necessarily shared by others.
The article proceeds as follows. In the
next section we define, discuss, and illustrate
the components of production efficiency. We
then evaluate the alternative means to generate
measures of efficiency. A review of the litera­
ture on bank efficiency is then presented. The
final section summarizes and evaluates policy
concerns. We have also included an extensive
reference list for readers interested in more
detailed analysis of productive efficiency.
Production efficiency

The economic theory of the firm assumes
that production takes place in an environment
in which managers attempt to maximize profits
by operating in the most efficient manner
possible. The competitive model suggests that
firms which fail to do so will be driven from
the market by more efficient ones. However,
when natural entry barriers or regulation weak­
en competitive forces, inefficient firms may
continue to prosper. That is, true firm behav­
ior may vary from that implied by the competi­
tive model as managers attempt to maximize
their own well-being instead of profits, or find
that they are not required to operate very
efficiently to remain in business.
Variations from productive efficiency can
be broken down into input and output induced
inefficiencies. By input inefficiency we mean
that, for a given level of output, the firm is not
optimally using the factors of production.


Overall input inefficiency resulting from the
suboptimal use of inputs can be decomposed
into allocative and pure technical inefficiency.
Allocative inefficiency occurs when inputs are
combined in sub-optimal proportions. Regula­
tion is typically given as a major reason for this
occurrence. Pure technical inefficiency occurs
when more of each input is used than should
be required to produce a given level of output.
This occurrence is more difficult to explain, but
is typically attributed to weak competitive
forces which allow management to “get away”
with slackened productivity. Combining these
two notions of inefficiency we get the overall
inefficiency resulting from the improper use of
inputs.3 The distinction between the two types
of inefficiency is important because they may
be caused by totally different forces.
Productive efficiency requires optimizing
behavior with respect to outputs as well as
inputs. With respect to outputs, optimal behav­
ior necessitates production of the level and
combination of outputs corresponding to the
lowest per unit cost production process. An
optimal output level is possible if economies
and diseconomies of scale exist at different
output levels. Economies of scale exist if, over
a given range of output, per unit costs decline
as output increases. Increases in per unit cost
correspond to decreasing returns to scale. A
scale efficient firm will produce where there are
constant returns to scale; that is, changes in
output result in proportional changes in costs.
Because it involves the choice of an inefficient
level, scale inefficiency is considered a form of
technical inefficiency. Thus total technical
inefficiency includes both pure technical and
scale inefficiency; that is, inefficient levels of
both inputs and outputs.
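The scale concepts above can be made concrete with a minimal numerical sketch. The quadratic total cost function and all parameter values below are hypothetical, chosen only to illustrate the definitions, not drawn from the banking studies discussed later:

```python
# Classify returns to scale from a hypothetical total cost function
# C(q) = F + c*q + d*q^2; the fixed cost F creates scale economies
# at low output, while the q^2 term creates diseconomies at high output.

def total_cost(q, fixed=100.0, c=2.0, d=0.01):
    return fixed + c * q + d * q * q

def scale_elasticity(q, dq=1e-6):
    """Elasticity of cost with respect to output, (dC/dq) * (q/C).
    Values < 1 indicate economies of scale; > 1, diseconomies."""
    mc = (total_cost(q + dq) - total_cost(q - dq)) / (2 * dq)
    return mc * q / total_cost(q)

def classify(q):
    e = scale_elasticity(q)
    if e < 0.999:
        return "economies of scale"
    if e > 1.001:
        return "diseconomies of scale"
    return "constant returns to scale"
```

With these parameters, average cost is minimized at q = 100, so `classify(100)` reports constant returns while smaller and larger outputs show economies and diseconomies, respectively.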
Additional cost advantages may result from
producing more than one product. For example,
a firm may be able to jointly produce two or
more outputs more cheaply than producing
them separately. If the cost of joint production
is less than the cost resulting from independent
production processes, economies of scope are
said to exist. Diseconomies of scope exist if
the joint production costs are actually higher
than specialized or stand-alone production of
the individual products.
A final point should be mentioned concern­
ing the various categories of inefficiency. Pure
technical inefficiency is entirely under the
control of, and results directly because of, the


behavior of the producer. Output inefficiency
and allocative inefficiency may, from the
perspective of the firm, be unavoidable. For
example, a firm optimally using factor inputs
may find that per unit cost declines over the
entire range of market demand. While increas­
ing production would generate cost savings or
efficiencies, the characteristics of market
demand may not justify it. Failure to exploit
scope advantages may also result from factors
outside of the control of the firm. In banking,
the array of allowable activities is obviously
constrained by regulation. This may preclude
potential gains from the joint production of
various financial services. Finally, as men­
tioned earlier, allocative inefficiency may occur
as a direct result of regulation. For example,
during the 1970s, banks were restricted with
respect to the explicit rates they could pay
depositors. As market rates rose above allow­
able levels, banks frequently substituted implic­
it interest payments in the form of improved
service levels; for example, more offices per
capita or per area, see Evanoff (1988). This
resulted in an over-utilization of physical
capital relative to other factor inputs. In this
case, regulation was the driving force behind
the resulting allocative inefficiency. The point
is that much inefficiency may be beyond the
control of the individual firm.
In the following sections we illustrate the
inefficiencies described above and discuss
alternative methods used to empirically capture
them. The reader who is most interested in an
analysis of efficiency in banking may skip
directly to the section entitled “The role of
production inefficiency in banking: A survey
of the literature.”
Illustrating input efficiency

The notions of input inefficiencies can be illustrated as shown in Figure 1. Assume that x1 and x2 are two factor inputs required to produce a single output, y. Isoquant I-I' depicts various efficient combinations of the two inputs which can be used to produce a specific level of output, y1. Isoquants further to the right correspond to higher levels of output, those to the left to lower levels of output. For example, the output level associated with isoquant II-II' is less than y1. For a given set of input prices, the isocost line, P-P', represents the various combinations of inputs which generate the same level of expenditures. Isocost lines further to the right correspond to higher levels of expenditures on inputs. The slope of the isocost line is, obviously, determined by input prices.

If the objective of the producer is to produce a particular level of output at minimum cost, then the optimal input combination in Figure 1 is at point E. That is, given factor prices, output y1 can be optimally produced by employing x1^e units of input x1 and x2^e units of input x2. Any other combination of the inputs along the P-P' isocost line would generate less output for the same cost. For example, the input combinations corresponding to points W or Z would result in similar expenditures on inputs, but generate the lower level of output associated with isoquant II-II'. Alternatively, the production of y1 using any combination of inputs other than that corresponding to point E would cost more. Therefore, at point E, input efficiency exists.4
To illustrate input inefficiency, suppose that the observed combination of inputs used by a particular firm to produce y1 is at point A in Figure 1. We know that inefficiency exists because E was shown above to correspond to the most efficient combination of inputs to produce y1. Comparing the input utilization at point A to that at E we can derive the level of inefficiency resulting from the suboptimal use of inputs. In order to illustrate allocative and pure technical inefficiency, we have drawn a line from the origin to point A. Along this line different levels of factor inputs are employed but the ratio between the two inputs is fixed at the actual ratio (that is, the ratio at point A). Reference points along this line and on isoquant I-I' and isocost line P-P' are highlighted. Consider allocative inefficiency first. Point C represents a level of costs equal to that of the efficient production process at point E because it is on line P-P'. Point B corresponds to an output level equal to y1 because it is on isoquant I-I'. Therefore, the distance CB corresponds to additional production expenses resulting from the suboptimal allocation of inputs. That is, allocative inefficiency exists because we are not on the isocost line, P-P'. Formally, OC/OB is a measure of allocative efficiency. Values less than 1.0 reflect inefficiency.5
For this same example, we can also depict pure technical inefficiency resulting from producing at point A. We have seen that producing y1 using x1^a and x2^a involves allocative inefficiency because point A is to the right of line P-P' and ray OA does not go through point E. However, there is additional inefficiency because point A is above isoquant I-I'. That is, the combination of inputs associated with point A should enable the firm to produce a level of output greater than y1. (It should be able to produce output y3 corresponding to isoquant III-III'.) Given that the isocost line depicts total expenditures used in production, distance CA constitutes a less than optimal usage of all inputs and corresponds to additional production expenses. Therefore, overall input inefficiency is measured as OC/OA. Because OC/OB is attributed to allocative inefficiency, the remaining portion, OB/OA, can be attributed to pure technical inefficiency. Since these are radial measures, overall input inefficiency is the product of the two subcomponents, that is, OC/OA = (OC/OB) · (OB/OA).
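The radial decomposition can be illustrated numerically. The sketch below assumes a hypothetical Cobb-Douglas technology with unit input prices; the observed bundle at point A and the resulting efficiency scores are illustrative only:

```python
# Farrell radial decomposition of input inefficiency under a
# hypothetical Cobb-Douglas technology y = sqrt(x1 * x2), with
# both input prices equal to 1.

def output(x1, x2):
    return (x1 * x2) ** 0.5

xA = (8.0, 2.0)          # observed (inefficient) bundle at point A
y1 = 2.0                 # output actually produced
prices = (1.0, 1.0)

# Point B: radial contraction of A onto the isoquant for y1.
# Output is homogeneous of degree 1, so f(t * A) = t * f(A).
t_B = y1 / output(*xA)                              # OB/OA
# Point E: cost-minimizing bundle; with equal exponents and equal
# prices, x1 = x2 = y1, so minimum cost is 2 * y1.
cost_E = 2 * y1
cost_B = t_B * (prices[0] * xA[0] + prices[1] * xA[1])
t_C = cost_E / cost_B * t_B                         # OC/OA

pure_technical_eff = t_B          # OB/OA
allocative_eff = t_C / t_B        # OC/OB
overall_eff = t_C                 # OC/OA = (OC/OB) * (OB/OA)
```

Here the firm is 50 percent pure-technically efficient and 80 percent allocatively efficient, for an overall input efficiency of 0.4, matching the multiplicative decomposition in the text.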
The pure technical inefficiency shown in Figure 1 can also be illustrated in terms of output, instead of input, using a total output or total product relationship as depicted in Figure 2. The ratio of input usage, x1/x2, is held fixed by assumption in Figure 2 to represent input combinations along the ray OA in Figure 1. Since the fixed input ratio precludes the analysis of allocative efficiency, we are analyzing only pure technical efficiency. Because changes in inputs result in proportional changes in output (the total product curve is linear) we have constant returns to scale as was assumed in Figure 1.

[Figure 2: Pure technical efficiency measured in terms of outputs]

Employing x1^b units of input x1 we could produce an output level y1 if the inputs were fully utilized. This corresponds to point B in Figure 1. Similarly, using x1^a units of input x1 we should be able to produce y3. Again, this corresponds to point A in Figure 1. However, if inputs are not used effectively, that is, if technical inefficiency exists, the resulting production point will be below the total product curve. That is, pure technical inefficiency occurs when we operate beneath the total product relationship. For example, the pure technical inefficiency depicted in Figure 1 corresponds to that found at point G in Figure 2, where inputs are under-utilized and x1^a only generates an output level of y1. If we are producing y1 at point A in Figure 1 or, equivalently, at point G in Figure 2, pure technical inefficiency is measured with respect to inputs as OB/OA and with respect to outputs as AG/AM. The inefficiency measures are equivalent. This illustration is important because it indicates that technical inefficiency can be measured in terms of either inputs or outputs. Below we drop the constant returns to scale assumption and expand on this output inefficiency measure.
Illustrating output efficiency

Point E in Figure 1 corresponds to the least cost, most efficient means to produce y1. However, because of particular characteristics of the production technology, this level of output may not be the optimal one to produce. For example, it may be that over a certain range of outputs, economies of scale exist. Production efficiency, therefore, requires optimal decisions concerning both input and output levels. In Figure 3 we have dropped the assumption of constant returns to scale. The production process is now characterized by increasing returns up to point R, constant returns at R, and decreasing returns at output levels above R. Now the firm corresponding to point G in Figure 3 is technically inefficient for two reasons. First, there is pure technical inefficiency resulting from the under-utilization of inputs; that is, we are beneath the total product curve. If inputs are fully utilized, input x1^a should produce the higher output level corresponding to point M, that is, y3. Second, we have decreasing returns to scale at the current level of output since the production process is not represented as the linear relationship OH. The output not produced because of scale inefficiency can be measured as HM. This output is what could have been produced if inputs were used efficiently and constant returns to scale existed at this output level. Therefore, for the input usage depicted at point A, the input efficient firm could produce at point M, and the input and scale efficient firm could produce at point H. As explained above, scale inefficiency is generally considered a form of technical inefficiency because it involves the choice of an inefficient output level. Thus, total technical inefficiency includes pure technical and scale inefficiency; that is, inefficiency in the use of both inputs and outputs.

Figure 4 depicts the reference points just discussed in Figure 3 in terms of production cost. The total product relationship in Figure 3 corresponds to the average cost relationship depicted in Figure 4. Points H and R each correspond to constant returns to scale and, therefore, correspond to minimum points on average cost relationships. Total technical inefficiency can be depicted here as the ratio of the average costs. For the example just discussed, total technical inefficiency is equal to AC_H/AC_G. The alternative measures of inefficiency illustrated in Figures 1 through 4 are equivalent and correspond to the alternative means of calculating inefficiency estimates commonly cited in the literature.
In the above discussion we assumed the production of a single output for illustrative purposes. Additional cost advantages may result from multiproduct production. For example, economies may exist for the joint production of two or more outputs, relative to the stand-alone production of the individual products. That is, scope advantages may exist. More formally, economies of scope exist in the joint production of Q1 and Q2 if

(1) [C1 + C2] > C12,

where C1 and C2 are the costs of producing Q1 and Q2 independently (that is, as stand-alone processes), and C12 is the cost of joint production. With multiproduct production, some fixed cost of production can be spread across the outputs and there may be synergies when the two products are produced jointly. A multiproduct cost relationship which exhibits production synergies between the two outputs, y1 and y2, is illustrated in Figure 5. Joint production moves the cost off the “lip” of the relationship onto the inner surface. Potential cost gains obviously exist.
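The scope comparison in Equation (1) reduces to a one-line calculation; a minimal sketch with hypothetical cost figures (the normalization by joint cost is one common convention, not the only one):

```python
# Degree of scope economies from stand-alone costs C1, C2 and joint
# cost C12, per Equation (1): positive values mean joint production
# is cheaper (economies of scope); negative values, diseconomies.

def scope_economies(c1, c2, c12):
    return (c1 + c2 - c12) / c12

joint_cheaper = scope_economies(60.0, 50.0, 100.0)   # 0.1 > 0
stand_alone_cheaper = scope_economies(40.0, 40.0, 90.0)  # < 0
```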

Measuring production inefficiencies

The relationships depicted in the above
figures, as well as all standard textbook presen­
tations of the production process, present
extreme values; that is, the maximum output
that can be produced from a given set of inputs,
or the minimum cost required to produce a
given level of output. However, when attempts
are made to generate estimates of the produc­
tion process we typically abstract from the
extreme values. The traditional approach to
evaluating the production process is to assume
the standard competitive model is appropriate
and to estimate an average production, cost, or
profit function.6 Realizing that this restrictive
model may not adequately describe the production process (and definitely avoids efficiency issues), methods have been developed which
allow for variations in this approach. We
discuss these variations in this section. The
methodologies differ from each other in a
number of ways, not the least of which is a
result of differences in assumptions imposed in
the analysis. The restrictiveness of these
assumptions is determined by the individual
data sets. Each of the methods discussed here
is superior to the basic competitive model as
long as the assumptions employed are correct.
More will be said about this later.
While the concept of firm efficiency is
rather straightforward, various difficulties are
encountered when attempting to measure it.
Essentially, one needs to derive the best prac­
tice firm, or the production frontier which
depicts the maximum performance possible by
firms, and contrast existing firms to this stan­
dard. Ideally, we would compare firm perfor­
mance to the true frontier; however, the best
that can be achieved is an empirical frontier or
best practice firm generated from the observed
data. Once the best practice firm is established,
input related pure technical and allocative
efficiency, and output related scale and scope
efficiency, can be analyzed. For example,
assuming constant returns to scale in Figure 1,
all firms can be compared to one producing at
point E.
Differences in estimates of firm efficiency
typically result from different means of gener­
ating the best practice firm. There are two
general approaches used to model this relation­
ship. First, the parametric or econometric
approach employs statistical methods to esti­
mate an efficient cost frontier. Second, the
nonparametric or deterministic approach is
based on the linear programming approach for
optimal allocation of resources called data
envelopment analysis (DEA). This technique is
used to directly generate individual frontiers for
each firm. Below we discuss alternative meth­
odologies within these two broad categories.
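In its simplest special case, with one input, one output, and constant returns to scale, DEA needs no linear program: the empirical frontier is just the best observed output-to-input ratio. The sketch below uses invented bank data; the general multi-input, multi-output model instead requires solving a small linear program per firm:

```python
# Data envelopment analysis (DEA) in its simplest form: single input,
# single output, constant returns to scale. Each firm's efficiency is
# its output/input ratio relative to the best observed ratio, which
# defines the empirical (best practice) frontier.

def dea_crs_efficiency(inputs, outputs):
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)                 # the best practice firm
    return [r / best for r in ratios]  # 1.0 = on the frontier

banks_x = [10.0, 20.0, 30.0]   # e.g., labor employed (hypothetical)
banks_y = [5.0, 16.0, 18.0]    # e.g., loans produced (hypothetical)
scores = dea_crs_efficiency(banks_x, banks_y)
```

Here the second bank defines the frontier and the others are measured against it, mirroring the comparison to point E described above.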
It should be emphasized that empirical
measures of inefficiency are no different from
estimated parameters in any economic model.
The model may mistakenly interpret measurement errors or specification errors as productive inefficiency. As the literature on banking
develops, more comprehensive models should
be analyzed.


Parametric approach: Shadow price models

To generate estimates of allocative effi­
ciency, one can use the parametric approach
developed by Lau and Yotopoulos (1971) and
refined by Atkinson and Halvorsen (1980,
1984).7 This method assumes that firms are
combining the factor inputs correctly, but that
the combination is not necessarily based on
observed prices. Rather, there are factors in
addition to explicit market prices which enter
the firm’s employment decision process. These
additional factors are combined with the explic­
it prices to generate shadow prices which are
more comprehensive and which determine
factor utilization. These additional factors
typically include distortions induced by union­
ism, regulation, or managerial goals other than
profit maximization. These alternative goals
may include profit satisficing or expense
preference behavior.8
More formally, a basic contention of economic theory is that, in competitive markets, the optimal level of employment for each factor of production can be determined by employing additional units until the last dollar spent on each factor yields the same amount of productivity. That is,

(2) f_i/P_i = f_j/P_j, for i ≠ j = 1, ..., m,

where f_i = ∂f/∂X_i is the marginal product of input i, and P_i is the price of input i, or

(3) f_i/f_j = P_i/P_j, for i ≠ j = 1, ..., m,

where f_i/f_j is the marginal rate of technical substitution between the inputs. This relationship corresponds to the tangency of the isoquant and the isocost curve (point E) in Figure 1.

Given input prices and the predetermined level of output as the only constraint, the optimal combination of inputs, as in Equation (3), can be derived to minimize cost. However, if additional constraints exist (for example, regulatory constraints), they need to be accounted for and incorporated into the optimization process. Concerning the employment decision, Equation (3) becomes

(4) f_i/f_j = P*_i/P*_j, for i ≠ j = 1, ..., m,

where P*_i is the effective or shadow price of input i, and the marginal rate of technical substitution between the inputs is set equal to the ratio of the shadow prices of the inputs. Given competitive markets and the absence of additional binding constraints, shadow and actual prices are equal and the employment decision is not affected.

Because the shadow prices of the inputs are not directly observable, Lau and Yotopoulos developed a means to estimate them along with other parameters of the cost relationship. Assuming shadow prices are proportional to market prices, shadow prices can be approximated by

(5) P*_i = k_i P_i, for i = 1, ..., m,

where k_i is input-specific. Again, if the additional constraints are not binding, all shadow prices equal the respective market prices, that is, k_i = 1 for all i.

Standard econometric techniques can then be used to generate cost estimates employing the additional information. That is, the standard cost structure

(6) C = C(P, Q, Z),

where C depicts costs, Q outputs, P explicit factor prices, and Z additional pertinent exogenous variables, is replaced with

(7) C_s = C_s(kP, Q, Z),

where kP denotes shadow factor prices, and k is estimated along with the other parameters in the cost function.
The shadow price model also allows one to
calculate the optimal (unobserved) input combi­
nation given observed prices, P. This combina­
tion is relevant for measuring the cost differ­
ences resulting from production under competi­
tive conditions and those when additional
binding constraints exist. In the banking
industry, these additional constraints are typi­
cally thought to be regulatory induced. The cost differences can be determined by contrasting costs when market prices equal shadow prices (k = 1) to those found using the estimated shadow prices (k = k̂, where k̂ denotes the estimated value of k). The difference between the two cost values will be the result of combining inputs in a suboptimal manner.
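This cost comparison can be sketched numerically. The sketch assumes a hypothetical Cobb-Douglas technology, and the shadow-price markups k are invented for illustration; nothing here reproduces an estimated banking cost function:

```python
# Allocative cost of shadow-price behavior under a hypothetical
# Cobb-Douglas technology y = x1^a * x2^(1-a): the firm picks the
# input mix that is optimal at shadow prices k_i * P_i, but its
# expenditure is evaluated at market prices P_i.

def input_demands(p1, p2, y, a=0.5):
    """Cost-minimizing conditional input demands for y = x1^a * x2^(1-a)."""
    x1 = y * (a / (1 - a) * p2 / p1) ** (1 - a)
    x2 = y * ((1 - a) / a * p1 / p2) ** a
    return x1, x2

def actual_cost(p1, p2, y, k1, k2, a=0.5):
    """Market-price expenditure when the mix is chosen at shadow prices."""
    x1, x2 = input_demands(k1 * p1, k2 * p2, y, a)
    return p1 * x1 + p2 * x2

p1, p2, y = 1.0, 1.0, 4.0
c_min = actual_cost(p1, p2, y, 1.0, 1.0)     # k = 1: constraints not binding
c_shadow = actual_cost(p1, p2, y, 2.0, 1.0)  # input 1 behaves twice as costly
excess = c_shadow / c_min - 1.0              # share of cost lost to misallocation
```

With these numbers, behaving as if input 1 were twice as expensive raises cost about 6 percent above the minimum, which is the allocative inefficiency the shadow price model measures.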
Estimation of the cost function will yield k values which can be considered to reflect the effect of binding constraints on average. Ideally, the k_i measure would be firm specific. However, statistical problems typically make this prohibitive in terms of the degrees of freedom required for the estimation procedure. All the parametric approaches cited below have this same shortcoming. Some progress toward resolving this shortcoming has recently been made; see Evanoff and Israilevich (1991).

One of the advantages of the shadow price model approach is that it allows for the estimation of returns to scale and scope along with allocative efficiency. However, pure technical efficiencies cannot be measured by this approach although, as discussed later, this shortcoming can also be partially resolved.
Parametric approach: Stochastic cost frontiers

Another more comprehensive parametric
approach for measuring efficiency is to use
stochastic frontier models. With this approach,
the cost frontier is empirically estimated and
firm specific deviations from the frontier are
attributed to productive inefficiencies. A
number of alternative parametric techniques
can be used to generate the frontier. The major
difference between these techniques is in the
maintained assumptions which, obviously, can
produce significantly different results. The
restrictiveness of these assumptions is deter­
mined by the individual data sets. Here we
summarize alternative parametric methods used
to develop the frontier.
Using a parametric approach, the standard
cost structure is typically generated by impos­
ing a specific functional form on the data and
obtaining the “best fit” by minimizing devia­
tions from the estimated structure. For exam­
ple, the estimated total cost relationship may be
fitted to the data to produce a relationship such
as TC in Figure 6. However, when evaluating efficiency, we are interested in the best practice firm or the cost frontier. We are not interested in the average relationship; rather, we are looking for minima in the data. Therefore, adjustments to the standard estimation procedures are required.

[Figure 6: Total cost relationship]

Typically the standard parametric procedure is adjusted by employing a more complex error structure. A “composed” error can be used which consists of two components: one is the standard statistical noise which is randomly distributed about the relationship, and the other consists entirely of positive deviations from the cost structure (that is, a one-sided disturbance term) and represents inefficiency. Stated crudely, the resulting frontier is simply a transformation of TC in Figure 6 (shifted downward) to generate the best cost relationship instead of the average one.
For example, and more formally, assume a stochastic frontier model which consists of the following cost and share equations:

(8) ln C_h = ln C_F + ln T_h + ln A + u_h;

(9) M_ih = M_iF + b_i + u_ih, for i = 1, ..., m;

where ln denotes the natural log, and C_h and M_ih are observed cost and factor shares for firm h. C_F is the lowest production cost relationship or the cost frontier, ln T_h reflects additions to cost resulting from pure technical inefficiency, ln A reflects additions to cost resulting from allocative inefficiency, and u_h is a random error. M_iF is the efficient share equation, b_i depicts share distortions resulting from allocative inefficiency, u_ih captures random distortions from efficient shares, and b_i + u_ih together form the composed error term.
Measures of technical inefficiency are calculated as firm specific deviations from the frontier and are derived from the additional error term discussed above. Since technical inefficiency can result only in increases to total cost, this error structure must consist entirely of non-negative values. That is, this component of the error structure is one-sided relative to the frontier. Choice of a specific one-sided distribution could obviously influence the empirical results.
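The composed error is easy to simulate. The sketch below assumes a half-normal inefficiency term, one common one-sided choice, with arbitrary variance parameters; it shows that average inefficiency is recoverable from the mean residual because the symmetric noise averages to zero:

```python
import math
import random

# Simulate a composed error: symmetric noise v plus a one-sided
# (half-normal) inefficiency term u >= 0, as in a stochastic cost
# frontier. The half-normal mean is sigma_u * sqrt(2/pi), so average
# inefficiency can be recovered from the mean residual.

random.seed(7)
sigma_v, sigma_u = 0.10, 0.25
n = 200_000

residuals = [random.gauss(0, sigma_v) + abs(random.gauss(0, sigma_u))
             for _ in range(n)]

mean_ineff_true = sigma_u * math.sqrt(2 / math.pi)   # about 0.1995
mean_ineff_est = sum(residuals) / n                  # sample counterpart
```

Distinguishing u from v for an individual firm requires distributional assumptions, which is exactly why the choice of the one-sided distribution matters.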
As with the shadow price model, allocative inefficiency is computed as an average for the sample and is not firm specific. ln A is nonnegative, as deviations from use of the optimal combination of inputs can lead only to additions to cost. However, b_i can be positive or negative, suggesting over- or under-utilization of a particular input.

Obviously, ln A and b_i are related because suboptimal combinations of factor inputs (b_i ≠ 0) result in additions to cost. However, empirically modeling this relationship is problematic. One standard means to do it is to impose restrictions on the relationship reflecting prior knowledge. For example, assuming increased costs occur only when mistakes are made (ln A = 0 only when b = 0), and that large mistakes cost more than small ones, one can impose a relationship between allocative mistakes and cost increases:

(10) ln A = b′Fb,

where F is a diagonal matrix with nonnegative elements. Positive elements of F represent weights for each b_i. For example, f_i represents the relative effect of allocative distortions from factor i on the increased production costs. To summarize, the additional cost of allocative inefficiency is a weighted sum of squared mistakes from the misallocation of each input. The (nonnegative) weights are additional parameters to be estimated.
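With F diagonal, Equation (10) is just a weighted sum of squared mistakes; a minimal sketch with hypothetical b and F values (in practice the weights are estimated, not chosen):

```python
# The allocative cost penalty ln A = b'Fb as a weighted sum of
# squared allocative mistakes, with F diagonal and nonnegative.
# All numbers here are hypothetical.

def ln_A(b, f_diag):
    assert all(f >= 0 for f in f_diag), "weights must be nonnegative"
    return sum(f * bi * bi for f, bi in zip(f_diag, b))

b = [0.10, -0.05, 0.0]   # over-, under-, and correctly used inputs
f = [2.0, 1.5, 3.0]      # weights (estimated parameters in practice)
penalty = ln_A(b, f)     # zero only when every b_i is zero
```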
An alternative approach to generate a cost
frontier is to utilize a cost structure consisting
of cost and share equations, but to sever the
link between the error terms of the cost and
share equations. That is, the share equations
are used only for efficiency gains in parameter
estimation; not to link suboptimal combinations
of inputs to increases in cost. Under this


approach, both allocative and technical ineffi­
ciencies are depicted as one-sided errors from
the cost frontier. Therefore, the estimated
system of Equations 8 and 9 becomes

(11) ln C_h = ln C_F + v_h + u_h,

(12) M_ih = M_iF + u_ih, for i = 1, ..., m,

where the error term depicting inefficiency, v_h, can be decomposed into its two components (that is, ln T + ln A) using techniques developed by Kopp and Diewert (1982) and refined by Zieschang (1983). This approach essentially
ignores information concerning the relationship
between disturbances in the cost and share
equations, but is easier to work with than the
above approach and does not necessarily
generate results inferior to more complicated
linkage approaches. This is particularly true if
the more complicated approach, which is
typically based on a set of untested assump­
tions, incorrectly models the linkage.
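The severed-link system of Equations (11) and (12) treats inefficiency as a one-sided error added onto the cost frontier. The simulation below is only an illustration of that error structure; the half-normal distribution and all parameter values are assumptions, not part of any cited model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

lnC_frontier = 1.0                    # assumed frontier (minimum) log cost

# v_h: one-sided inefficiency term; half-normal, so it can only raise cost.
v = np.abs(rng.normal(0.0, 0.2, n))
# u_h: symmetric noise (luck, measurement error), mean zero.
u = rng.normal(0.0, 0.1, n)

lnC_observed = lnC_frontier + v + u   # Equation (11)

# The symmetric term washes out on average; the one-sided term raises mean
# observed log cost by E|N(0, 0.2^2)| = 0.2*sqrt(2/pi), about 0.16.
print(lnC_observed.mean() - lnC_frontier)
```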
This attempt to simplify the methodology
brings us to the most recent approach intro­
duced by Berger and Humphrey (1990). These
authors take the view that the preceding meth­
odologies impose rather restrictive ad hoc
assumptions concerning the data, the validity of
which are questionable. For example, the
assumed linkage between the error structures in the cost and share equations, discussed above, could be inaccurate, as could the assumptions concerning the one-sided error distribution. To
partially remedy these problems the authors
developed a “thick frontier” approach. Instead
of imposing restrictive characteristics on the
cost relationship to generate a true frontier or
frontier edge, a thick frontier is estimated from
a subsample of the data which, based on a
priori information, is considered to be an
efficient subgroup. This group is then com­
pared to another group which, based on a priori
information, is considered an inefficient sub­
group. Therefore the authors are able to relax
the restrictive assumptions employed in the
methodologies discussed above, but at the cost
of using a somewhat ad hoc means to categorize the data into efficient and inefficient subgroups.
This approach was implemented using
banking sector data by assuming subgroups
could be delineated based on their average cost


per dollar of output. The data were then strati­
fied by size and divided into quartiles and the
lowest and highest cost quartiles were contrast­
ed. After accounting for differences resulting
from market characteristics, the remaining
differences between the two groups were
assumed to constitute inefficiency. This can be
distributed into its allocative and technical
components using procedures similar to those
of Kopp and Diewert discussed above. Obvi­
ously, this approach lacks precision and also
imposes some rather ad hoc assumptions to
develop the subgroups and produce the frontier.
However, the assumptions may be less restric­
tive than those made in the more elaborate
models discussed above. In fact, some of the
maintained assumptions in these models were
statistically tested and rejected by Berger and
Humphrey. As a result, the relatively easy-to-implement approach may perform quite well in
generating a rough measure of the extent of
production inefficiency in an industry.1
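The mechanics of the thick frontier comparison are straightforward. A rough sketch with simulated cost data follows; the cost distribution is an assumption, and the size stratification step used by Berger and Humphrey is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical average cost per dollar of output for 400 banks.
avg_cost = 0.03 + np.abs(rng.normal(0.0, 0.01, size=400))

# Rank banks by average cost; the lowest-cost quartile is the "thick
# frontier," contrasted with the highest-cost quartile.
order = np.argsort(avg_cost)
q = len(order) // 4
low_cost = avg_cost[order[:q]]     # presumed efficient subgroup
high_cost = avg_cost[order[-q:]]   # presumed inefficient subgroup

# Raw cost gap between the quartiles; the part of this gap not explained
# by market characteristics is then attributed to inefficiency.
gap = high_cost.mean() / low_cost.mean() - 1
print(f"high-cost quartile costs {gap:.0%} more per dollar of output")
```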
Nonparametric approach

While intuitively appealing, and somewhat
similar to the procedures commonly used to
estimate standard cost relationships, the para­
metric approaches have been criticized for
requiring more information than is typically
available for estimation of the cost frontier. In
an attempt to decrease the required information,
some have chosen to use a nonparametric,
linear programming approach known as data
envelopment analysis (DEA).
Although there are various permutations to
the DEA approach, the basic objective is to
“envelop” the data by producing a piecewise
linear fit via linear programming techniques.
That is, instead of using regression techniques
to fit a smooth relationship, a piecewise linear
surface is produced which borders the observa­
tions, for example, the broken line qo in
Figure 7. The technique identifies observations
for which the firm is producing a given level of
output with the fewest inputs. These will be
observations on the frontier. All other observa­
tions will be given an efficiency measure based
on the distance from the frontier and indicating
the extent to which inputs are being effectively
utilized. This is comparable to the measure of
pure technical inefficiency, OB/OA, for obser­
vation A in Figure 1.
The technique allows for the derivation of
a frontier for each firm in the sample based on



FIGURE 7
DEA measured efficiency

the output and input utilization of all firms in the
sample. As a simple example for the two input,
one output case, the linear programming problem
for technical inefficiency could be written as
(13) Min θ^A
subject to
q^A ≤ μ_1 q^1 + μ_2 q^2 + ... + μ_n q^n,
θ^A x_1^A ≥ μ_1 x_1^1 + μ_2 x_1^2 + ... + μ_n x_1^n,
θ^A x_2^A ≥ μ_1 x_2^1 + μ_2 x_2^2 + ... + μ_n x_2^n,
μ ≥ 0,
where θ^A is the fraction of the actual inputs which could be used to optimally produce the given level of output, q^A, for observation A; x_1 and x_2 are quantities of the two inputs; the μ's are weights generated for each observation via the linear programming optimization process to obtain the optimal value for θ; A is the observation we are evaluating; and superscripts denote individual firms. Again, θ^A = OB/OA for firm A in Figure 1 or Figure 7. Therefore, we are finding the lowest fraction of the inputs used which would produce an output level at least as great as that actually produced by firm A.
Additional linear programs can be solved to
derive allocative inefficiency. A more complete
description and an example of DEA analysis
which has been applied to the banking industry
is presented in the Box.
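For the two input, one output case, the program in Equation (13) can be solved with an off-the-shelf linear programming routine. The sketch below uses scipy.optimize.linprog on a four-firm data set that is entirely hypothetical:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical sample: one output, two inputs, four firms (columns).
q = np.array([1.0, 1.0, 1.0, 1.0])       # output of each firm
x = np.array([[2.0, 4.0, 4.0, 6.0],       # input 1 used by each firm
              [4.0, 2.0, 4.0, 6.0]])      # input 2 used by each firm

def dea_theta(a):
    """Input-oriented technical efficiency of firm a, as in Equation (13):
    min theta  s.t.  q_a <= sum_h mu_h q_h,
                     theta * x_ja >= sum_h mu_h x_jh,  mu >= 0."""
    n = len(q)
    c = np.zeros(n + 1)
    c[0] = 1.0                            # minimize theta
    # -sum_h mu_h q_h <= -q_a  (frontier output at least q_a)
    A_ub = [np.concatenate(([0.0], -q))]
    b_ub = [-q[a]]
    for j in range(x.shape[0]):           # frontier inputs at most theta*x_ja
        A_ub.append(np.concatenate(([-x[j, a]], x[j])))
        b_ub.append(0.0)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * (n + 1))
    return res.x[0]

# Firms 0 and 1 lie on the frontier (theta = 1); firm 2 could produce its
# output with 75 percent of each input (theta = 0.75).
print([round(dea_theta(a), 3) for a in range(4)])
```

Each firm's efficiency is found by comparing it against the best weighted combination of all firms in the sample, exactly as the text describes.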


Example of a data envelopment analysis (DEA) program applied to banking

Technical inefficiency is measured as the difference between the observed behavior of bank A and that which would occur if bank A were on the production frontier. Therefore, the unobserved frontier must be projected. This is done via DEA analysis by developing a program which determines the minimum required amount of inputs necessary for bank A to produce as much, or more, of each of the outputs currently being produced. The input vector is chosen based on the observed behavior of the sample firms. Again, this reduces to a linear programming problem. For example, for bank A, the technically efficient combination of inputs is determined as

(1) Min θ^A
subject to
q_i^A ≤ Σ_h μ^h q_i^h,          i = 1, ..., m,
θ^A x_j^A ≥ Σ_h μ^h x_j^h,      j = 1, ..., n,
z_s^A ≤ Σ_h μ^h z_s^h,          s = 1, ..., s_r,
z_s^A ≥ Σ_h μ^h z_s^h,          s = s_r + 1, ..., S,
μ^h ≥ 0,  h = 1, ..., H,  Σ_h μ^h = 1,

where θ^A is our radial measure of technical efficiency for firm A, q_i is the output vector, μ^h is a vector of weights assigned to each observation (an intensity vector) which determines the combination of technologies of each firm to form the production frontier, x_j^h is the observed amount of input j used by firm h, and z is a vector of additional exogenous variables.1 There are two types of these exogenous variables: those that need to be maximized, z_s^h for s = 1, ..., s_r, and those that should be minimized, z_s^h for s = s_r + 1, ..., S. An example of these exogenous variables in banking would be the number of branch offices. Banks would, ceteris paribus, want to minimize the number of branch offices required to provide a given level of output. The output of each firm in the sample is weighted in such a way that the combination of observed outputs is not less than the output actually produced by firm A. Thus the frontier for firm A is constructed as a weighted technology from the sample. If θ^A = 1, then firm A is as efficient as any firm in the sample; that is, firm A is on the frontier. If θ^A < 1, then firm A is inefficient.

Allocative inefficiency for firm A can be derived by determining overall inefficiency and technical inefficiency, and then taking the difference between the two. To determine overall inefficiency, take the observed input prices w^A faced by bank A and assume cost minimizing behavior:

(2) Min (over x^A) Σ_j w_j^A · x_j^A
subject to
q_i^A ≤ Σ_h μ^h q_i^h,          i = 1, ..., m,
x_j^A ≥ Σ_h μ^h x_j^h,          j = 1, ..., n,
z_s^A ≤ Σ_h μ^h z_s^h,          s = 1, ..., s_r,
z_s^A ≥ Σ_h μ^h z_s^h,          s = s_r + 1, ..., S,
μ^h ≥ 0,  h = 1, ..., H,  Σ_h μ^h = 1.

The optimization process determines the minimum input vector, x*, for the observed price vector w^A. The scalar w^A · x* is the minimum production cost for the vector of outputs q^A. Overall inefficiency for any firm, h, is therefore the ratio of the costs of the observed and the best practice bank:2

(3) O^h = (w^h · x^h)/(w^h · x^h*) − 1.

The difference between the costs of technically efficient production and overall efficient production determines the cost resulting from allocative inefficiency. That is, A^h = [(w^h · θ^h* · x^h)/(w^h · x^h*)] − 1 is an index of allocative inefficiency for firm h, where θ^h* is the optimal value of θ^h determined in Equation (1).3

1The sum of the weights μ^h used in the optimization process is restricted to unity to allow for varying returns to scale. See Afriat (1972). The appropriate number of constraints for exogenous variables is difficult to determine and the estimated inefficiency for a given model typically varies inversely with the number chosen.
2The inequality in the linear program implies free disposability of both inputs and outputs.
3Technical inefficiency, determined in Equation 1, obviously is the difference between overall and allocative inefficiency: T^h = O^h − A^h.
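Once the two programs in the box have been solved, the decomposition into overall, allocative, and technical components is simple arithmetic. The numbers below are purely illustrative; in practice w, x, x*, and θ* would come from the data and from Equations (1) and (2):

```python
# Illustrative decomposition following Equations (1)-(3) of the box.
w = [1.0, 2.0]            # observed input prices
x = [6.0, 4.0]            # observed input quantities
x_star = [4.0, 3.0]       # cost-minimizing input vector from Equation (2)
theta_star = 0.8          # optimal radial contraction from Equation (1)

cost = sum(wi * xi for wi, xi in zip(w, x))            # observed cost, 14.0
min_cost = sum(wi * xi for wi, xi in zip(w, x_star))   # best practice, 10.0

O = cost / min_cost - 1                 # overall inefficiency, Equation (3)
A = theta_star * cost / min_cost - 1    # allocative component
T = O - A                               # technical component (footnote 3)
print(O, A, T)
```

With these numbers the bank's costs are 40 percent above best practice, of which 12 percentage points are allocative and 28 are technical.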


Comparison of the parametric and nonparametric approaches

Using either the parametric or DEA ap­
proach, the goal is to generate an accurate
frontier. However the two methods use signifi­
cantly different approaches to achieve this
objective. Because the parametric approach
generates a stochastic cost frontier and the DEA
approach generates a production frontier, and
because the methodologies are fundamentally
different, one should expect differences in the
efficiency projections. Which methodology is preferable remains an open question.
There are advantages and disadvantages
with each of the procedures. The parametric
approach for generating cost relationships
requires (accurate) information on factor prices
and other exogenous variables, knowledge of
the proper functional form of the frontier and
the one-sided error structure (if used), and an
adequate sample size to generate reliable
statistical inferences. The DEA approach uses
none of this information; therefore, less data is
required, fewer assumptions have to be made,
and a smaller sample can be utilized.1 Howev­
er, statistical inferences cannot be made using
the nonparametric approach.
Another major difference is that the para­
metric approach includes a random error term
around the frontier, while the DEA approach
does not. Consequently, the DEA approach
will count the influence of factors such as
regional factor price differences, regulatory
differences, luck, bad data, extreme observa­
tions, etc., as inefficiency.1 Therefore, one
would expect the nonparametric approach to
produce greater measured inefficiency.1 The
importance of this difference should not be
understated because single outliers can signifi­
cantly influence the calculated efficiency
measure for each firm using the DEA approach.
Obviously, one would like to be able to
take comfort in the fact that either approach
generates similar results. This is more likely to
occur if the sample analyzed has homogeneous
units which utilize similar production process­
es. However, similar results have not been
found in the literature. In fact, it is common for
studies contrasting results produced from the
two methodologies to find no correlation
between the efficiency estimates. This has also
occurred in studies of efficiency for the banking
sector. We next review some of that literature.

The role of production inefficiency in banking: A survey of the literature

In this section we review the literature on
productive efficiency for financial institutions.
Most of the studies reviewed, particularly those
analyzing input efficiency, were conducted
recently and involve flexible functional forms
and state of the art research techniques. For a
more comprehensive review of much of the
earlier literature on output efficiency, which
typically utilized somewhat restrictive function­
al forms and single output measures, the reader
is referred to Gilbert (1984).


Output efficiency

The production process has been one of the
most extensively investigated topics in banking.
A major purpose of most of these studies has
been to obtain estimates of scale elasticities,
that is, to evaluate how bank costs change with
changes in the level of output.1 More recently,
efforts have also been made to estimate econo­
mies of scope; that is, advantages from the joint
production of multiple outputs.
Concerning scale economies, if changes in
bank costs are proportional to changes in output
then the scale elasticity measure equals 1.0 and
all cost advantages resulting from the scale of
production are being fully exploited. If the
changes are not proportional, that is, varying
returns to scale exist, then efficiency gains
could be obtained by leaving the production
process unchanged, but altering the quantity of
output produced. Scale elasticities less than
one imply that increases in output would
produce less than proportional increases in
costs. Efficiency gains, therefore, could be
obtained by increasing the scale of production.
This is typically a justification given for bank
merger activity. Efficiency gains could be
obtained by reducing production levels if
decreasing returns to scale exist; that is, the
scale elasticity is greater than 1.0.
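The scale elasticity is the derivative of log cost with respect to log output. A minimal sketch using a simple quadratic-in-logs cost function of the kind used later in the note to Table 3; the parameter values are assumptions chosen so that the elasticity passes through 1.0 near $500 million, not estimates from any study:

```python
import math

# Assumed cost function: lnC = a + b*lnQ + 0.5*c*(lnQ)^2.
a, b, c = 2.0, 0.80, 0.01

def scale_elasticity(Q):
    """Scale elasticity: d lnC / d lnQ = b + c * lnQ."""
    return b + c * math.log(Q)

# Elasticity below 1.0 means increasing returns to scale (costs rise less
# than proportionally with output); above 1.0, decreasing returns.
for Q in (1e6, 1e8, 1e10):
    print(f"Q = ${Q:,.0f}: elasticity = {scale_elasticity(Q):.3f}")

# Economies are exhausted where the elasticity equals 1.0, at
# lnQ = (1 - b)/c = 20, i.e., roughly $485 million of output.
```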
Although much effort has been spent
evaluating scale economies, it is one of the
most disagreed upon topics in banking. For
example, a number of studies find cost advan­
tages from size to be fully exhausted at relative­
ly low levels of output. Even when potential
economies exist they appear to be relatively
small. Some of these studies are summarized in
Table 1 which presents the estimated scale
elasticity for the average bank in the sample,
the range of the estimates for all banks, and the



TABLE 1
Economies of scale estimates for small banks

                                     Range of            Relevant range for significant
                                     scale elasticity^a  scale (dis)economies^b

Benston, Hanweck                     0.89-1.24           Diseconomies above $25 million
and Humphrey (1982)                  0.97-1.16           Diseconomies above $25 million

Berger, Hanweck                      0.87-1.21           Diseconomies above $100 million
and Humphrey (1987)                  1.00-1.03           No significant (dis)economies

Cebenoyan (1988)                     0.88-1.39           Diseconomies above $50 million
                                     0.92-1.03           Economies above $100 million

Gilligan and                         0.98-1.10           Economies above $10 million and
Smirlock (1984)^c                                        diseconomies above $50 million

Gilligan, Smirlock                   0.93-1.27           Economies below $25 million and
and Marshall (1984)                                      diseconomies above $100 million
                                     0.94-1.17           Economies below $25 million and
                                                         diseconomies above $100 million

Kolari and Zardkoohi                 0.99-1.02           No significant (dis)economies
                                     0.88-0.93           Economies below $100 million

Lawrence and Shay                    0.91-0.99           Economies below $100 million

^aCalculated as (d lnC/d lnQ) for single output measures or Σ_i (d lnC/d lnY_i) over all i outputs. Benston, Hanweck and Humphrey (1982) calculated a scale elasticity augmented for output expansion via office expansion.
^bIn these studies the banks are grouped by deposit size for calculation of the scale elasticity measure. The figures presented are for the minimum bound on the group where statistically significant (dis)economies were realized.
^cGilligan and Smirlock did not use the FCA data, as did the other studies, but did evaluate institutions similar in size to those in the FCA sample.
^dDenotes statistically significant difference from 1.0.
Note: U and B represent unit and branch bank subsamples, respectively. Many of the studies provided results for a number of years and/or are based on alternative output measures. When multiple sets of findings were provided, the results reported here are for the most recent year, based on earning assets as the output measure, and use the intermediation approach (i.e., dollar value of funds transformed to assets).

level of output at which significant advantages
or disadvantages from the scale of production
occur. Basically, the results imply that scale
advantages are fully exhausted once an institu­
tion achieves a size of approximately $100-200
million, a relatively small bank in the United
States.1 Higher output levels result in either
constant or decreasing returns to scale.
The implications from these results are that
very small banks are inefficient because they
operate under increasing returns to scale, and
inefficiencies may exist for banks above ap­
proximately $100-200 million in deposits. The
extent of the inefficiency, however, would not
appear to be very large: scale elasticities
typically range from .95 to 1.05. These find­


ings would appear to run counter to the argu­
ments typically found in the popular banking
press which imply that merger activity, desires
to expand geographically, and product expan­
sion are all driven by the desire to reap cost
advantages; for example, see Moynihan (1991).
However, this may partially result from the
fact that, until very recently, most of the bank
cost studies excluded large institutions; the very
ones which are most interested in expanding.
Most of the studies presented in Table 1 uti­
lized the Federal Reserve’s Functional Cost
Analysis (FCA) survey data which typically
includes only institutions with less than one
billion dollars in assets. Although banks in this
size group constitute over 95 percent of all


banks in the United States, they constitute only about 30 percent of the nation's banking assets. It excludes the larger banks which are most active in merger activity (Rhoades 1985) and most vocal about expanded product and geographic expansion powers.
Table 2 provides a summary of results from recent studies which have analyzed larger financial institutions, typically in excess of one billion dollars. The evidence suggests that scale advantages exist well beyond the $100-200 million range. While typically significant in a statistical sense, the scale elasticity measure is close to 1.0. Again, the measures tend to range from .95 to 1.05. Therefore, the studies employing data for larger banks tend to argue against the finding that inefficiencies resulting from diseconomies of scale set in at relatively low levels of output. However, the most typical conclusion the authors draw from these bank cost studies is that potential gains from altering scale via internal growth or merger activity are relatively minor.2
It should be emphasized, however, that the scale elasticity measure is not a measure of inefficiency. This may partially explain some of the disagreement between past research studies, which claim potential savings from growth are not very great because scale elasticity measures are not very different from 1.0, and the popular banking press, which typically claims that significant cost savings could be gained by expanding the bank scale of operation. Relatively minor scale elasticity deviations from 1.0 can actually result in nontrivial inefficiency.2 To determine potential gains from scale advantages, the relevant comparison is the production costs of existing banks to that of the most efficiently sized bank. For example, assuming scale advantages are exhausted at a $5 billion bank, how does the production cost for ten existing $500 million banks compare to that resulting from the one large bank? The scale elasticity measure is required to estimate the cost difference; however, it by itself is not a measure of inefficiency.

TABLE 2
Results from large bank cost studies

                                     Range of               Size at which economies
                                     scale elasticities^a   of scale are exhausted^b

Berger and Humphrey (1990)           0.98-1.03^e            $0.3 billion^e
                                     0.92-1.06^f            $0.08 billion^f

Clark (1984)                                                through $500 million^h

Evanoff and Israilevich (1990)^i                            $5.5 billion

Hunter and Timme (1986)^i                                   $4.2 billion^c
                                                            $12.5 billion^d

Hunter, Timme and Yang (1990)                               $25.0 billion

Noulas, et al. (1990)                                       $6.0 billion

Shaffer (1984)                                              through $50 billion^h

Shaffer (1988)                                              through $140 billion^h

Shaffer and David                                           $37.0 billion

^aThe reported values are based on elasticity calculations for alternative asset size groups (when available). Statistical significance is not taken into account for figures reported in this column; that is, the calculated values may not be significantly different from 1.0 in a statistical sense.
^bThe values should be considered approximations. The authors frequently reported scale elasticity measures for a group of banks covering a relatively broad size range, for example, $10-25 billion. If the calculated value was insignificantly different from 1.0, then banks up to $25 billion were said to have constant returns to scale. Unlike the figures reported in the previous column, whether or not a calculated scale elasticity is significantly different from 1.0 in a statistical sense is all important for figures in this column.
^cFor one bank holding companies. The value is probably biased downward. This is the sample mean value at which the calculated scale elasticity was insignificantly different from 1.0.
^dFor multibank holding companies. See note c.
^eBranch bank results for the low cost banks.
^fUnit bank results for low cost banks.
^gFor a $10 billion bank.
^hNon-exhausted for the entire sample.
^iThe values are calculated at the sample mean.
Note: Many of the studies provided results for a number of years and/or are based on alternative output measures. When multiple sets of findings were provided, the results reported here are for the most recent year, are based on earning assets as the output measure, and use the intermediation approach.




TABLE 3
Estimated scale inefficiencies in banking

                                           Scale inefficiency
                                           (percent)

Aly, et al. (1990)
Berger and Humphrey (1990)
Clark (1984)
Elyasiani and Mehdian (1990a)
Evanoff and Israilevich (1990)
Gilligan, Smirlock and Marshall (1984)
Hunter, Timme, and Yang (1990)
Lawrence and Shay (1986)
Noulas, et al. (1990)
Shaffer (1984)
Shaffer (1988)

^aThe scale elasticity for the "efficient" firm was .9637 since scale advantages were not exhausted in the data sample. The calculated inefficiency would be larger if we extrapolated outside the study data sample.
^bDenotes branch banks.
^cTaken directly from the cited study.
^dThe inefficiency measure is biased downward because data limitations necessitated using an "inefficient" size bank which was not the most inefficient in the sample.
^eDenotes unit banks.
Note: The reported inefficiencies were derived assuming prices, exogenous variables, and product mix were constant across banks (for example, at the sample mean), and that the cost representation could be approximated by lnC = a + b(lnQ) + .5c(lnQ)^2 (where Q represents output). Evaluating only inefficiency resulting from production in the range of increasing returns to scale, the data were centered about the values of the inefficient bank. Hence, for this bank, the scale elasticity measure is simply the coefficient b. The cost of production for the scale efficient bank is lnC = a + b ln(F·Q) + .5c[ln(F·Q)]^2, where F is the size of the efficient firm relative to the inefficient one. The scale elasticity for the efficient bank is d lnC/d ln(F·Q) = b + c ln(F·Q) = 1.0. Scale inefficiency is the difference between cost values of the two banks relative to F, that is, [F ÷ (C_E/C_I)] − 1, where C_I and C_E denote costs of the inefficient and efficient bank, respectively. The same methodology could be used to calculate inefficiency resulting from production in the range of decreasing returns to scale. In the studies considered, scale measures are typically reported for various size ranges. Unless noted, the calculated inefficiency is based on the smallest bank in the size group in which statistically significant economies of scale existed, relative to the largest bank in the size category in which minimum efficient scale existed (that is, the scale measure was not significantly different from 1.0 in a statistical sense). Details are available from the authors. By holding product mix constant we restrict the cost savings to scale effects only, precluding any savings resulting from altering the mix. This implicitly assumes either that the mix is actually invariant over the banks considered or that the scale efficient bank analyzed is equal in size to the scale efficient bank observed in the data. Given the assumptions employed and the relatively broad size categories reported in the studies considered, the reported inefficiencies should be considered rough approximations.
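The calculation described in this note can be sketched directly; the parameter values below are illustrative assumptions, not values from any of the studies in the table:

```python
import math

# Output is normalized so the inefficient bank has lnQ = 0; its scale
# elasticity is then simply the coefficient b.
a, b, c = 1.0, 0.90, 0.02

# The scale-efficient bank is F times larger, where b + c*lnF = 1.0.
lnF = (1 - b) / c                 # = 5.0, so F = e^5, about 148 times larger
F = math.exp(lnF)

C_I = math.exp(a)                                # cost of the inefficient bank
C_E = math.exp(a + b * lnF + 0.5 * c * lnF**2)   # cost of the efficient bank

# Scale inefficiency: [F / (C_E/C_I)] - 1, the extra cost of producing the
# efficient bank's output with F copies of the inefficient bank.
ineff = F * C_I / C_E - 1
print(f"{ineff:.1%}")  # 28.4%
```

Even with a scale elasticity of only 0.90 at the small bank, the implied inefficiency is over 25 percent, which illustrates why small deviations of the elasticity from 1.0 can translate into nontrivial cost differences.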


Using the actual scale esti­
mates and sample data from a
number of bank cost studies,
measures of scale inefficiency
were calculated and are presented
in Table 3. The reported ineffi­
ciencies are for banks producing
in the range of increasing returns
to scale. They suggest that poten­
tial gains resulting from scale
inefficiency are not trivial. While
some of the studies suggest ineffi­
ciencies in the range of five
percent, estimates in the 10-20
percent range are not uncommon,
and they range up to nearly 40
percent. The major point is that
although their importance is
typically played down in the bank
cost literature, scale inefficiencies
appear to be significant enough to
warrant efforts by banks to
achieve an efficient scale.2
The evidence concerning
efficiency gains from economies
of scope is not conclusive. Studies
to date typically focus on the
outputs currently produced and
find very slight or no potential for
efficiency gains; for example, see
Benston, et al. (1982), Cebenoyan
(1990), Clark (1988), Hunter,
Timme, and Yang (1990), Law­
rence and Shay (1986), and Mester
(1987).2 However, the methodol­
ogies used to evaluate advantages
from joint production have typi­
cally been criticized on the
grounds that most functional
forms utilized for bank cost
analysis are ill suited for analyzing
economies of scope. Additionally,
the evaluation of potential effi­
ciency gains is commonly preclud­
ed as a result of regulation. Since
numerous products cannot be
provided by banks, there is no
available quantitative means to
evaluate the joint cost relationship
or potential efficiency gains.
Input efficiency

While much research has
been conducted evaluating output


efficiency, only recently has input efficiency
been considered. The evidence suggests that the
assumption of input efficiency, common in most
studies of bank production, is typically violated.
Table 4 presents summary findings for recent
studies evaluating input efficiency in banking.
While substantially different techniques were
used in the studies reviewed, the results are
surprisingly similar. Total input inefficiency is
commonly in the range of 20-30 percent, and is
as high as 50 percent in one of the studies. This
implies that significant cost savings could be
realized if bank management more efficiently
utilized productive inputs.
Breaking down the study findings into more
detail, allocative inefficiency is typically found to
be relatively minor and, with one exception,
dominated by technical inefficiency.2 Evanoff
and Israilevich (1990a, 1990b, 1991) found that
the allocative inefficiency that does exist results

from the overuse of physical capital relative to
other inputs. As mentioned earlier, this is
consistent with expectations since past bank
regulation did not allow price competition in
the market for deposits. As a result, it appears
that banks simply responded by competing
using alternative means such as service levels.
The introduction of numerous branch offices
resulted in brick-and-mortar competition
instead of price competition. While the typical­
ly small allocative inefficiency estimate cannot
be ignored as a potential source of future cost
savings in banking, it does suggest that the
frequent criticism of bank regulation based on
the burden it imposes on the bank production
process may be somewhat exaggerated. Appar­
ently the optimal mix of factor inputs is only
marginally affected by regulation.
The results presented in Table 4 suggest
that the major source of input inefficiency is

TABLE 4
Input inefficiency in banking

                                        Overall input       Pure technical
                                        (------------- percent -------------)

Berger and Humphrey (1990)^b
Berger and Humphrey (1990)^u
Elyasiani and Mehdian (1990b)^d
Evanoff and Israilevich (1990)^a
Evanoff, Israilevich, and Merris
Ferrier and Lovell (1990)
Aly, et al. (1990)^e
Elyasiani and Mehdian (1990a)
Ferrier and Lovell (1990)
Gold and Sherman (1985)^c

^aFor the 1972-87 period. Subperiods produced different results.
^bFor branch banks.
^cFor the most inefficient decision making unit.
^dFor 1980. Scale inefficiency was also calculated to be 38.9%.
^eScale inefficiency was also calculated to be 3.1%.
^uFor unit banks.
Note: The figures presented are the level of inefficiency relative to the firm using its inputs efficiently. The studies frequently reported inefficiency relative to the observed firm or efficiency as a percentage of input utilization (see Figure 1 for an illustration of input inefficiency measures). These measures were converted to the measure presented here. Gaps in the results are due to the fact that not all studies considered all components of inefficiency.



pure technical inefficiency. Simply put, firms
use too much input per unit of output. Com­
bined with the finding of relatively small
allocative inefficiency, this implies that bank
managers do a relatively good job of choosing
the proper input mix, but then simply under­
utilize all factor inputs.2 This inefficiency
obviously cannot be sustained over time if the
banks are subject to competitive forces. Com­
paring the findings summarized in Tables 3 and
4, it is apparent that the inefficiencies resulting
from the suboptimal use of inputs are somewhat
larger than those resulting from producing
suboptimal levels of output.2
Causes of inefficiency and implications for the future

There are a number of possible explana­
tions for the inefficiency in banking. Basically,
the expected causes should be the same as those
found in any industry. As discussed earlier,
economic theory suggests that allocative ineffi­
ciency is driven by market distortions from
factors such as regulation. Pure technical
inefficiency may be the result of weak market
forces (induced by market structure or regula­
tion) which allow bank management to become
remiss and to continue their inefficient behav­
ior. Scale and scope induced inefficiency may
be the result of either market or regulatory
forces which make the optimal level and mix of
outputs unachievable. Some analysts would
also argue that bank size should be a determi­
nant of efficiency. According to this argument,
larger banks may have more astute manage­
ment and/or be more cost conscious because of
greater pressure from owners concerning
bottom-line profits. Additionally, these banks
are typically located in the larger, more com­
petitive markets which may induce a more
efficient production process.
The evidence suggests that these forces are
indeed operative in determining efficiency
levels in banking. Analyzing data for large
banks over the 1972-87 period, Evanoff and
Israilevich (1990a, 1990b, 1991) found alloca­
tive inefficiency to be related to alternative
measures of regulatory stringency. It was also
found to be greater in regions characterized by
more restrictive state level regulation, and
significantly less after industry deregulation
occurred in the early 1980s. Allocative effi­
ciency has not been found to be related to bank
size (for example, Aly, et al. 1990). This,


however, should not be surprising since ineffi­
ciency may occur although the bank is operat­
ing efficiently in response to shadow market
prices (that is, those including market distortions).
The evidence also suggests that pure
technical inefficiency is induced by regulation,
and some evidence exists suggesting that it
results from elements of market structure.
Berger and Humphrey (1990) found that the
inefficiency was greater, on average, for banks
located in the more restrictive unit banking
states than those in states allowing branching.
Additional analysis of data used in Evanoff and
Israilevich (1990b) produced similar findings.28
Pure technical inefficiency has also been found
to be negatively associated with bank size
[Berger and Humphrey (1990), Aly, et al.
(1990), Elyasiani and Mehdian (1990a), Rangan, et al. (1988)]. To the extent that small
institutions are located in the smaller, least
competitive markets, the absence of market
pressures could be producing the higher levels
of inefficiency. Aly, et al. (1990) tested this
contention more directly by relating pure
technical inefficiency to bank location. Banks
located in large metropolitan areas were found
to be significantly more efficient than those in
smaller markets, suggesting market structure
may influence efficiency levels. The evidence
here, however, is also not conclusive. The cost
savings realized by urban banks may exist
because increased population density makes
less costly delivery systems possible. Such
savings may be interpreted as being driven by
greater market competition in urban markets
when they are actually a function of
demographics.
Scale and scope diseconomies are also
expected to be partially determined by regula­
tory forces. Unit banking restrictions force
banks to expand at one physical location in­
stead of allowing them to expand by opening
additional offices to serve customers. Disecon­
omies of scale may set in at the individual
office, causing higher costs for larger single-office
institutions. Expansion via new offices
has been shown to be more cost effective.
Review of the findings presented in Tables 1
and 2 indicates that diseconomies of scale are
typically larger in unit banking markets. Analyses
which combine data for both unit and
branch banks usually find that the larger unit
banks operate under conditions of


diseconomies of scale; see for example,
Evanoff, Israilevich, and Merris (1990). LeCompte and Smith (1990) also found that
inefficiency resulting from not producing the
proper mix of outputs, and therefore failing to
take advantage of economies from joint produc­
tion, was greater under conditions of more
stringent regulation.
What are the implications of these find­
ings? Given the important role regulation
apparently serves in determining efficiency
levels, the recent trend toward industry deregu­
lation should result in improved efficiency.
Reductions in entry barriers, less
regulatory-created market protection, and fewer
regulatory-induced market distortions should
significantly increase competitive pressures.
The beneficial aspects of increased competition
will be realized by weeding out the less
efficient firms. Obviously, in an environment
of deregulation and increased competition,
reducing pure technical inefficiency could be a
major determinant of firm survival.
Merger activity in the financial services
industry will probably increase in the future as
banks strive to compete in the deregulated
market. The deregulation will provide banks
with both the desire and the ability to expand
acquisition activity. Scale inefficient firms will
be absorbed to exploit cost advantages. Firms
whose management does an inadequate job of
utilizing factor inputs may soon find it difficult
to survive in the more competitive market.
They will be required to eliminate the ineffi­
ciency or become prime targets for acquiring
firms looking to “trim the fat” from new acqui­
sitions.29 Given that pure technical inefficiency
is so significant in banking, and given that it is
the one aspect of efficiency over which the firm
has direct control, one would expect significant
increases in bank productive efficiency in the
coming years.
Summary and conclusions

The purpose of this study has been to
discuss productive efficiency at a conceptual
level and to review the relevant literature for
the banking industry. We categorize efficiency
into input and output related measures. Output
inefficiency results from producing suboptimal


output levels or a suboptimal combination of
outputs. Input inefficiency results from produc­
tion using a suboptimal input mix (allocative
inefficiency) and not effectively utilizing the
inputs employed (pure technical inefficiency).
A review of existing bank cost studies suggests
that banks have substantial room to increase
productive efficiency and, as a result, to signifi­
cantly lower costs. Although the range of
findings in the studies surveyed is relatively
broad, it is not uncommon to find 10-20 percent
bank scale inefficiency generated by producing
at suboptimal output levels. Allocative ineffi­
ciency is typically found to be relatively minor;
usually less than five percent. Pure technical
inefficiency, however, is apparently quite
significant; in the range of 20-30 percent.
Combining these three effects results in sub­
stantial potential cost savings for banks.
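The rough size of that combined saving can be illustrated with a back-of-the-envelope calculation. The sketch below assumes midpoint values from the ranges above and assumes the three components compound multiplicatively; it is an illustration, not a calculation reported in the studies surveyed.

```python
# Illustrative combination of the survey's typical inefficiency estimates.
# Assumption: the three components compound multiplicatively.
scale = 0.15       # scale inefficiency, midpoint of the 10-20 percent range
allocative = 0.05  # allocative inefficiency, "usually less than five percent"
technical = 0.25   # pure technical inefficiency, midpoint of 20-30 percent

# Cost at the efficient frontier as a share of observed cost.
efficient_share = (1 - scale) * (1 - allocative) * (1 - technical)
potential_savings = 1 - efficient_share

print(f"potential cost savings: {potential_savings:.1%}")  # prints "potential cost savings: 39.4%"
```

Even with allocative inefficiency at the low end of its range, most of the combined potential saving comes from the pure technical component.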
What are the major causes of bank ineffi­
ciency? The evidence suggests that industry
regulation is a dominant source. Allocative
inefficiency, although relatively minor, is
directly induced by regulation. Inefficiencies
resulting from not producing the optimal
combination of outputs have also been shown
to be related to regulation. However, the major
source of inefficiency, pure technical ineffi­
ciency, is managerially induced. That is, the
absence of competitive forces, which is also
influenced by industry regulation, has allowed
banks to continue operating even though
management has not effectively utilized
the resources available to them.
Given that the industry is undergoing a
process of significant deregulation, the findings
from these studies have both positive and
negative implications for banking. As deregu­
lation continues, the increased competitive
pressures will force banks to operate more
efficiently. Those unable to do so by adjusting
to the new competitive environment will have
difficulty surviving. However, one of the major
sources of inefficiency, pure technical ineffi­
ciency, is directly under the control of the
banks themselves. Therefore, they will have
control of their own destiny. In light of the
recent significant number of branch closings
and cost saving campaigns aimed at reducing
payrolls, it would appear that efforts to improve
bank efficiency are already underway.


1Drucker (1991).
2However, the research process continues because of
differences in results from previous studies, methodologies,
assumptions, output definitions, etc. For a review of some
of these studies see Gilbert (1984), Clark (1988), or
Humphrey (1990).
3These definitions of productive inefficiency were
introduced by Farrell (1957). They are radial measures and
coincide with much of the discussion that follows. For
alternative measures of (in)efficiency see Fare and Lovell (1978).
4At this stage we are ignoring the potential for economies
of scale. Farrell assumed a linearly homogeneous
production process; that is, constant returns to scale. We
assume, as discussed below, that any gains from scale
advantages would result in a higher level isoquant for a
given efficient combination of inputs.
5For example, if production were allocatively efficient the
measure would, obviously, be OE'/OE = 1.0; that is, points
B, C, and E would coincide.
6Typically we assume profit maximization as the objective
under competitive conditions, that is, frictionless markets
and the absence of monopoly power and regulatory
distortions. In this model the production, cost, and profit
functions are essentially alternative means of evaluating the
same optimization process, that is, the production process.
The cost relationship is frequently evaluated instead of the
production function because less information is required.
When discussing frontier analysis, it is irrelevant whether
the cost or production side is considered. However,
empirically, the choice of a cost, production, or profit
representation may generate different results because the
researcher is required to use approximations for the true
functional forms of these representations.
7As a byproduct, the methodology also allows for the
estimation of scale and scope induced inefficiency. It does
not, however, allow for estimates of pure technical inefficiency.

8However, this is an empirical approach; therefore the true
cause of the distortion in factor prices could be generated
by a number of things, including data measurement errors.
9Using this methodology, the shadow price approximations
can be interpreted as first-order Taylor’s series expansions
of arbitrary shadow-price functions. It should be
emphasized that there is nothing special about the linear
relationship. Alternative specifications can and should be
considered; for example, see Evanoff and Israilevich (1991).
10Typically, factor share equations are derived from the cost
relationship via Shephard’s Lemma and the system of cost
and share equations are jointly estimated. The additional


share equations provide increased efficiency of the
estimates. The share equations are derived from the
shadow cost relationship.
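The share-equation derivation via Shephard's Lemma can be sketched as follows (standard translog notation, introduced here for illustration rather than taken from the article):

```latex
% Shephard's Lemma: the cost-minimizing demand for input i is the
% derivative of the cost function with respect to that input's price.
x_i = \frac{\partial C(w, Q)}{\partial w_i}
\qquad\Longrightarrow\qquad
S_i \equiv \frac{w_i x_i}{C} = \frac{\partial \ln C}{\partial \ln w_i}

% For a translog cost function,
% \ln C = \alpha_0 + \sum_i \alpha_i \ln w_i
%       + \tfrac{1}{2}\sum_i \sum_j \gamma_{ij} \ln w_i \ln w_j + \cdots,
% the implied share equations are linear in the parameters:
S_i = \alpha_i + \sum_j \gamma_{ij} \ln w_j + \cdots
```

Estimating the cost function jointly with these shares adds observations without adding parameters, which is the source of the gain in estimation efficiency noted above.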
11For a lucid description of these models, see Bauer (1990).
The foundation for this approach was developed by Aigner,
Lovell and Schmidt (1977). See also Fare, Grosskopf, and
Lovell (1985).
12One-sided distributions which have been used in
estimation include the half-normal, exponential, truncated
normal and the Gamma distribution. Again, see Bauer
(1990) and the sources cited.
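The composed-error idea behind these one-sided distributions can be simulated in a few lines. The sketch below uses the half-normal case; the parameter values are illustrative, not estimates from any of the studies cited.

```python
import math
import random

random.seed(42)

SIGMA_V = 0.10  # spread of the symmetric (two-sided) noise term
SIGMA_U = 0.20  # spread of the one-sided inefficiency term

# Composed error for a cost frontier: ln C_i = ln C*(Q_i) + v_i + u_i,
# where v_i is ordinary noise and u_i >= 0 pushes observed cost above the frontier.
n = 100_000
v = [random.gauss(0.0, SIGMA_V) for _ in range(n)]
u = [abs(random.gauss(0.0, SIGMA_U)) for _ in range(n)]  # half-normal draw

# For a half-normal term, E[u] = sigma_u * sqrt(2 / pi); the asymmetry of the
# composed error v + u is what lets estimation separate noise from inefficiency.
mean_u = sum(u) / n
print(round(mean_u, 3), round(SIGMA_U * math.sqrt(2 / math.pi), 3))
```

The sample mean of the one-sided term tracks its theoretical value, which is the moment condition these estimators exploit.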
13This is the linkage employed in Ferrier and Lovell (1990)
in their study of bank efficiency.
14The approach can also be combined with others to
incorporate additional information. For example, Evanoff
and Israilevich (1990b) augmented the thick frontier
approach by estimating shadow cost functions for high and
low cost banks. In this way, estimates of allocative
inefficiency could be obtained directly from the model
instead of using an auxiliary, somewhat arbitrary,
procedure to decompose the total inefficiency into its
component parts.
15Technical inefficiency can be calculated ignoring this
information. Measures of allocative efficiency require
information on factor prices.
16Evanoff and Israilevich (1991) found significant regional
differences in bank production techniques and levels of efficiency.
17Surprisingly, Ferrier and Lovell (1990) find exactly the
opposite in their analysis of banks.
18More formally, the scale elasticity measure is the
percentage change in cost relative to the percentage change
in output, or d ln C / d ln Q. One major issue in bank cost
studies is determining what constitutes output. Although
defining output is difficult in any service oriented industry,
there seems to be more controversy with respect to
banking. However, of the measures used to date, the
findings tend to be similar regardless of the measure
employed; see Humphrey (1990).
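The elasticity definition can be checked numerically on a hypothetical cost function. The constant-elasticity form below is chosen purely for illustration; it is not one of the estimated specifications in the studies surveyed.

```python
import math

def cost(q: float) -> float:
    """Hypothetical cost function with constant scale elasticity 0.9."""
    return q ** 0.9

# Scale elasticity d ln C / d ln Q, approximated by a log difference.
q1, q2 = 100.0, 110.0
elasticity = (math.log(cost(q2)) - math.log(cost(q1))) / (math.log(q2) - math.log(q1))

# An elasticity below one implies falling average cost (economies of scale).
def avg_cost(q: float) -> float:
    return cost(q) / q

print(round(elasticity, 6), avg_cost(q2) < avg_cost(q1))  # prints "0.9 True"
```

Note that this is a local measure: it need not capture the average-cost gap between firms of very different sizes, which is the distinction emphasized in the surrounding footnotes.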
19In 1989, over three-quarters of U.S. banks had less than
$100 million in assets; see FDIC (1989). However, banks
over 1000 times this size also existed.
20It is possible that scale estimates could be biased as a
result of misspecifying the cost relationship. For example,
the standard assumption of efficient utilization of factor
inputs, if incorrect, could produce misleading findings
concerning scale (dis)advantages. However, Berger and
Humphrey (1990), Evanoff and Israilevich (1990b) and
Evanoff, Israilevich, and Merris (1990) found scale
estimates were not substantially different when input
inefficiency was accounted for.


21The distinction between the scale elasticity and
inefficiency measures has been emphasized in Shaffer
(1988) and Shaffer and David (1991). Using data from a
previous study, Shaffer and David show that a scale
elasticity of .99 could result in a 25% cost savings if
production was shifted from small to large banks.
22Actually the scale inefficiency will be determined by the
difference in average cost between the efficient and
inefficient firm. The elasticity at the output level
corresponding to the inefficient firm gives us information
about cost changes at slightly larger or slightly smaller
output levels. Neither of these levels is relevant for
determining inefficiency since we never produce at these
levels. For efficiency analysis, production takes place
either at the efficient or the inefficient firm; therefore only
the corresponding two average cost values are relevant.
Whereas the elasticity measure gives percentage changes in
cost induced by incremental changes in output, in the
banking studies analyzed in Table 3 the difference in output
between the efficient and inefficient firm is not incremental.
23Caution should be taken in deriving policy implications
from findings concerning scale efficiency alone. There
may be alternative factors which partially offset these
potential gains. In fact, in viewing bank data, Humphrey
(1990) finds the average cost across all bank size groups to
be amazingly similar. That, combined with the potential
efficiency gains from scale economies discussed here,
suggests that there may be some factors counteracting these
potential efficiencies. However, with respect to scale
efficiency alone, there would appear to be significant
potential gains for banking.
24Some studies have, however, found significant advantag­
es resulting from joint production; for example, Gilligan,
Smirlock, and Marshall (1984), and Evanoff and Israilevich
(1990b). However, the finding of relatively small or no
scope economies is most typical. The methodologies
utilized to generate estimates of scope economies have
been critiqued in Pulley and Humphrey (1990). This is
obviously a rich area for future research.
25Although Aly et al. (1990) find evidence of greater
allocative inefficiency than most of the studies reviewed,
the major exception to the norm is the study by Ferrier and
Lovell (1990). Using a parametric approach the authors

found significant allocative inefficiency (over 17 percent).
However, as mentioned earlier, the reliability of these
techniques decreases significantly when non-homogeneous
decision making units are considered. The data for this
study included mutual savings banks, credit unions, savings
and loan associations, and “noncommercial” institutions.
Nearly a third of the sample was made up of noncommer­
cial banks. Given that the technology for these institutions
may differ from that of commercial banks, one would
expect these observations to have a substantial influence on
the error structure of the estimates. The authors themselves
even state that some of these observations do significantly
influence their results (Ferrier and Lovell, p. 243). Since
the distribution of the errors is the major determinant of the
efficiency measure, this may bias the results concerning
commercial bank efficiency. The study also found that the
allocative inefficiency resulted from an over-utilization
labor relative to the other factors. This is precisely the
opposite of what one, intuitively, would expect in banking
(see Evanoff and Israilevich 1990b). Finally, the measures
used for factor prices may bias the results toward finding
allocative inefficiency resulting from over-utilization of
labor (Berger and Humphrey 1990, p. 21).
26This finding has implications for the bank expense
preference literature, for example, Mester (1989).
Typically it is assumed that managers of the expensing
bank prefer one input to others—usually labor. The results
presented here suggest that a more restricted form of
expense preference, a preference for all the inputs to the
same degree, may best describe the situation in banking.
27However, this excludes any inefficiencies resulting from
scope disadvantages which cannot be empirically captured.
28The evidence on this, however, is not conclusive. Aly, et
al. (1990) found no significant efficiency difference across
unit and branch banks.
29This does not imply that there will no longer be small
banks. While most of the bank cost literature has assumed
homogeneous outputs, recent research suggests that banks
frequently find a market niche in an attempt to differentiate
themselves from others. Efficient banks which are able to
fill a needed market niche should continue to prosper in a
deregulated environment. See Amel and Rhoades (1988).

References

Afriat, Sydney, “Efficiency estimation of production functions,” International Economic Review, 13, 1972, pp. 568-598.

Aigner, Dennis, C.A. Knox Lovell, and Peter Schmidt, “Formulation and estimation of stochastic frontier production function models,” Journal of Econometrics, 6, 1977, pp. 21-37.

Aly, Hassan Y., Richard Grabowski, Carl Pasurka, and Nanda Rangan, “Technical, scale, and allocative efficiencies in U.S. banking: an empirical investigation,” The Review of Economics and Statistics, 72, 1990, pp. 211-218.

Amel, Dean F., and Stephen A. Rhoades, “Strategic groups in banking,” The Review of Economics and Statistics, 70, 1988, pp. 685-689.

Atkinson, Scott E., and Robert Halvorsen, “A test of relative and absolute price efficiency in regulated utilities,” The Review of Economics and Statistics, 62, 1980, pp. 185-196.

Atkinson, Scott E., and Robert Halvorsen, “Parametric efficiency tests, economies of scale, and input demand in U.S. electric power generation,” International Economic Review, 25, 1984, pp. 647-662.

Bauer, Paul W., “Recent developments in the econometric estimation of frontiers,” Journal of Econometrics, 46, 1990, pp. 39-56.

Benston, George, Gerald A. Hanweck, and David B. Humphrey, “Scale economies in banking,” Journal of Money, Credit, and Banking, 14, 1982, pp. 435-456.

Berger, Allen N., Gerald A. Hanweck, and David B. Humphrey, “Competitive viability in banking: Scale, scope, and product mix economies,” Journal of Monetary Economics, 16, 1987, pp. 501-520.

Berger, Allen N., and David B. Humphrey, “The dominance of inefficiencies over scale and product mix economies in banking,” forthcoming in Journal of Monetary Economics, 28, 1991. Also in Finance and Economics Discussion Series, 107, Board of Governors of the Federal Reserve System, 1990.

Cebenoyan, A. Sinan, “Multiproduct cost functions and scale economies in banking,” The Financial Review, 23, 1988, pp. 499-512.

Cebenoyan, A. Sinan, “Scope economies in banking: the hybrid Box-Cox function,” The Financial Review, 25, 1990, pp. 115-125.

Clark, Jeffrey A., “Estimation of economies of scale in banking using a generalized functional form,” Journal of Money, Credit, and Banking, 16, 1984, pp. 53-67.

Clark, Jeffrey A., “Economies of scale and scope at depository financial institutions: A review of the literature,” Economic Review, Federal Reserve Bank of Kansas City, 1988, pp. 16-33.

Drucker, Peter F., “Don’t change corporate culture—use it,” The Wall Street Journal, March 28, 1991, p. A14.

Elyasiani, Elyas, and Seyed M. Mehdian, “A nonparametric approach to measurement of efficiency and technological change: The case of large U.S. commercial banks,” Journal of Financial Services Research, 4, 1990a, pp. 157-168.

Elyasiani, Elyas, and Seyed M. Mehdian, “Efficiency in the commercial banking industry, a production frontier approach,” Applied Economics, 22, 1990b, pp. 539-551.

Evanoff, Douglas D., “Branch banking and service accessibility,” Journal of Money, Credit, and Banking, 1988, pp. 191-202.

Evanoff, Douglas D., and Philip R. Israilevich, “Cost economies and allocative efficiency in large U.S. commercial banks,” Proceedings of a Conference on Bank Structure and Competition, 26, 1990a, pp. 152-169.

Evanoff, Douglas D., and Philip R. Israilevich, “Deregulation, cost economies and allocative efficiency of large commercial banks,” Issues in Financial Regulation, Federal Reserve Bank of Chicago Working Paper 90-19, 1990b.

Evanoff, Douglas D., and Philip R. Israilevich, “Regional differences in bank efficiency and technology,” The Annals of Regional Science, 25, 1991, pp. 41-54.

Evanoff, Douglas D., Philip R. Israilevich, and Randall C. Merris, “Relative efficiency, technical change, and economies of scale for large commercial banks,” Journal of Regulatory Economics, 2, 1990, pp. 281-298.

Fare, R.J., Shawna Grosskopf, and C.A. Knox Lovell, The Measurement of Efficiency of Production, Boston: Kluwer Academic Publishers, 1985.

Fare, R.J., and C.A. Knox Lovell, “Measuring the technical efficiency of production,” Journal of Economic Theory, 19, 1978, pp. 150-162.

Farrell, M.J., “The measurement of productive efficiency,” Journal of the Royal Statistical Society, Series A, 120, 1957, pp. 253-281.

FDIC, Statistics on Banking, Washington: Federal Deposit Insurance Corporation, 1989.

Ferrier, Gary D., and C.A. Knox Lovell, “Measuring cost efficiency in banking: Econometric and linear programming evidence,” Journal of Econometrics, 46, 1990, pp. 229-245.

Gilbert, R. Alton, “Bank market structure and competition,” Journal of Money, Credit, and Banking, 16, 1984, pp. 617-644.

Gilligan, Thomas W., and Michael L. Smirlock, “An empirical study of joint production and scale economies in commercial banking,” Journal of Banking and Finance, 8, 1984, pp. 67-77.

Gilligan, Thomas W., Michael L. Smirlock, and William Marshall, “Scale and scope economies in the multi-product banking firm,” Journal of Monetary Economics, 13, 1984, pp. 393-405.

Humphrey, David B., “Why do estimates of bank scale economies differ?,” Economic Review, Federal Reserve Bank of Richmond, 1990, pp. 38-50.

Hunter, William C., and Stephen G. Timme, “Technical change, organizational form, and the structure of bank production,” Journal of Money, Credit, and Banking, 18, 1986, pp. 152-166.

Hunter, William C., Stephen G. Timme, and Won Keun Yang, “An examination of cost subadditivity and multiproduct production in large U.S. banks,” Journal of Money, Credit, and Banking, 22, 1990, pp. 504-525.

Kolari, James, and Asghar Zardkoohi, Bank Costs, Structure, and Performance, Lexington, MA: D.C. Heath Publishers.

Kopp, Raymond, and W. Erwin Diewert, “The decomposition of frontier cost function deviations into measures of technical and allocative efficiency,” Journal of Econometrics, 18, 1982, pp. 319-331.

Lau, L.J., and P.A. Yotopoulos, “A test for relative efficiency and application to Indian agriculture,” American Economic Review, 61, 1971, pp. 94-109.

Lawrence, Colin, and Robert Shay, “Technology and financial intermediation in a multiproduct banking firm: an econometric study of U.S. banks, 1979-82,” in Colin Lawrence and Robert Shay (eds.), Technological Innovation, Regulation, and the Monetary Economy, Cambridge: Ballinger, 1986, pp. 53-92.

LeCompte, Richard L. B., and Stephen D. Smith, “Changes in the cost of intermediation: The case of savings and loans,” Journal of Finance, 45, 1990, pp. 1337-1345.

Mester, Loretta J., “A multiproduct cost study of savings and loans,” Journal of Finance, 42, 1987, pp. 423-445.

Mester, Loretta J., “Testing for expense preference behavior: Mutual and stock savings and loans,” Rand Journal of Economics, 20, 1989.

Moynihan, Jonathan P., “Banking in the 90s—where will the profits come from?,” Proceedings of a Conference on Bank Structure and Competition, Federal Reserve Bank of Chicago, 27, 1991.

Noulas, Athanasios G., Subhash C. Ray, and Stephen M. Miller, “Returns to scale and input substitution for large U.S. banks,” Journal of Money, Credit, and Banking, 22, 1990, pp. 94-108.

Pulley, Lawrence B., and David B. Humphrey, “Correcting the instability of bank scope economies from the translog model: A composite function approach,” paper presented at the Financial Management Association meetings, Orlando, Florida, October 1990.

Rangan, Nanda, Richard Grabowski, Hassan Aly, and Carl Pasurka, “The technical efficiency of U.S. banks,” Economics Letters, 28, 1988.

Rhoades, Stephen A., “Mergers and acquisitions by commercial banks,” Staff Studies, 142, Board of Governors of the Federal Reserve System.

Shaffer, Sherrill, “Scale economies in multiproduct firms,” Bulletin of Economic Research, 1, 1984, pp. 51-58.

Shaffer, Sherrill, “A revenue-restricted cost study of 100 large banks,” Federal Reserve Bank of New York, unpublished research paper, 1988.

Shaffer, Sherrill, and Edmond David, “Economies of superscale in commercial banking,” Applied Economics, 23, 1991, pp. 283-293.

Sherman, H. David, and Franklin Gold, “Bank branch operating efficiency,” Journal of Banking and Finance, 9, 1985, pp. 297-315.

Zieschang, Kimberly D., “A note on the decomposition of cost efficiency into technical and allocative components,” Journal of Econometrics, 23, 1983, pp. 401-405.


