COMMODITY PRICES AS PREDICTORS
OF AGGREGATE PRICE CHANGE*
Roy H. Webb

Many analysts have advocated using commodity
prices as a guide for monetary policy.1 The basic
reasoning can be simply put: “Money creation is
intended to promote price stability and is best
guided by an index of prices set in real markets.”
[Wall Street Journal, 1988] The rationale for stabilizing commodity prices can also be expressed in
three propositions. First, inflation is a monetary
phenomenon that should be eliminated. Second,
commodity prices are determined in auction markets;
they will therefore change quickly in response to
monetary policy actions. Third, changes in commodity prices are good predictors of future aggregate price
change. If all three propositions are accepted, then
commodity prices might well be a useful guide for
monetary policy, possibly serving as an intermediate
target or at least as an important indicator variable.
This paper examines the third proposition: commodity prices are good predictors of aggregate price
change. Other economists have reported varying
results. Alan Garner [1988, p. 12], for example, found
“broad commodity price indexes are always useful
in predicting consumer price inflation.” Joseph Whitt
[1988] found that commodity price indexes had
substantial predictive value in the volatile post-1975
period, and Philip Klein [1985] found a commodity
price index to be a useful leading indicator. Aguais
et al. [1988, p. 14], however, found “there is no
evidence that [commodity price indexes] provide any
information [for predicting movements in the general
price index] beyond what is already contained in
wages and supply conditions.” Bennett McCallum
* An earlier version of this paper was presented to the Southern Economic Association, November 20, 1988. The author received helpful comments from T. Humphrey, F. Joutz, R. Keleher, Y. Mehra, and B. Portes. The views and opinions expressed in this article are solely those of the author and are not necessarily those of any other person employed by or associated with the Federal Reserve Bank of Richmond or the Federal Reserve System.
1 For example, Irving Fisher [1920] presented a detailed strategy for stabilizing an index of 75 commodity prices. More recent proposals that have attracted considerable attention have been made by Wayne Angell [1987], James Baker [1987], and Manuel Johnson [1988].

[1988] also found that two commodity indexes had
little predictive value. Most of the authors used
Granger causality tests to reach their conclusions.
This paper also examines the relation of commodity and aggregate prices by using Granger causality
tests. Those tests, however, are implemented
somewhat differently than in other studies in order
to avoid several potential pitfalls. In addition, this
study is broadened to include a multivariate forecasting procedure, to examine multistep forecasting,
and to investigate forecasting performance around
turning points. It therefore goes beyond related work
in examining the proposition that commodity price
indexes are useful predictors of aggregate price
measures.
Indexes Examined
Many indexes are used to measure aggregate and
commodity prices. The most useful measures for
analysis should have relatively long track records, so
that statistical results are not dominated by the
peculiarities that exist in short intervals. In addition,
the indexes should be well understood by economists
so that the results can be evaluated with respect to
the known strengths and weaknesses of particular
indexes.
The Consumer Price Index for all urban consumers
(CPI) is used below as the measure of the aggregate
price level. It is available monthly, is seasonally
adjusted by the Bureau of Labor Statistics, has been
calculated for 70 years, and has been subjected to
substantial professional examination and comment.2
One commodity price index that has attracted much
attention is the Journal of Commerce Materials
Index (JOCI), designed by Geoffrey H. Moore and
his associates at the Center for International Business
Cycle Research. They have constructed monthly
values as far back as 1948. It includes 18 industrial
commodities and was specifically designed to help
2 For further information, the U.S. Bureau of Labor Statistics publishes numerous references, including [1978].

predict changes in aggregate price measures.3 Another widely used index is the Spot Price Index (SPI)
published by Knight-Ridder’s Commodity Research
Bureau. It includes 10 foodstuffs and 13 raw industrial
commodities, and is also available monthly from
1948. Before 1981 it was compiled by the Bureau of Labor Statistics.4
Charts 1 and 2 show twelve-month changes in both
commodity price indexes and the CPI. Both indexes
have been much more volatile than the CPI
throughout the postwar period. Casual interpretation
of commodity price movements is therefore difficult
and potentially misleading. Commodity price volatility should also be kept in mind when interpreting the more formal statistical results below.

Chart 1: TWELVE-MONTH CHANGES IN PRICES
Chart 2: TWELVE-MONTH CHANGES IN PRICES
Note: Each series contains the percentage change in the monthly value of the price index from the monthly value twelve months earlier. The charts extend from January 1949 through October 1988.
3 For further information, see Journal of Commerce [1986].
4 John Rosine [1987] provides a useful discussion of the construction of commodity price indexes.

Testing for Granger Causality
To test for Granger causality, one can examine
whether lagged values of one series add statistically
significant predictive power to another series’ own
lagged values for one-step ahead forecasts. If so, the
first series is said to Granger-cause the second. Consider the equation

$$P_t = \alpha + \sum_{i=1}^{l} \beta_i P_{t-i} + \sum_{i=1}^{l} \gamma_i Q_{t-i} + \epsilon_t \qquad (1)$$

where P and Q are series of macroeconomic variables, $\epsilon_t$ is a white noise error term, and $l$ is an integer representing the lag length. If an F test finds the estimated $\gamma_i$'s to be statistically significant, then the series Q Granger-causes P.
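To make the mechanics concrete, here is a minimal sketch of the test in equation (1) using ordinary least squares. It is illustrative only; the function name and the use of numpy and statsmodels are this sketch's assumptions, not the paper's code.

```python
import numpy as np
import statsmodels.api as sm

def granger_f_test(p, q, lag):
    """F test of H0: all lagged-Q coefficients (the gammas) in equation (1) are zero."""
    p, q = np.asarray(p), np.asarray(q)
    T = len(p)
    y = p[lag:]
    # Columns of own lags P(t-1)..P(t-l) and lagged Q(t-1)..Q(t-l).
    own_lags = np.column_stack([p[lag - i:T - i] for i in range(1, lag + 1)])
    q_lags = np.column_stack([q[lag - i:T - i] for i in range(1, lag + 1)])

    restricted = sm.OLS(y, sm.add_constant(own_lags)).fit()
    unrestricted = sm.OLS(y, sm.add_constant(np.hstack([own_lags, q_lags]))).fit()

    # Standard F statistic comparing restricted and unrestricted sums of squares.
    f_stat = ((restricted.ssr - unrestricted.ssr) / lag) / \
             (unrestricted.ssr / unrestricted.df_resid)
    return f_stat
```

The statsmodels package also ships a packaged version of this bivariate procedure (grangercausalitytests) that reports the same F statistic along with related test variants.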
Several decisions are necessary in order to implement a Granger causality test using equation (1).
What lag lengths should be used? Should the series


be differenced? What diagnostic test should be used
to determine whether the residuals are serially correlated? Are the results sensitive to the starting and
ending dates? The answers to each question are important since each choice can affect the final result.
First, consider the choice of the lag length. Nelson and Schwert [1982] found that heavily parameterized forms of equation (1), that is, unnecessarily large values of the lag length $l$, can result in a serious loss of power in causality tests. To guard against overly profligate parameterization, a model selection statistic, the Schwarz Criterion, is used below to set the lag length. Choosing the lag length for which the Schwarz Criterion is minimized leads to a relatively parsimonious specification in most cases below.5
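As an illustration of this selection rule, a minimal sketch follows; it assumes numpy, and fit_fn is a hypothetical helper that fits equation (1) for a given lag and returns the residual variance, parameter count, and degrees of freedom (the SC formula itself is given in footnote 5).

```python
import numpy as np

def schwarz_criterion(resid_variance, n_params, dof):
    # SC = ln(sigma^2) + n_params * ln(T) / T, with T the degrees of freedom.
    return np.log(resid_variance) + n_params * np.log(dof) / dof

def best_lag(fit_fn, max_lag):
    """Return the lag length minimizing the Schwarz Criterion.

    fit_fn(lag) is a hypothetical callable returning the residual
    variance, parameter count, and degrees of freedom of the fit.
    """
    scores = {}
    for lag in range(1, max_lag + 1):
        sigma2, n_params, dof = fit_fn(lag)
        scores[lag] = schwarz_criterion(sigma2, n_params, dof)
    return min(scores, key=scores.get)
```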
5 Priestley [1981] discusses the relative merits of several model selection statistics. The Schwarz Criterion (SC) is given by

$$SC = \ln \hat{\sigma}^2 + \frac{q \ln T}{T}$$

where T is the number of degrees of freedom, q is the number of parameters estimated, and $\hat{\sigma}^2$ is the residual variance. It can be seen that although adding an additional coefficient to an equation can lower the first term of the SC by lowering the residual variance, the additional coefficient also raises the second term.
In practice, the SC usually reaches a well-defined global minimum with a fairly parsimonious parameterization. Yi and Judge [1988] compare SC with two popular alternatives, finding that both alternatives asymptotically overestimate the true size of a model with a positive probability, whereas the SC's asymptotic probability of overestimating the true size is zero.
The next choice, whether the series should be differenced, can be made by using tests designed to
examine series for unit roots. Yash Mehra [1988]
noted that the presence of a unit root in time series
can cause F statistics to have nonstandard distributions. In equation (1), therefore, if either series had
a unit root the typical F test might not be meaningful.
Unit Root Test To guard against that problem,
(logs of) the CPI, JOCI, and SPI series were first
tested for unit roots. The test, as proposed by Dickey

and Fuller [1979], involves estimating the coefficients in the following equation:

$$\Delta X_t = \alpha + \beta X_{t-1} + \gamma T + \sum_{i=1}^{l} \delta_i \Delta X_{t-i} + e_t \qquad (2)$$

where X is the series being tested for a unit root, $\Delta$ is the first difference operator, T is a time trend, and $e_t$ is a white noise error term. Under the null hypothesis that there is a unit root in the series X, the coefficient $\beta$ should be zero. The standard t statistic is used for testing whether $\beta$ is significantly different from zero; critical values, however, are not standard but are given by Fuller [1976].
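A present-day approximation of this test, assuming the statsmodels package: adfuller with regression="ct" fits a regression with constant and trend in the spirit of equation (2), and autolag="BIC" picks the lag length by the Schwarz Criterion, broadly mirroring the procedure used below. The function name is an invention of this sketch.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def unit_root_test(series):
    """ADF regression with constant and trend; lag length chosen by BIC.

    Returns the t statistic on beta and the 5 percent critical value;
    failing to reject (statistic above the critical value) suggests a unit root.
    """
    stat, pvalue, usedlag, nobs, crit, icbest = adfuller(
        np.log(series), regression="ct", autolag="BIC")
    return stat, crit["5%"]
```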
The results of unit root tests are given in Table I. In each case the lag length was set at the value that minimized the Schwarz Criterion. The first three equations can be used to test whether the series in log-level form is appropriate. In all cases the null hypothesis (the existence of a unit root) is not rejected by examining the t statistic for the estimated coefficient $\beta$.
It is possible that there are multiple unit roots, and consequently differences of the series are not stationary. The last three equations can be used to test for a unit root when the series is in first difference form. In each case the hypothesis is rejected; it therefore appears that there is no unit root in first differences of the series.
Since both commodity price indexes
are not seasonally adjusted, autocorrelations of the differenced series were examined. In neighborhoods of the 12th
and 24th lags the autocorrelations were
close to zero. The series therefore do
not appear to suffer from seasonal
autocorrelation.
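A check of this kind can be sketched in a few lines, assuming statsmodels; the function name is hypothetical.

```python
import numpy as np
from statsmodels.tsa.stattools import acf

def seasonal_acf_check(index_values):
    """Autocorrelations of the differenced log series near lags 12 and 24."""
    d_log = np.diff(np.log(index_values))
    rho = acf(d_log, nlags=26)
    return rho[11:14], rho[23:26]   # neighborhoods of the 12th and 24th lags
```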
Granger Causality Test Results The tests for unit roots support testing for Granger causality with each series in first differences (of logs). In equation (1), let the series P be the first difference of the CPI and the series Q be the first difference of either the JOCI or the SPI. Table II contains the results of those tests. For the SPI an F test
rejected the null hypothesis that the coefficients on the lagged values of commodity prices are zero. In other words, over the sample period the SPI Granger-caused the CPI. Since the F test is derived by assuming white noise residuals, a Lagrange multiplier test proposed by Godfrey [1978] was used to look for either autoregressive or moving average errors. The null hypothesis, the absence of AR or MA errors, was not rejected at conventional levels using a chi-squared test.
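statsmodels implements a test in this family as the Breusch-Godfrey LM test. A sketch, where fitted_ols stands for an estimated form of equation (1) (for instance, the unrestricted fit from the earlier sketch) and the lag order of twelve is an arbitrary illustrative choice:

```python
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

def residual_lm_test(fitted_ols, order=12):
    """LM test of H0: no AR or MA errors up to `order` in the residuals.

    fitted_ols is an OLS results object; the LM statistic is judged
    against a chi-squared distribution with `order` degrees of freedom.
    """
    lm_stat, lm_pvalue, _, _ = acorr_breusch_godfrey(fitted_ols, nlags=order)
    return lm_stat, lm_pvalue
```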
For the JOCI an F test also rejected the null
hypothesis that coefficients on lagged commodity
prices are zero. The Lagrange multiplier test did,
however, reject the null hypothesis and thus indicated
that the residuals were consistent with either an AR
or MA process. After experimentation, equation (1) was estimated assuming that residuals followed a second-order moving average process. Again an F test rejected the null hypothesis, thereby indicating that the JOCI Granger-caused the CPI. The Lagrange multiplier test did not indicate significant remaining residual correlation at the 5 percent level.

Table I: UNIT ROOT TEST STATISTICS
Time bounds: January 1954 to July 1988
Note: For the tests above the 5 percent and 1 percent critical values are -3.42 and -3.98, respectively.

Table II
A note of caution is in order: several results mentioned above are sensitive to the lag lengths employed. For example, with a lag length of twelve the unit root test statistic for the first difference of the CPI is -2.37; the null hypothesis in that instance is not rejected. And in the Granger causality test for the JOCI with a lag length of eight, the F statistic is 1.24, thereby failing to reject the hypothesis that coefficients on the lagged values of the JOCI are zero. Although the results of Nelson and Schwert strongly support the relatively parsimonious specifications reported in Tables I and II, the sensitivity of the results to the lag length does cause one to question the amount of information conveyed by these tests.
In addition, although both commodity price indexes add statistically significant explanatory power to lagged values of the CPI, the actual reduction in the standard error of estimated residuals (SEE) was quite small. Comparing the final regression equation reported in Table II with one omitting the lagged JOCI, the annualized SEE was increased from 2.72 to 2.82 by omitting lagged JOCI terms. Similarly, with nine lagged values, the SEE was increased from 2.61 including the SPI to 2.77 without it. In short, the incremental predictive value of both indexes was small over the sample period.
Perhaps the incremental predictive value has increased over time; the results over the whole sample would thus understate the current effect. In particular, it is possible that the incremental predictive value increased after the United States abandoned the gold standard.6 To test that possibility the sample was split at 1971Q3 and equation (1) was estimated for the early and late subperiods as well as the entire sample. An F test was then used to test the hypothesis that regression coefficients were equal in both subperiods. For the SPI the F value was 1.50; the null hypothesis was therefore not rejected at the 5 percent level. But for the JOCI an F value of 2.90 indicates that the null hypothesis was rejected at the 1 percent level.
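This split-sample comparison is a standard Chow test. A minimal sketch of the F statistic, assuming the residual sums of squares come from fitting equation (1) to the pooled sample and to each subperiod separately:

```python
def chow_f(ssr_pooled, ssr_early, ssr_late, n_early, n_late, k):
    """F test of H0: equation (1) coefficients are equal in both subperiods.

    ssr_* are residual sums of squares from the pooled and subsample fits,
    n_* are the subsample sizes, and k is the number of coefficients.
    """
    ssr_sub = ssr_early + ssr_late
    numerator = (ssr_pooled - ssr_sub) / k
    denominator = ssr_sub / (n_early + n_late - 2 * k)
    return numerator / denominator
```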
As anticipated, the JOCI did not Granger-cause
the CPI in the early period, but did Granger-cause
the CPI in the late period. The incremental predictive value of the JOCI remained small, however.
Omitting the JOCI from the late period equation only
increased the SEE to 3.13 from 2.95. Focusing only on the later observations, therefore, does not alter the conclusion that commodity prices add little to one-step ahead predictions of the CPI.
A Broader Framework
That commodity prices Granger-cause aggregate
price change is not sufficient to establish their total
value in prediction. Granger causality traditionally
6 The author is indebted to Robert Keleher for this suggestion.


measures one-step ahead prediction in a bivariate environment. For a policy indicator, multistep predictions would be much more valuable than one-month forecasts. Also, it may be that other macroeconomic variables add substantial predictive value; including those other variables could alter the incremental predictive value of commodity prices.
Model Description A vector autoregressive (VAR)
model provides a convenient framework for examining both properties. Small VAR models have been
found to provide forecasts of macroeconomic
variables that are often competitive with forecasts
from much larger models.7 Containing no exogenous
variables, VAR models can be used to produce
forecasts as many periods ahead as desired.
Three VAR models will be used in this section.
The first, VAR1, will include the CPI and JOCI plus
the 90-day Treasury bill rate (RTB), the capacity
utilization rate in manufacturing (CU), the foreign
exchange value of the dollar (EVD), and the
monetary base (MB).8 The CPI, JOCI, and MB are
logged and differenced to provide stationary series.
The second model, VAR2, substitutes the logged
and differenced SPI for the JOCI. The third model
omits any measure of commodity prices. Forecasts
from each model can then be compared to examine
any differences.
Just as overly parameterized equations can reduce the power of statistical tests, overly parameterized VAR models can reduce the accuracy of forecasts. Consider first the equation for the CPI from the unrestricted form of the VAR1 model:

$$\mathrm{CPI}_t = \alpha + \sum_{v=1}^{6} \sum_{i=1}^{l} \beta_{v,i}\, v_{t-i} + \epsilon_t \qquad (3)$$



where $\alpha$ is a constant term, $l$ is the common lag length, and $\beta_{v,i}$ represents the coefficient for variable $v$ at lag $i$. The model contains six equations, each with the same independent variables: with $l = 6$, for example, there are six lagged values for each of six variables plus a constant, resulting in 37 estimated coefficients per equation.

7 For examples using traditional measures of forecast accuracy, see Lupoletti and Webb [1986] or McNees [1986].
8 MB is from the Federal Reserve Bank of St. Louis. EVD is the Federal Reserve Board's nominal trade-weighted index from 1967, extrapolated before 1967 using movements in dollar exchange rates with the Canadian dollar, British pound, and German mark. RTB and CU are both published by the Federal Reserve Board.
Why were these particular variables chosen? MB, EVD, RTB, and CU are part of a larger quarterly VAR model used by the author to forecast GNP and its components on a regular basis. It was suspected that each would help predict the CPI. The only experimentation with the model's composition was the addition of the change in outstanding federal debt, which can be thought of as a rough measure of fiscal actions. Adding that variable to VAR3 did not improve forecasts of the CPI; model statistics are therefore not included.
In order to improve forecasting performance the number of estimated coefficients is reduced by using a simplified version of a strategy proposed in Webb [1985]. Instead of using a common lag length as in equation (3), lag lengths are set as in the equation below:

$$\mathrm{CPI}_t = \alpha + \sum_{v=1}^{6} \sum_{i=1}^{l_v} \beta_{v,i}\, v_{t-i} + \epsilon_t \qquad (4)$$

where $l_v$ is the lag length for variable $v$ in the CPI equation. The lag lengths are set in each equation to minimize the Schwarz Criterion, yielding a substantial reduction in the number of parameters estimated.9 VAR1 and VAR2 thus consist of six equations of the form of equation (4); lag lengths are presented in Table III below. VAR3 is VAR1 minus the equation for commodity prices and all lagged commodity price terms in other equations.
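For readers who want to reproduce the flavor of these models, the sketch below uses the statsmodels VAR estimator. Note one deliberate simplification: fit(ic="bic") chooses a single common lag length by the Schwarz criterion, approximating rather than replicating the per-variable lag lengths of equation (4). The DataFrame and its column names are this sketch's assumptions.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

def fit_var1(data: pd.DataFrame):
    """data: monthly DataFrame with columns CPI, JOCI, RTB, CU, EVD, MB (levels)."""
    model_data = data.copy()
    for col in ["CPI", "JOCI", "MB"]:            # log-difference these series
        model_data[col] = np.log(model_data[col]).diff()
    model_data = model_data.dropna()

    results = VAR(model_data).fit(maxlags=12, ic="bic")  # common lag by SC/BIC
    last_obs = model_data.values[-results.k_ar:]
    return results.forecast(last_obs, steps=12)          # 12 monthly forecasts
```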
Forecasting Results Each model was estimated
using data through June 1975; forecasts were computed for each month through June 1976. The
forecasts for July 1975 were compared with actual
data and the resulting one-step ahead errors were
recorded; forecasts for August were used for two-step
ahead errors; and similarly, forecast errors up to
twelve steps ahead were calculated. Then the process was updated one month, with the model
estimated through July 1975 and forecasts made
through July 1976. The process of estimation and
forecasting was repeated each month through May
1988. The resulting forecast errors were tabulated
and summary statistics are displayed in Table IV,
including the traditional mean absolute error statistic (expressed in percentage points, annualized). As expected, those errors increase over the forecast horizon.

Also included are Theil U statistics, which are equal to the ratio of the root mean squared error of the model forecasts to the root mean squared error from a no-change forecast. Values less than unity indicate that the model forecast outperformed a naive no-change forecast. One can meaningfully compare forecast errors for a stationary series with the no-change forecast; for a nonstationary series, however, it is trivial to achieve a low U value. As shown in Table IV, both models do better than simple extrapolation of current conditions for all variables. In some cases the relative accuracy increases with the forecast horizon. Most importantly, the forecast statistics indicate little difference between the accuracy of CPI forecasts from the three models. At each forecast horizon, including those not shown in the table, the difference in mean absolute error between VAR3 and each of the larger models is less than 0.10.10 The value of including a measure of commodity prices in this forecasting environment is therefore quite small.11

9 The exact strategy for selecting the lag lengths in an equation is as follows. (1) Iterate over a large number of possibilities and choose a pair of integers I and J that minimizes the Schwarz Criterion, where I is the lag length for the dependent variable and J is a common lag length for the independent variables. (2) If there is at least one independent variable for which all lagged values are not significantly different from zero at the 10 percent level, drop the least significant independent variable from the equation. (3) Repeat step (2) until all variables are significantly different from zero or the Schwarz Criterion increases.

10 It is of course possible that this result is due to some feature of the model used. In particular, some analysts prefer to use VAR models in level form, even if some series appear to have unit roots. To see whether this particular model might perform better in level form, lag lengths in VAR3 were reset with the CPI and monetary base in log levels. The forecasting experiment described in Table IV was then repeated. The accuracy of forecasts of the percentage change in the CPI deteriorated: one-month ahead forecasts had a mean absolute error of 2.23 (versus 2.15 in VAR3); six months ahead, 2.83 (versus 2.71); and twelve months ahead, 3.11 (versus 2.95).

11 Furlong [1988] found similar results. He first found that the JOCI added statistically significant explanatory power in a regression equation for the CPI. In his VAR model (substantially different from the models examined in this paper) the JOCI improved forecast accuracy by only a rather small increment. Finally, he found that the SPI was inferior to the JOCI in multiperiod forecasts.

Table III: LAG LENGTHS IN 2 VAR MODELS

Table IV: FORECAST RESULTS FROM 3 VAR MODELS

Note: Each model was estimated from January 1967 to June 1975, and forecasts generated for each month up to 12 months ahead. Each model was then reestimated through July 1975 and a new set of forecasts was produced. The procedure was repeated through May 1988. The resulting forecasts were compared with the actual data, and the resulting error statistics are displayed in this table.
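The recursive estimate-and-forecast scheme described in the text and in the note to Table IV can be sketched as follows; fit_and_forecast is a hypothetical callable standing in for whichever model (VAR1, VAR2, or VAR3) is being evaluated, and the routine tabulates the mean absolute error and Theil U statistics reported above.

```python
import numpy as np

def rolling_evaluation(series, first_end, fit_and_forecast, horizon=12):
    """Re-estimate each month and tabulate errors by forecast step.

    fit_and_forecast(train) is a hypothetical callable returning
    forecasts for steps 1..horizon from a model estimated on `train`.
    """
    errors = {h: [] for h in range(1, horizon + 1)}
    naive = {h: [] for h in range(1, horizon + 1)}
    for end in range(first_end, len(series) - horizon + 1):
        forecasts = fit_and_forecast(series[:end])
        for h in range(1, horizon + 1):
            actual = series[end + h - 1]
            errors[h].append(actual - forecasts[h - 1])
            naive[h].append(actual - series[end - 1])   # no-change forecast
    mae = {h: np.mean(np.abs(errors[h])) for h in errors}
    theil_u = {h: np.sqrt(np.mean(np.square(errors[h]))
                          / np.mean(np.square(naive[h]))) for h in errors}
    return mae, theil_u
```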


Accuracy Near Turning Points Traditional statistics such as those presented in Table IV may not
completely reveal the value of observing commodity prices. Proponents of commodity price indexes
often stress their value in predicting major changes
in the rate of inflation. With only a few major changes
of inflation trends in the sample period studied above,
however, it is possible that a substantial positive
effect at a few critical periods was obscured by the
noise from many other periods.
Analysts at the Center for International Business
Cycle Research have identified a set of turning points
for major changes in the rate of growth of aggregate
prices. The idea is similar to the traditional use of
peaks and troughs for separating expansions and
recessions in business cycle analysis. The resulting
set of inflationary turning points defines broad phases
of advancing and declining inflation rates. Unfortunately (for the analyst, that is) there are few turning points in the entire sample period. With VAR1
(which predicted the CPI more accurately than
VAR2) estimated through June 1975, the post-sample
forecasts can be evaluated over a period including
three turning points: troughs in June 1976 and April
1986, and a peak in March 1980.
Table V contains forecast results for the CPI from
the VAR1 and VAR3 models when forecasts were
made near inflationary turning points. While one-month ahead forecasts made near turning points were
less accurate than those made over the entire
sample, the results are mildly surprising at longer
horizons. In both models, six-month ahead forecasts

were roughly as accurate near turning points as at other times, and twelve-month ahead forecasts were actually more accurate near turning points.

Table V

FORECAST ACCURACY NEAR TURNING POINTS
Mean Absolute Errors

                        VAR1                        VAR3
Turning Point    1 step  6 step  12 step    1 step  6 step  12 step
June 1976         1.59    1.24    1.49       1.59    1.39    1.58
March 1980        2.45    3.92    3.87       2.48    4.02    4.41
April 1986        3.06    2.69    1.61       3.00    3.01    1.72
Average           2.37    2.61    2.32       2.36    2.71    2.57

Note: Forecast errors were collected for forecasts made in the month of a turning point, in the 6 previous months, and in the 6 following months, for a total of 13 forecasts around each turning point.
Comparing the VAR1 and VAR3 models, for one-month forecasts the model without the JOCI was very slightly more accurate. For six-month forecasts, the model containing the JOCI was slightly more accurate. But for twelve-month forecasts, the model containing the JOCI was more accurate by 0.25 percentage points. This is the largest gain from using the JOCI found in this article; it is still rather small.
Conclusion

This article examined the ability of the Journal of
Commerce Materials Index and the Commodity
Research Bureau Spot Price Index to improve
forecasts of inflation, which was measured by changes
in the Consumer Price Index. Although Granger
causality tests indicated statistically significant effects,
the magnitude of improvement was very small and
the test result for the JOCI was sensitive to the lag
length employed.
Each commodity price index was next included in a small VAR model designed to predict the CPI. Again, while adding the JOCI to the model improved forecasts of the CPI at each horizon, the magnitude of improvement was small. Adding the SPI to the model had mixed results, only improving forecasts by a small amount for twelve-month forecasts. Examining errors made by forecasts dated near inflationary turning points again revealed only a small improvement in forecast accuracy when including the JOCI.
Since only one aggregate price index and two commodity price indexes were examined, these results
are only suggestive. It would certainly be useful to
study other indexes, other time periods, and data
from other countries. With that important qualification in mind, it is difficult to see a major role for commodity prices in the conduct of monetary policy.
That commodity prices added a small amount of
predictive power suggests that a small improvement
in anti-inflation policy could be achieved by using
them as an indicator variable. None of the results
presented in this paper, however, suggest that slightly
more accurate inflation forecasts by themselves would
have allowed policymakers to avoid the sixfold increase in the CPI in the post-World War II period.


References
Aguais, Scott D., Robert A. DeAngelis, and David A. Wyss.
“Commodity Prices and Inflation.” Data Resources U.S.
Review (June 1988) pp. 11-18.
Angell, Wayne D. “A Commodity Price Guide to Monetary
Aggregate Targeting.” Address to the Lehrman Institute,
New York, December 10, 1987.
Baker, James A. Speech at Annual Meeting of the International
Monetary Fund, Washington, September 1987.
Dickey, D.A., and W.A. Fuller. “Distribution of the Estimators
for Autoregressive Time Series with a Unit Root.” Journal
of the American Statistical Association 74 (June 1979): 427-31.
Fisher, Irving. Stabilizing the Dollar. New York: Macmillan, 1920.
Fuller, Wayne. An Introduction to Statistical Time Series. New
York: Wiley, 1976.
Furlong, Frederick T. “Commodity Prices as a Guide for
Monetary Policy.” Federal Reserve Bank of San Francisco
Economic Review (Winter 1989). Forthcoming.
Garner, C. Alan. “Commodity Prices: Policy Target or Information Variable?” Research Working Paper, RWP 88-10.
Federal Reserve Bank of Kansas City, November 1988.
Godfrey, L.G. “Testing Against General Autoregressive and
Moving Average Error Models When the Regressors Include
Lagged Dependent Variables.” Econometrica 46 (November
1978): 1293-1302.
“Goodbye Gift.” Wall Street Journal, June 17, 1988.
“Guide to Inflationary Trends.” Journal of Commerce, 1986.
Johnson, Manuel H. “Current Perspectives on Monetary Policy.”
Speech at the Cato Institute, Washington, February 25,
1988.
Klein, Philip A. “Leading Indicators of Inflation in Market
Economies.” Working Paper FB-85-04. Columbia University, September 1985.

Lupoletti, William H., and Roy H. Webb. “Defining and Improving the Accuracy of Macroeconomic Forecasts: Contributions from a VAR Model.” Journal of Business 59 (April
1986, part 1): 263-85.
McCallum, Bennett T. “Targets, Indicators, and Instruments
of Monetary Policy.” Carnegie Mellon University, September 1988. Typescript.
McNees, Stephen K. “The Accuracy of Two Forecasting Techniques: Some Evidence and an Interpretation.” Federal
Reserve Bank of Boston New England Economic Review
(March/April 1986), pp. 20-31.
Mehra, Yash P. “Velocity and the Variability of Money Growth:
Evidence from Granger-Causality Tests Reevaluated.”
Working Paper 87-2. Federal Reserve Bank of Richmond,
August 1987.
Nelson, C.R., and G.W. Schwert. “Tests for Predictive Relationships between Time Series Variables: A Monte Carlo
Investigation.” Journal of the American Statistical Association
77 (March 1982): 11-18.
Priestley, M.B. Spectral Analysis and Time Series. California:
Academic Press, 1981, pp. 370-80.
Rosine, John. “Aggregative Measures of Price and Quantity
Change in Commodity Markets.” Working Paper 81. Board
of Governors of the Federal Reserve System, December
1987.
U.S. Bureau of Labor Statistics. The Consumer Price Index:
Concepts and Content Over the Years. Washington, May
1978.
Webb, Roy H. “Toward More Accurate Macroeconomic Forecasts from Vector Autoregressions.” Federal Reserve Bank
of Richmond Economic Review 71 (July/August 1985): 3-11.
Whitt, Joseph A. “Commodity Prices and Monetary Policy.”
Working Paper 88-8. Federal Reserve Bank of Atlanta,
December 1988.
Yi, Gang, and George Judge. “Statistical Model Selection
Criteria.” Economics Letters 28 (1988): 47-51.


SIC: SWITZERLAND’S NEW ELECTRONIC
INTERBANK PAYMENT SYSTEM*
Christian Vital and David L. Mengle
EDITOR'S PREFACE In the United States, bankers and the Federal Reserve System have attempted to control risk on large-dollar wire transfer networks by means of quantitative limits. Net debit caps, as the limits are called, restrict the extent to which an institution can incur daylight overdrafts on Fedwire and net debits on the CHIPS network. The Federal Reserve is now considering additional steps such as reducing caps and pricing daylight overdrafts.
In contrast, Switzerland took the bold step of prohibiting daylight overdrafts when it instituted its
new wire transfer system, Swiss Interbank Clearing (SIC), in mid-1987. The following article, which
details the Swiss experience and approach to daylight overdrafts, should be an important contribution
to payment system policy discussions in the United States.
Of course, certain institutional features of large-dollar wire transfer in the United States are different
from those in Switzerland. For example, the number of participating depository institutions is far larger
on Fedwire (almost 7,000) than on the Swiss system (156). In addition, Swiss banking is far more
concentrated than is banking in the United States. But even so, the Swiss experience does suggest a new
alternative that could be considered for the future of wholesale wire transfer in the United States.

Introduction
In Switzerland, as in other countries in which the
financial sector plays a prominent part, banks’ funds
transfer operations are characterized by large values
and a high rate of turnover. An average of over
250,000 payments per day totalling more than 100
billion Swiss francs (=$68.5 billion)1 are currently
processed through the interbank payment system.
The daily average turnover is over thirty times the
volume of banks’ deposits at the Swiss National Bank.
Until 1987 most funds transfers were carried out
through the Bank Clearing System developed by the
banks in the early 1950s.2 Payment orders were
sent by means of paper vouchers and magnetic tapes.
* The article is an adaptation of C. Vital, “Das elektronische
Interbank-Zahlungsverkehrssystem SIC: Konzept und vorläufige
Ergebnisse,” Wirtschaft und Recht, vol. 40, May 1988. It is
offered here by permission of the publisher. Dr. Vital is
Director of General Processing and Back Office Operations at
the Swiss National Bank in Zurich. Dr. Mengle is a Research
Officer with the Federal Reserve Bank of Richmond.
1 All conversions of Swiss francs to dollars assume an exchange rate of 1.46 Swiss francs to one dollar.
2 See Bank for International Settlements (1985) or Lehmann (1986) for a survey of the Swiss interbank payment system. At the end of 1987, 342 banks with a total of 2,894 branch offices participated in the Bank Clearing System. The remaining banks executed their payments through the giro system of the Swiss National Bank, through correspondent banks, or through the Postal Giro System.


The orders were forwarded to the receiving banks
through a central computer center operated by
Telekurs AG, a company jointly established by the
banks. In the computer center, individual orders were
added up to arrive at credit and debit totals for each
individual bank; they were then entered in the giro
(or reserve) accounts of the participant banks at the
Swiss National Bank. (Banks’ giro accounts are the
equivalent of reserve accounts in the United States.
Funds in giro accounts do not earn interest.)
The transmission and processing stages of the Bank
Clearing System could extend over several days. This
created uncertainty in planning and monitoring liquidity and thus involved the risk of misguided decisions. In view of today’s substantial volumes of funds,
such decisions could entail considerable costs.3 Furthermore, the system could not keep pace with
rising demands for bank payment services. Finally,
it limited the ability to integrate the banks’ in-house
information systems with the external funds transfer
system. Such integration was essential to streamlining the processing of payments.
The call for virtually lag-free information transmission and processing could only be met by resorting
to electronic communication and processing technology. And because it was a centrally organized
3 Fischer and Hurni (1988).


institutional framework, the Bank Clearing System
seemed well suited for the introduction of an electronic funds transfer system. First steps in this direction were undertaken in the 1970s. Owing to cost
factors and unsolved conceptual problems, however,
the efforts failed to achieve their end. In 1980 a study
group of large Swiss banks initiated a new project
under the name of “Swiss Interbank Clearing” (SIC).
The new system was developed between 1981 and
1986 by Telekurs AG in cooperation with the banks
and the Swiss National Bank. It began operation in
June 1987. The remarks below provide an overview
of the conceptual problems in interbank payment
systems, the solution designed for SIC, and the experience gained with the new system during its first
year of operation.
Interbank Payment Mechanisms
Gross settlement and net settlement systems Funds
transfer systems are susceptible to credit and fraud
risks as well as to operational risks. In interbank payment systems the magnitude of the value of funds
to be processed poses special credit risk problems.
In this context, it is useful to distinguish between
“gross settlement” systems and “net settlement”
systems.4
In gross settlement systems payment takes place
by means of an irrevocable and final transfer of
deposits from the sending bank’s account at the central bank to the receiving bank’s account. The payment act (the transfer of the payment medium) and
the settlement act (the transfer of central bank
money) are linked in these systems.
In the United States, Fedwire is an example of a
gross settlement system. Transfers of funds through
Fedwire are final, but executing a payment order does
not depend on the availability of the funds. Temporary overdrafts on accounts, also known as “daylight
overdrafts,” are on the order of $50 billion per day,
that is, about 10 percent of the average daily value
of funds processed through the system.
In net settlement systems the notification of payment received by the receiving bank represents a
claim on the sending bank. The claims are accumulated up to a specified time (for example, up
to the end of the day) and are subsequently settled
by means of a transfer of central bank money from
the net debtors to the net creditors. All payments
4 Not all wire transfer networks provide settlement of payments among banks. The Society for Worldwide Interbank Financial Telecommunications (SWIFT), for example, only transmits payment instructions. Actual payments take place by means of transfers of correspondent balances.

effected during the settlement period are made subject to the final settlement transfers. They are thus
also termed “provisional” payments.
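The netting arithmetic is simple enough to illustrate directly; the banks and amounts below are hypothetical.

```python
from collections import defaultdict

def net_positions(payments):
    """End-of-day net position per bank from a list of gross payments.

    payments: iterable of (sender, receiver, amount) tuples.
    """
    position = defaultdict(float)
    for sender, receiver, amount in payments:
        position[sender] -= amount    # accumulates as a net debit
        position[receiver] += amount  # accumulates as a net credit
    return dict(position)

# Hypothetical example: 180 of gross payments settle with one net transfer of 20.
print(net_positions([("A", "B", 100), ("B", "A", 80)]))  # {'A': -20.0, 'B': 20.0}
```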
In the United States, the Clearing House Interbank Payments System (CHIPS) is an example of
a net settlement system. Payments made through
CHIPS are subject to the condition that at the end
of the day participants’ net positions are settled
through accounts held at the Federal Reserve Bank
of New York. Should a participating bank not be in
a position to meet its net liabilities, CHIPS regulations provide for the reversal of all payments executed
in the course of that particular day affecting the
defaulting participant. If such a situation were to
occur, other participants might also become unable
to pay. To date, such an eventuality has never arisen.
If and when it does, the Federal Reserve System as
lender of last resort might feel compelled to come
to the aid of the defaulting participant by granting
it credits. Total daily net credits recorded in the
CHIPS system are of the same magnitude as the
daylight overdrafts in Fedwire.
The Swiss Bank Clearing System was also a net
settlement system. Payments made through this
system were settled several times a day via participants’ giro accounts at the Swiss National Bank.
The accounts could be overdrawn during the day at
no cost and to a practically unlimited extent. In contrast to the CHIPS system, the Swiss National Bank
explicitly guaranteed settlement up to the limit of the
collateral held by Bank Clearing participants with the
Swiss National Bank. But the collateral, which
served as the sole security against losses, was modest
compared with the volume of daily overdrafts which
averaged 20 to 30 billion Swiss francs ($13.7 to $20.5
billion).
Risk aspects Since payments in gross settlement
systems are final, a receiving bank may dispose of
the funds credited to its account without incurring
a risk. A sending bank incurs a credit risk when it
executes payments on behalf of a customer in excess of the customer’s credit balance. The central
bank runs a credit risk if it allows a sending bank
to overdraw its reserve account. As a rule, gross
settlement systems have permitted overdrafts that
are both free of charge and unlimited in quantitative
terms during the day (but not overnight). Measures
designed to avoid or limit overdrafts are a problem
insofar as they could severely disrupt payment flows
(given the large volumes of funds recorded in interbank payment transactions). Further, such measures
could impose a cost burden on system participants
and thereby induce them to switch to alternative
funds transfer networks.


In net settlement systems like CHIPS, all
payments are made subject to the condition that
settlement take place at a predetermined time, usually
before opening of the next business day. Despite this
reservation, a participant may allow his customers
to use incoming funds prior to settlement; the receiving bank thus assumes a credit risk vis-à-vis the bank
ordering the payment. If a participating bank is not
in a position to meet its net liabilities at the end of
a day, it may affect the ability to pay of other participants, their customers, and ultimately the entire
economy. The risk of such a chain reaction is known
as systemic risk.
In gross settlement systems like Fedwire, finality
of payment is guaranteed in formal terms by the relevant regulations and in actual practice by the central
bank’s money-creating powers. No systemic risk is
inherent in such systems because participating banks
do not enter into credit relationships with one
another. Any credit relations arising in gross settlement systems in connection with the processing of
payments are overdrafts on reserve accounts; the risks
involved have to be borne by the central bank and
do not affect the other participants.
Elimination of systemic risk is a decided advantage that gross settlement networks have over net
settlement networks. But the practice usually followed in traditional gross settlement systems of allowing overdrafts on accounts without penalty restricts
the flexibility of the central bank, as the extent of
such overdrafts can only be monitored and controlled imperfectly. Moreover, gross settlement
systems lack the incentives inherent in net settlement systems for a participant to take into account
the solvency of other participants and to reduce credit
risks by means of credit limits. It must therefore be
expected that the total amount of overdrafts in a gross
settlement system is greater than the total of net
credits in a net settlement system under otherwise
identical circumstances.
Regulatory measures Balance sheets drawn up according to conventional methods show the level of
assets and liabilities at the end of the day. They do
not show credit risks arising in the interbank payment system through daylight overdrafts and net
debits because they are only incurred during the day
and disappear by the end of the day. Moreover,
owing to a lack of suitable data such risks can only
be vaguely assessed in traditional funds transfer
systems. Accordingly, in most countries supervisory
authorities have so far paid little attention to such
risks. One exception is the United States, where the
question has been the subject of extensive studies
for a number of years.5
In recent years the credit exposures observable in
the large-dollar networks increased to such an extent that they were considered a threat to the stability
of financial markets.6 In 1986 the Federal Reserve
System therefore issued a policy statement requiring Fedwire and CHIPS participants to use a system
of net debit caps to restrict any further expansion
(in quantitative terms) of the credit relationships
resulting from payment processes.7 Moreover,
endeavors are being made to establish and ensure
the finality of CHIPS payments through rules that
require participants to somehow guarantee
settlement.
Main Features of the Swiss Interbank
Clearing System8
Demands on the system In general, the introduction
of electronic systems for interbank payment transactions has three goals:

1) creating optimum conditions for the planning
and monitoring of liquidity by providing realtime information transmission and processing,
2) expediting and improving the quality of payment transactions, and
3) rationalizing the processing and settlement of
payments by means of large-scale automation.
The SIC system had a fourth goal: creating a gross settlement system (a funds transfer system in which each payment is made irrevocably and finally through participants' accounts at the Swiss National Bank) that would guarantee smooth processing of the payment flow even if no overdrafts were allowed on reserve accounts. This would make it possible to avoid the credit risks connected with overdrafts on gross settlement systems or provisional payments on net settlement systems. The solution arrived at was simple: Do not release a payment that will cause an overdraft until covering funds have arrived.
Account overdrafts can be the result of insufficient
reserve account balances in relation to the participants’ volume of payments or a lack of synchronization of incoming and outgoing payments.
5 Stevens (1984), Mengle (1985), Smoot (1985), Dudley (1986), Humphrey (1986, 1987), Mengle et al. (1987), Corrigan (1987), and Belton et al. (1987).
6 Corrigan (1987).
7 Belton et al. (1987).
8 Buomberger (1987), Granziol (1986), Lehmann (1984, 1986), Meyer (1985), Müller (1986), SIC (1986), Telekurs (1987).


Given the current daily volume of payments to the
tune of over 100 billion Swiss francs ($68.5 billion),
prohibitively high costs would be imposed on participants if they were required to increase their noninterest-bearing reserve account balances or to coordinate the timing of incoming and outgoing payments
so as to prevent any overdrafts from occurring. The
experience of the United States seems to indicate
quite clearly that the problem of overdrafts cannot
be properly solved on the basis of caps or payment
coordination by participants alone.9 A less costly
solution might result if the funds transfer system itself
were to help solve the synchronization problem. The
SIC attempts to relieve participants as far as possible from the synchronization task by automatically
guaranteeing an optimum synchronization of incoming and outgoing payments.
In order to take due account of increasingly
sophisticated customer requirements and to lower the
cost of each individual payment transaction, it was
further planned to send not only large-value payments
through SIC but also to provide for the processing
of a substantial proportion of bulk payments. Because
under these conditions a total of more than 400,000
payment transactions might have to be reckoned with
on peak days, it was specified that SIC should have
a settlement capacity of 90,000 payments per hour.
Components The requirement that each payment
must be settled finally and irrevocably on SIC implies that participants’ clearing accounts must be the
reserve accounts managed by the Swiss National
Bank. But actual operation of SIC by the Swiss National Bank would have meant a fundamental change
in the allocation of responsibilities among the banks,
Telekurs AG, and the Swiss National Bank from the
pattern existing in the Bank Clearing System. It
would have been impossible for the Swiss National
Bank to implement such a major project within a
reasonable time because it lacked the necessary
technical capabilities and experience. The major
banks and Telekurs AG, however, had gained
ample experience in the course of their own research
and development work. For this reason, it was
decided that SIC would operate on the computer
systems of Telekurs AG. The objective of administering participants’ reserve accounts held with the
Swiss National Bank with the aid of this system was

achieved by means of an agreement on the allocation of functions: Telekurs AG would operate the
SIC computer center on behalf of the Swiss National
Bank, while the Bank would manage the accounts.10
The chief components of SIC are shown in
Figure 1.11 At the center is the computer system
in which participants’ “SIC accounts” are administered. The computer systems of the Swiss National Bank and of the participants are linked to the
SIC computer either directly or by communication
computers.12 SIC also has a magnetic tape interface
with the postal checking system permitting transfers
from postal checking accounts to the reserve accounts
and vice versa. Moreover, magnetic tape interfaces
to service applications provide for the processing of
customer-related payment transactions (such as check
clearing and cash dispensers) and for securities clearing with traditional net settlement methods.
In accordance with contractual agreements, participating banks’ SIC accounts take the form of
reserve accounts at the Swiss National Bank. In
addition, every participant has a traditional reserve
account which is administered on the computer
system of the Swiss National Bank and bears the
designation “master account.” Legally, both accounts
form a single unit and carry the same rights and
obligations, though physically they are managed
separately.
The SIC account is used for processing SIC transactions, while the master account is used for all other
transactions (such as cash withdrawals). At the beginning of a clearing day the Swiss National Bank
transfers balances from the master account to the SIC
account. At the end of the day the total debits and
credits on the SIC account are transferred to the
master account so the master account again shows
the full reserve balance of a participant. The participating bank decides how the balance is to be
divided up between the two accounts. In so doing,
it must bear in mind that payments are made from
the two accounts only if there are sufficient funds
in the accounts. Transfers from the master account
to the SIC account and vice versa are possible at any
time during the day.
Processing of payments SIC is a credit transfer
system. That is, it does not in principle allow debit
transactions. Payment transactions entered by the
Swiss National Bank on the instructions of a participant are an exception, but take place only in unusual

9 “Since there probably are limits as to how far efforts to reduce daylight overdrafts can go, the current daylight overdraft control program would have to be augmented by some combination of clearing balance requirements for major users of Fedwire and explicit charges for daylight credit . . .” Corrigan (1987), p. 31. See also Belton et al. (1987).
10 See Hess (1988) on the contractual basis.
11 See Telekurs (n.d.) on the hardware concept.
12 See Birchler (1987) on the communication concept.


circumstances such as computer breakdowns. In
addition, payments for the special services shown in
Figure 1 are settled by debit transactions.
The planned volume of transactions makes high
demands on the processing capacities of the participating banks’ systems and the SIC computer
system. SIC therefore provides a 24-hour service on
bank working days. Payment orders may be entered
around the clock, either through the network or on
magnetic tape, for settlement on the day of input or
on one of the following ten bank working days. Payment orders not due for settlement on the day of
input are stored in the “pre-value date file” and are
automatically executed on the due date. Settled
payments are delivered to the recipient through the
network, on magnetic tape, or on paper. The
magnetic tape and paper interfaces are reserved
primarily for backup purposes.
The processing of payments to be settled on the
day of input is shown in Figure 2. The payment
message entered by Bank A is first “validated” by
SIC. That is, the system checks whether the message
complies with the formal requirements listed in the
SIC standards, whether it has not already been
input (double entry check), and whether it is compatible with the master data stored for the bank. If
the validation result is positive the sending bank
receives an “OK” message and the payment message
continues to be processed. Otherwise the sending
bank receives an “NOK” message (not OK) and the
payment message must be entered again. Validated
payment messages are then passed on to the SIC
settlement mechanism. This is the central component that automatically ensures synchronization of
incoming and outgoing payments.
A payment order is settled, that is, the account
of the sending bank ordering the payment (Bank A)
is debited and the account of the receiving bank
(Bank B) is credited if there are sufficient funds
(“cover”) in the sending bank’s account to be debited.
If desired, the sending bank is advised of the result
of the check by means of an “EX” or a “NEX”
(executed or not executed) message. Settled
payments are delivered to the receiving bank, which
in turn has to acknowledge receipt to the SIC system.
If sufficient cover is not available the payment order
is transferred to a “waiting queue” and kept pending
until sufficient funds have accumulated in the clearing account as a result of incoming payments. Once
sufficient funds are available the settlement process
is initiated automatically. The sequence of settlement is determined by the “first-in-first-out” (FIFO)

principle, that is, by order of input.13 No daylight
overdrafts can occur.
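A stylized sketch of this mechanism appears below. It is illustrative only: validation, messaging, and the master-account structure are omitted, and the class and method names are invented. Payments settle finally when cover exists, otherwise they wait in a first-in-first-out queue that is retried as incoming funds arrive; pending payments are cancelled at end-of-day processing, as the next paragraph describes.

```python
from collections import deque

class SicAccounts:
    """Stylized gross settlement with a FIFO waiting queue and no overdrafts."""

    def __init__(self, balances):
        self.balances = dict(balances)   # SIC account balance per bank
        self.queue = deque()             # pending payments, in order of input

    def submit(self, sender, receiver, amount):
        self.queue.append((sender, receiver, amount))
        self._retry()

    def _retry(self):
        # Settle from the front of the queue; stop at the first payment
        # that still lacks cover (first-in-first-out discipline).
        while self.queue:
            sender, receiver, amount = self.queue[0]
            if self.balances[sender] < amount:
                break                        # insufficient cover: keep pending
            self.queue.popleft()
            self.balances[sender] -= amount  # debit sender: settlement is final
            self.balances[receiver] += amount

    def end_of_day(self):
        # Pending payments are cancelled and must be re-entered the next day.
        cancelled = list(self.queue)
        self.queue.clear()
        return cancelled
```

In this sketch each settled payment credits the receiver inside the retry loop, so a single incoming payment can release a chain of queued payments, mirroring the automatic synchronization described above.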
It is possible that some payments cannot be
settled by the end of the day owing to lack of cover.
In such an event the payments involved are cancelled during end-of-day processing and must be
entered again by the sending bank on the following
day.
The settlement of a payment is final and irrevocable. The receiving bank can thus dispose of the incoming amounts without incurring any risk. But
unlike settled payments, payment transactions stored
in the waiting queue or in the pre-value-date file may
be cancelled by the sending bank at any time. The
purpose of allowing cancellation is to discourage
receiving banks from releasing pending payments
(similar to provisional payments on CHIPS) prior to
settlement.14 That is, receiving banks are less likely
to allow customers access to provisional funds if there
is a possibility the payment could be cancelled before
settlement.
Inquiries A bank participating in SIC can monitor
any settled incoming and outgoing payments or
payments stored in the waiting queue and pre-value-date file that concern it. Similarly, it can monitor the
actual balance in its SIC account and the balance including any payments not yet settled for all valid value
dates. All information entered in the system is thus
immediately available to the participant concerned.
The Swiss National Bank has access to the same
information, but for all SIC accounts. For individual
payment messages, access is restricted to settlement-related data (sending and receiving bank, amount,
date).
Daily schedule A SIC day begins at around 6 p.m.
and ends at approximately 4:15 p.m. of the following bank working day. Between 6 p.m. of the first
working day and 3 p.m. of the following day the
entering of payment messages is not restricted.
At 3 p.m. “Cutoff One” takes place. Any payments
entered after Cutoff One for same-day settlement
automatically have their value date changed to the
next day. The sole exceptions are “cover payments,”
which may be entered until “Cutoff Two” (4 p.m.)
for same-day settlement. The intervening hour
between Cutoff One and Cutoff Two is intended to
13 The settlement mechanism described applies to all SIC payment transactions including payments between two branches of the same bank.
14 Incentives of this kind are also reduced to a minimum by the rule that payments are not delivered to the recipient immediately after being entered but are withheld until settlement has taken place.


Figure 2: Processing of Payments for Same-Day Settlement

INPUT OF PAYMENT TRANSACTION
VALIDATION: standards, double entry check, master data
CHECK BALANCE OF A: sufficient funds in Bank A's reserve account?
WAITING QUEUE: inquiry by A and B; cancellation by A; retry settlement after A has received funds
SIC ACCOUNTS: debit A, credit B; inquiry of balance; inquiry of payments
DELIVER PAYMENT TO B; CONFIRM RECEIPT

permit participants whose payments have not been
carried out prior to Cutoff One owing to lack of cover
to procure the funds necessary for settlement. After
Cutoff Two only cover payments entered by the
Swiss National Bank are accepted for same-day
settlement until end-of-day processing begins. This
is a backup measure in case a participating bank is
not able to enter cover payments itself because of
technical difficulties.
At around 4:15 p.m. end-of-day processing begins.
All pending transactions are cancelled and the total
credits and debits on each SIC settlement account
are transferred to the master accounts. A new SIC
day begins at approximately 6 p.m.; the settlement
process for the new day starts with the transfer of
reserve account balances from the master accounts
to the SIC settlement accounts at approximately
7:30 p.m.
It cannot be ruled out that a participant might fail
to enter all its transactions for same-day settlement
prior to Cutoff One because of, say, technical difficulties. Nor can it be ruled out that payments may
remain in the waiting queue due to lack of cover
until end-of-day processing begins. In either case
considerable costs may arise, both for the participant
concerned and other participants, in the form of interest on delayed payments. If the amounts involved are substantial and if there is any possibility of resolving the problems within a reasonable time, a postponement of cutoff times and of end-of-day processing will be considered.
Security and reliability measures15 In addition to
measures for limiting credit risks, the architecture
of an interbank payment system includes measures
to protect against fraud and operational risks. In particular, operational difficulties can set off chain reactions that may jeopardize payment processing and
therefore the timely fulfillment of obligations running into billions of Swiss francs. Understandably,
then, an interbank payment system must provide a
high degree of security and reliability.
There are two types of security measures to
protect against infiltration, falsification, and tapping
of messages by unauthorized third parties. First,
authentication protects message transmission between participants and the SIC computer by means
of a mathematical procedure that verifies the authenticity and integrity of a transaction. Second, encryption is available to prevent messages from being
tapped. Encryption is not compulsory, but all participants are advised to use it.
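The article does not specify the mathematical procedure used for authentication. A keyed message authentication code is one standard construction that provides exactly the authenticity and integrity check described; the sketch below uses HMAC-SHA-256 purely as a modern stand-in, with an invented key and message format.

```python
# Illustrative only: a keyed MAC detects any alteration of a message in
# transit. The actual SIC procedure is not described in the text.
import hashlib
import hmac

SHARED_KEY = b"key-agreed-between-participant-and-SIC"  # hypothetical key

def tag(message: bytes) -> bytes:
    """Compute the authentication tag for a message."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, received_tag: bytes) -> bool:
    """Check authenticity and integrity: a tampered message fails."""
    return hmac.compare_digest(tag(message), received_tag)

msg = b"PAY;from=BANKA;to=BANKB;amount=1000000;value=1988-11-30"
t = tag(msg)
assert verify(msg, t)
assert not verify(msg.replace(b"1000000", b"9000000"), t)
```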
15 See also Walder (1987).

With regard to operational reliability, there are
backup facilities in the SIC computer center and in
a remote backup center to serve as standbys in the
event of failures of the SIC computer system or the
central network equipment. But SIC encompasses
not only the central SIC system, but more than 150
participant computer systems as well. While the
reliability of data processing and communication
facilities has reached a high standard in recent years,
a system with such a large number of complex components cannot be expected to operate without any
failures at all. The robustness of the overall system
thus depends largely on the availability of suitable
backup facilities in the event of breakdowns.
If time were needed to recover from failures of a
participant’s computer system or of the central
system, cutoff could be postponed. In addition, any
participant who is unable to communicate with SIC
can resort to an exchange of data by means of
magnetic tapes. In the event of serious disruptions
provision is made for the Swiss National Bank to
input large-value payments or totals of payments
into the SIC system or enter them through the master
accounts.
Introduction of SIC16
SIC was developed between 1981 and 1986 and
was subjected to extensive tests from September
1986 to May 1987. The introduction of such a
system could be costly and could involve high risks,
since it would not be possible to test such a complex facility for every detail under all conceivable circumstances. Moreover, participating banks would
have to install their systems and have them functioning on schedule. Finally, the banks would have to
reorganize operational procedures that had become
firmly established over the years.
In order to limit the risks involved in the introduction of the system and also because it was hardly to
be expected that all participants would be able to
complete all the preparations and conversions by a
certain date, it was decided to introduce the new
system step by step within the space of a year. The
introduction was to be gradual in regard to both
number of links and volume of transactions. But this
led to the problem of payments accumulating on the
accounts of participants not yet linked to the system.
Potentially, this could cause settlement to come to
a virtual standstill. The problem was solved by requiring that all “large” payments (exceeding one
million Swiss francs) be processed through SIC as
soon as it began operation. The Swiss National Bank
assumed responsibility for entering such payments
on behalf of any institutions not yet linked to the
system.

16 For a first progress report see Vital (1987).
SIC began operation on 10 June 1987. The functional viability of the overall system was established during the following months of operation. Taking into account the system's complexity and the fact that its development broke new ground, the introduction may be regarded as smooth and successful. As was to be expected, a few technical
difficulties did occur both in the central system and
with a number of participants. Each day, however,
the settlement books were properly closed. No conceptual shortcomings were revealed in the course of
the practical operations of the SIC system. The
technical problems that did arise showed that the
backup plans provided the necessary immunity from
operational disruptions.
It was further revealed in the first few months
that the SIC settlement mechanism worked satisfactorily. Transaction volume fluctuated between 60 and
140 billion Swiss francs ($41.1 to $95.9 billion) every
day during that time. Even so, reserve account overdrafts, which had amounted to between 20 and 30
billion Swiss francs daily in the old Bank Clearing
System, were permanently eliminated at one stroke
when SIC came into operation without causing any
disruptions in the interbank payment flow.
Experience since the Introduction of SIC17
Participants and payment volumes When SIC began
operation on 10 June 1987, eight participating institutions were linked to the system. On that first day,
13,300 payments totalling 80 billion Swiss francs
($54.8 billion) were processed. By the end of
November 1988 the number of participants linked
on-line to SIC had risen to 156. (In comparison,
CHIPS has 136 participants and Fedwire serves
almost 7,000 depository institutions.) Further, the
number of transactions per day approached 170,000
and the maximum peak day volume had increased
to over 300,000 payments (Figure 3). But the expansion of average daily value of payments over the
same period seems less dramatic
because large payments, which account for the
major part of the volume of funds, have been executed through SIC from the very first day (Figure
4). Still, peak day volume surpassed 200 billion Swiss
francs for the first time in November 1988.
17 The Appendix treats the subject of this section in more detail.


Figure 3
Number of SIC Payments Per Day
July 1987-November 1988
(Vertical axis in thousands of payments; horizontal axis from July 1987 through November 1988.)

SIC will completely replace Bank Clearing in
January 1989. If a bank not linked on-line to SIC
wishes to make a payment through SIC, it does so
through a correspondent linked to SIC.
Distribution of payment size While all large (one
million Swiss francs or more) payments have been
processed through SIC since June 1987, the proportion of small (up to 5,000 Swiss francs) payments
has increased in terms of number of transactions as more participants have been added to SIC.
In September 1987 small payments constituted almost 50 percent of transactions, but by November 1988 their proportion had grown to about 77 percent. At the same time, the proportion of large payments had fallen from 23 percent to about 5 percent of the total number of transactions.
But in terms of value, only large payments are of
any importance. Further, the distribution of values
of payments has not changed markedly over time.
Specifically, in September 1987 large payments comprised about 99 percent of total payment value, while
by November 1988 the proportion had only fallen
slightly to just under 98 percent.



Figure 4
Value of SIC Payments Per Day
July 1987-November 1988


Figure 5
Reserve Balances of SIC Participants
July 1987-November 1988
Note: Balances are monthly averages of daily figures.

On United States holidays, SIC payment volumes
in terms of value fall to levels of less than 10 percent of average daily volumes. This shows that large
payments derive chiefly from foreign exchange transactions. It also shows that comprehensive risk
analyses and risk measures must take into account
the interdependence of the various national funds
transfer systems.
Use of reserve account balances It is difficult to determine the effect of SIC on the demand for reserve account balances because new liquidity regulations took effect on 1 January 1988.18 Essentially, reserve requirements in Switzerland are now fulfilled by banks holding cash along with deposits with the Postal Giro System. Thus the deposits banks hold with the Swiss National Bank are for all practical purposes excess reserves. The results are shown in Figures 5 and 6. The level of reserve account balances held by SIC participants with the Swiss National Bank declined from over 7.0 billion Swiss francs in January 1988 to 3.2 billion Swiss francs by November 1988 (or from $4.8 billion to $2.2 billion). The ratio of daily value of SIC payments to the level of reserve account balances (that is, daily turnover) increased during the same period from approximately twelve to well over thirty.

18 Birchler (1988).



Figure 6
Turnover of Reserve Balances of SIC Participants Per Day
July 1987-November 1988
Note: Turnover is the ratio of average daily payment value to average daily reserve balances.
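Turnover is simple arithmetic, so the magnitudes reported above can be checked directly. In the sketch below, the balance figures (over 7.0 and 3.2 billion francs) come from the text, while the daily payment values are representative points within the reported 60 to 140 billion franc range, not exact figures.

```python
# Turnover = average daily payment value / average daily reserve balances.
def turnover(daily_value_bn: float, reserve_balances_bn: float) -> float:
    return daily_value_bn / reserve_balances_bn

print(round(turnover(85.0, 7.0), 1))    # early 1988: approximately 12
print(round(turnover(100.0, 3.2), 1))   # November 1988: "well over thirty"
```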

Changes in input and settlement times Since the introduction of SIC, input behavior has shifted toward earlier input times as additional participants have been linked to the system and payment volumes have expanded. Further, on 1 April 1988 a new transaction price structure was introduced. The receiving bank pays a flat fee for each message received, and the fee does not change during the day. In addition, the sending bank is charged a two-part price for each transaction, and each part increases at specified times during the day. One part of the price is based on time of input, the other on time of settlement. For example, a payment entered and settled before 8 a.m. would carry the lowest price, while a payment entered before 8 a.m. but not settled until after 8 a.m. would carry a higher price. The highest price would be charged for payments input and settled after 2 p.m.
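The structure of the two-part price can be made concrete with a small sketch. The step times (8 a.m. and 2 p.m.) come from the worked example above; the franc amounts and the assumption of exactly three brackets per part are invented for illustration.

```python
# Schematic two-part transaction price for the sending bank: one part keyed
# to time of input, the other to time of settlement, each rising in steps.
# Bracket boundaries follow the text; the fee amounts are invented.
from datetime import time

FEE_BRACKETS = [(time(8, 0), 0.10), (time(14, 0), 0.20), (time(23, 59), 0.40)]

def part_price(t: time) -> float:
    """Return the fee for one part of the price, given the relevant time."""
    for deadline, fee in FEE_BRACKETS:
        if t <= deadline:
            return fee
    return FEE_BRACKETS[-1][1]

def sender_price(input_time: time, settlement_time: time) -> float:
    return part_price(input_time) + part_price(settlement_time)

# Entered and settled before 8 a.m.: the lowest price.
assert sender_price(time(7, 30), time(7, 45)) == 0.20
# Entered before 8 a.m. but settled later: a higher price.
assert sender_price(time(7, 30), time(10, 0)) == 0.30
# Input and settled after 2 p.m.: the highest price.
assert sender_price(time(15, 0), time(15, 30)) == 0.80
```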
By charging sending banks lower prices for
payments entered and settled early in the day, it was
hoped that participants would enter their payments
a little sooner and thereby contribute to improved
coordination of incoming and outgoing payments.
While the new prices may have helped the move to
earlier input times, settlement times have not become
appreciably earlier. In fact, as reserve balances are
reduced the settlement times are increasingly
squeezed toward the end of the day.
Speed of processing Outgoing payments have to wait
for incoming payments unless the bank synchronizes
payments in such a way that available reserve account
balances are sufficient for immediate settlement. The
“waiting time” is the intervening period between the
receipt of a payment by SIC and its settlement. If
sufficient funds are available to settle a payment, the
waiting time is about 30 seconds. If sufficient funds
are not available, payments can be stored in the
waiting queue for minutes or even hours.
The speed with which processing takes place in
SIC depends on the value distribution of the payment flow, the level of participants’ reserve account
balances, and the degree of synchronization of incoming and outgoing payments. Speed may be increased for a given payment flow by raising the level
of reserve account balances, by improving coordination between outgoing and incoming payments, or
by exchange of intraday funds among the participants.
But such measures involve costs that must be
weighed against the advantages of a higher processing speed.
In November 1988, approximately 30 percent of
all transactions were settled within ten minutes and
approximately 55 percent within two hours of
having been entered. This is a decrease from the
corresponding figures of 43 percent and 79 percent
a year earlier. More noticeable has been the drop in
payments settled within five hours of input. While 99 percent of payments were settled within five hours in November 1987, the proportion had declined to about 85 percent a year later.
In electronic funds transfer systems that execute
payment orders unconditionally, payments are processed without any significant delays. In the SIC
system, in contrast, delays of up to a few hours may
occur. This is the price to be paid for avoiding account overdrafts in the payment process. Compared
with the Bank Clearing System, however, processing through SIC is much quicker. Consequently,
delays have never been mentioned as a shortcoming of SIC.
Payment gridlock Related to use of reserve balances
and speed of processing is the issue of payment
gridlock, a situation in which no payments move over
a system because they are all awaiting incoming funds
for cover. Gridlock becomes more likely as reserve
balances fall. The level of SIC reserve balances at
which gridlock becomes a frequent problem depends
on the number and value of large payments and the
input behavior of participants. The question is: Are
there incentives that prevent the transaction demand
for reserves from dropping to the gridlock level? If
not, then SIC could conceivably degenerate into a system with input in real-time but settlement in batch
mode at the end of the day unless administrative
measures were taken to force participants to hold
sufficient reserves.
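Gridlock can be illustrated with a toy version of the settlement pass, reusing the queue idea sketched earlier. All figures are invented: with reserves too low, every pending payment waits on incoming cover and a pass over the queue accomplishes nothing, while a modest addition of reserves lets the same circle of payments clear.

```python
# Toy illustration of payment gridlock under a no-overdraft rule.
def settle_pass(balances, pending):
    """One pass over pending payments; returns the payments that settled."""
    settled = []
    for payment in list(pending):
        sender, receiver, amount = payment
        if balances[sender] >= amount:
            balances[sender] -= amount
            balances[receiver] += amount
            settled.append(payment)
            pending.remove(payment)
    return settled

balances = {"A": 1, "B": 1, "C": 1}
pending = [("A", "B", 5), ("B", "C", 5), ("C", "A", 5)]

if not settle_pass(balances, pending) and pending:
    print("gridlock: every payment is waiting for incoming cover")

# Raising one balance breaks the circle: with A at 5, a single pass settles
# A->B, then B->C, then C->A in turn.
balances["A"] = 5
assert len(settle_pass(balances, pending)) == 3
```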
But there are factors that should prevent reserves
from dropping to levels that threaten gridlock. First,
since payments are not delivered to receiving banks
unless settlement has occurred, receiving banks and
their customers may exert pressure for higher reserve
balances. Second, the costs associated with the squeezing of settlement times toward the end of the clearing day (or, in the extreme case, the costs associated with a gridlock) should deter banks from allowing their reserve balances to decline to unsafe levels. In addition, the Swiss National Bank's lending policies and the way in which rules (such as delays
of cutoffs) are enforced will help shape banks’ reserve
demand.
Overall Assessment
SIC is a centralized gross settlement system
created to process interbank payment transactions with no daylight overdrafts and therefore no systemic
risk or Swiss National Bank intraday credit risk.
Experience shows that the objectives of implementing the system have been achieved: First, it provides an infrastructure that supports liquidity planning and monitoring in real time. Second, it expedites and improves the quality of payment transactions. Finally, it rationalizes the processing of payments by means of immediate transmission and processing of information.
Compared with the traditional Bank Clearing
System, SIC offers considerable advantages both
to participants and to the Swiss National Bank.
Experience has shown that at least in Switzerland
the main problem arising in connection with gross
settlement systems, the elimination of account overdrafts, can be solved. Liquidity problems cannot be
avoided even with this system. It does ensure,
however, that in such cases the Swiss National Bank
has the flexibility to decide whether or not it wishes
to provide support as lender of last resort.

References
Bank for International Settlements (1985): Payment Systems in
Eleven Developed Countries. Rolling Meadows, Illinois: Bank
Administration Institute.
Belton, T.M., M.D. Gelfand, D.B. Humphrey, and J.C.
Marquardt (1987): “Daylight Overdrafts and Payments
System Risk,” Federal Reserve Bulletin 73, November,
pp. 839-52.
Birchler, U.W. (1988): “Neue Liquiditätsvorschriften und Geldpolitik,” Geld, Währung und Konjunktur, Quartalsheft der Schweizerischen Nationalbank 6, pp. 75-81.
Birchler, W. (1987): “Swiss Interbank Clearing bewältigt bis
zu 50’000 Transaktionen pro Stunde,” Fides Mitteilungen,
no. 54, March.
Buomberger, P. (1987): “SIC-eine positive Innovation auf
dem Finanzplatz Schweiz,” SBG Wirtschafts-Notizen, July,
pp. 11-13.
Corrigan, E.G. (1987): “Financial Market Structure: A Longer
View,” Federal Reserve Bank of New York, Annual
Report, 1986, pp. 3-54.
Dudley, W.C. (1986): “Controlling Risk on Large-Dollar Wire
Transfer Systems,” in Technology and the Regulation of
Financial Markets, pp. 121-35. Edited by A. Saunders and
L.J. White. Lexington, Massachusetts: Lexington Books.
Fischer, F. and W. Hurni (1988): “Erste Erfahrungen mit dem
Schweizerischen Interbank-Clearingsystem (SIC) aus der
Sicht einer Grossbank,” Wirtschaft und Recht 40, pp. 50-62.
Granziol, M.J. (1986): “Notenbankpolitische Aspekte des Zahlungsverkehrs,” Geld, Währung und Konjunktur, Quartalsheft der Schweizerischen Nationalbank 4, pp. 263-69.

Hess, M. (1988): “Die Rechtsgrundlagen des Swiss Interbank
Clearing (SIC),” Wirtschaft und Recht 40, pp. 31-49.
Humphrey, D.B. (1986): “Payments Finality and Risk of
Settlement Failure,” in Technology and the Regulation of
Financial Markets, pp. 97-120.
. (1987): “Payments System Risk, Market
Failure, and Public Policy,” in Electronic Funds Transfers
and Payments: The Public Policy Issues, pp. 83-109. Edited
by E.H. Solomon. Boston: Kluwer-Nijhoff.
Lehmann, G.D. (1984): “Das neue schweizerische Bankenclearing System,” Oesterreichisches Bankarchiv 32, pp. 423-28.
. (1986): Zahlungsverkehr der Banken. Zurich:
Verlag des Schweizerischen Kaufmännischen Verbandes.
Mengle, D.L. (1985): “Daylight Overdrafts and Payments
System Risks,” Federal Reserve Bank of Richmond,
Economic Review 71, May/June, pp. 14-27.
Mengle, D.L., D.B. Humphrey, and B.J. Summers (1987):
“Intraday Credit: Risk, Value, and Pricing,” Federal Reserve
Bank of Richmond, Economic Review 73, January/February,
pp. 3-14.
Meyer, H. (1985): “Die Rolle der Schweizerischen Nationalbank im Zahlungsverkehr,” Bank und Markt, no. 9, pp. 5-9.
Müller, R. (1986): “SIC als Meilenstein im Zahlungsverkehr,”
Schweizer Bank, January, pp. 53-57.
SIC (1986): “Das Online-Clearingsystem der Schweizer
Banken,” IBO 915010, October.


. (n.d.): “Das Rechenzentrum der Banken.”

Smoot, R.L. (1985): “Billion-Dollar Overdrafts: A Payments Risk Challenge,” Federal Reserve Bank of Philadelphia, Business Review, January/February, pp. 3-13.

Stevens, E.J. (1984): “Risk in Large-Dollar Transfer Systems,” Federal Reserve Bank of Cleveland, Economic Review, Fall, pp. 2-16.

Telekurs (1987): “Das Online-Clearingsystem SIC, Update,” Informationsbulletin der Telekurs AG, January.

Vital, C. (1987): “Das neue Interbank-Zahlungsverkehrssystem SIC,” Neue Zürcher Zeitung, 26 August 1987.

Walder, R. (1987): “SIC: Das Online-Clearingsystem ist in Betrieb,” Der Schweizer Treuhänder, October, pp. 421-24.

APPENDIX: Survey of SIC Transactions
The growth of SIC transactions and of participation is shown in Table I. The large spread between
daily average volume and peak day volume is
attributable to the bulk payment transactions that are
concentrated at the end of the month. By November
1988, peak day value of transactions passed 200
billion Swiss francs. Table II shows that the distribution of both number and value of payments transacted through SIC has been very uneven. While small
payments have grown as a percentage of number of

transactions, large payments have predominated in
terms of value from the beginning of the system.
Tables III and IV give an overview of input
behavior and the settlement of daily SIC payment
flows from September 1987 to October 1988 in the
form of monthly averages of daily figures. Table III
lists percentages of daily volume in terms of the
number of entered and settled payments for various
times of the day. Table IV lists the corresponding
percentages in terms of value. The tables show that

almost half of all payments are entered before 8 a.m.
on the settlement day, either at night or on previous
days in the form of payment orders with pre-stated
value dates. But up to that time less than half these
payments have actually been settled. By 2 p.m. (one
hour prior to Cutoff One) over 90 percent of the
transaction volume and almost 95 percent of the value
have been input, although only about 70 percent of
the transactions have been settled.

Table I
SIC PARTICIPANTS AND TRANSACTIONS
July 1987-October 1988

Table II
VALUE DISTRIBUTION OF SIC PAYMENT FLOW
(Proportions in terms of number and value)

Table III
NUMBER OF PAYMENTS BY TIME OF DAY
(Percentage share of total)
Note: Monthly average of daily figures.

Table IV
VALUE OF PAYMENTS BY TIME OF DAY
(Percentage share of total)
Note: Monthly average of daily figures.
Waiting time is the period between receipt of a payment by SIC and its settlement. Figures A.1 and A.2
show percentage shares of the overall volume in terms
of number and value for different waiting time classes.
Since the processing speed observed during normal
working hours is of primary interest, all payments settled before 8 a.m. are considered to have a waiting
time of zero.
Figure A.1 shows that approximately 30 percent
of all transactions are settled within ten minutes and
approximately 55 percent within two hours of
having been entered. Some transactions may remain
in the waiting queue for several hours. The figures
for value of transactions are lower (Figure A.2). Processing time for large payments is a little longer than
that for small payments. Note that the percent of
transactions taking more than five hours to settle increased during the second half of 1988. This corresponds to the decline in reserve balances held with
the Swiss National Bank.


Figure A.1
Time Lag Between Input and Settlement of the SIC Payment Volume
(Percentage share of total)

Figure A.2
Time Lag Between Input and Settlement of the SIC Payment Value
(Percentage share of total)


INTERNATIONAL RISK-BASED
CAPITAL STANDARD:
HISTORY AND EXPLANATION
Malcolm C. Alfriend*

* Malcolm C. Alfriend is Examining Officer at the Federal Reserve Bank of Richmond.

Introduction

A business firm’s capital is expected to serve a
variety of purposes. In the case of a bank, capital
helps establish a level of confidence sufficient to
attract enough deposits to fund its operations. Further, capital serves as a cushion to absorb unforeseen losses so that the bank can continue in business.
Agreement on what constitutes sufficient capital,
however, is not always easy to reach. In fact, from
the earliest attempts to measure capital adequacy
bankers and regulators have disputed what constitutes
“capital” and what is “adequate.”
During the last two decades banks have expanded into new activities. There have also been
inroads by nonregulated, nonbank financial institutions into traditional banking activities and increased “globalization” of banking and finance. These
developments have made the proper measurement
of capital adequacy an urgent matter.
In late 1987, the Basle Committee on Banking
Regulations and Supervisory Practices, composed of
representatives of the central banks of major industrialized countries under the aegis of the Bank for
International Settlements (BIS), developed a riskbased framework for measuring capital adequacy.
The Committee’s objective was to strengthen the
international banking systems and to reduce competitive inequalities arising from differences in capital
requirements across nations.
This article sketches the historical evolution of
attempts to measure capital adequacy leading to the
Basle accord. It also reviews how capital measures
of U.S. banks would change under the risk-based
framework and how the new guidelines would affect
the larger banking organizations headquartered in the
Fifth Federal Reserve District.

Historical Perspective

Until World War II, the Federal bank regulatory
agencies1 measured capital adequacy as a percent
of total deposits or assets. Prior to the Great Depression of the 1930s, the capital-to-deposit ratio was
used. This ratio measured bank liquidity. During the
depression the emphasis shifted to measures of
solvency, centered around the capital-to-asset ratio.
During World War II bank assets expanded rapidly,
primarily as a result of investments in U.S. government bonds. The Federal Reserve, in seeking a way
to avoid penalizing banks for investing in these low-yield and “riskless” assets, devised a new ratio of
capital to risk assets. For this purpose, risk assets
were defined as total assets excluding cash, balances
due from other banks, and U.S. government
securities. Initially, a 20 percent standard for this ratio
was established as “sufficient” capital. Thus, beginning in the mid-1940s the concept of capital adequacy
became associated with the risks inherent in the
earning-asset portfolio.

1 The three Federal regulatory agencies having responsibility for commercial banks are the Federal Reserve System (Fed), the Federal Deposit Insurance Corporation (FDIC), and the Office of the Comptroller of the Currency (OCC).
In 1952 the Federal Reserve adopted an adjusted
risk asset approach to measuring capital. All assets
were categorized according to risk with separate
capital requirements assigned to each category. The
minimum total capital required was the sum of the
capital requirements of each category. Banks that exceeded this minimum by 25 percent rarely had their
level of capital questioned.
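In arithmetic terms, the 1952 approach reduces to a weighted sum. The sketch below illustrates it; the category names and capital percentages are invented, since the article does not list the actual 1952 categories.

```python
# Sketch of the 1952 adjusted risk asset approach: each asset category
# carries its own capital requirement, and the minimum total capital is
# the sum across categories. Categories and percentages are illustrative.
CAPITAL_REQUIREMENT = {
    "cash_and_governments": 0.00,
    "investment_securities": 0.05,
    "loans": 0.10,
    "fixed_assets": 0.25,
}

def minimum_capital(portfolio: dict) -> float:
    return sum(amount * CAPITAL_REQUIREMENT[cat]
               for cat, amount in portfolio.items())

portfolio = {"cash_and_governments": 40, "investment_securities": 20,
             "loans": 35, "fixed_assets": 5}
required = minimum_capital(portfolio)
print(required)          # 0 + 1.0 + 3.5 + 1.25 = 5.75
# Banks exceeding the minimum by 25 percent rarely had their capital questioned.
print(required * 1.25)   # about 7.2
```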
In 1956 the Fed further refined its capital standard by coupling the adjusted risk asset approach with a liquidity test. The FDIC and OCC followed the lead of the Fed and also adopted this principle for measuring capital. The test required more capital from less liquid banks. It also considered some off-balance sheet items. The new standard assigned
different percentages of capital to the various categories
of assets and liabilities. These percentages were
used to derive the total amount of capital needed to
protect the bank from losses on investments and from
reductions in deposits and other liabilities. A ratio
of actual capital to required capital was calculated, and
if the ratio was less than 80 percent, a bank was
generally considered undercapitalized.
In 1962 the Comptroller of the Currency abandoned the risk assets standard on the grounds that
it was arbitrary and did not consider factors such as
management, liquidity, asset quality, or earnings
trends. Moreover, the Fed, FDIC, and OCC
disagreed over what constituted capital. The Fed
continued to define capital as equity plus reserves
for loan losses. In contrast, the FDIC and OCC
allowed some forms of debt to count as capital. Thus,
in the early 1960s regulatory opinion on capital adequacy became divided. The FDIC relied on a capital
to average total asset ratio excluding fixed and
substandard assets. The Federal Reserve continued
to use risk assets as the denominator in its capital
ratios although it frequently revised its definition of
risk assets. For the remainder of the 1960s and ‘70s,
the Federal bank regulators continued to use different
definitions of capital and methods of measuring capital
adequacy.
In 1972 the Fed capital standard was revised again.
Asset risk was separated into “credit risk” and “market
risk” components. In addition, banks were required
to maintain a higher capital ratio to meet the test of
capital sufficiency. Further, the Fed reintroduced
both the capital to total asset and capital to total
deposit ratios. This time, however, the former ratio was based on total assets less cash and U.S. government securities, a rough “risk asset” adjustment. In
practice, bankers and analysts used the FDIC and
Fed standards more than those of the OCC.
None of the agencies established a firm minimum
capital ratio. Instead, the capital positions of banking institutions were evaluated on an individual bank
basis. Particular attention was directed toward smaller
banks whose loan portfolios were not as diversified
and whose shareholders were fewer in number than
those of larger institutions. It was reasoned that small
or “community banks” might have a hard time raising capital in times of difficulty and therefore should
be more highly capitalized at the start than larger institutions. Table I shows the banking industry's capital-asset ratios from 1960 to 1980. The table shows that there was a steady downward drift in the ratio, which can be explained by a number of factors. Chief among these would be the attractiveness of increased leverage in banking and reliance on other techniques to manage balance sheets, e.g., liability management.

Table I

RATIO OF EQUITY CAPITAL TO TOTAL ASSETS
1960-1980
(Percent)

Year-end    All banks
1960          8.1
1965          7.5
1970          6.6
1975          5.9
1980          5.8
In late 1981 the three Federal bank regulatory
agencies announced a new coordinated policy related
to bank capital. The policy established a new definition of bank capital and set guidelines to be used in
evaluating capital adequacy. The new definition of
bank capital included two components: primary and
secondary capital.
Primary capital consisted of common stock,
perpetual preferred stock, surplus, undivided profits,
mandatory convertible instruments (debt that must
be convertible into stock or repaid with proceeds
from the sale of equity), reserves for loan losses, and
other capital reserves. These items were treated as
permanent forms of capital because they were not
subject to redemption or retirement. Secondary
capital consisted of nonpermanent forms of equity
such as limited-life or redeemable preferred stock and
bank subordinated debt. These items were deemed
nonpermanent since they were subject to redemption or retirement.
In addition to the new definition of capital, the
agencies also set a minimum acceptable level for
primary capital and established three zones for classifying institutions according to the adequacy of their
total capital. As shown in Table II, different standards were applied to “regional” and “community” banking organizations. “Multinational” banks were excluded from the measurement system altogether. Multinational organizations were defined as those with consolidated assets above $15 billion. There were seventeen such organizations in 1981. Regionals were defined as organizations with assets from $1 to $15 billion, while community organizations included all companies under $1 billion.

Table II

ACCEPTABILITY ZONES FOR TOTAL CAPITAL ESTABLISHED IN 1981

Zone    Regional organizations    Community organizations
1       Above 6.5%                Above 7%
2       5.5% to 6.5%              6% to 7%
3       Below 5.5%                Below 6%
The Fed and OCC established minimum ratios of
primary capital to total assets of 5 percent and 6 percent for the regional and community organizations,
respectively. If an institution’s primary capital exceeded the minimum and total capital was in
Zone 1, its capital was assumed to be adequate. For
organizations with capital ratios in Zone 2, other factors such as asset quality and the level and quality
of earnings entered the determination of capital
adequacy.
The FDIC’s capital adequacy guidelines set a 5
percent minimum for the equity capital ratio, defined as capital minus 100 percent of assets classified
as loss and 50 percent of assets classified as doubtful at the most recent examination. In addition, the
FDIC excluded limited-life preferred stock and subordinated debt from its definition of capital. These items must be repaid and, unlike true capital, are not available to absorb losses.
In 1983 the Fed amended its guidelines to set a
minimum capital ratio of 5 percent for multinational

organizations. It also expanded the definition of
secondary capital to include unsecured long-term debt
of holding companies and their nonbank subsidiaries.
In 1985 the Fed guidelines were amended once again
when the uniform minimum primary capital ratio was
set at 5.5 percent and uniform total capital at 6 percent. In addition, new zones for measuring the adequacy of total capital were adopted, namely, greater
than 7 percent, 6 to 7 percent, and less than 6
percent.
In reaction to the use of a simple capital-to-asset ratio, banks began to adjust their portfolios, increasing the share of higher yielding assets that required no more capital than lower yielding assets. In particular, some banks switched from short-term, low-yield, liquid assets to higher yielding but riskier assets (i.e., loans). Also, since the capital requirements only
applied to assets carried on the balance sheet, banks
began to expand off-balance sheet activities rapidly.
Some institutions attained their ratios by packaging
assets and selling them to investors, reducing their
risk in the process.
While the ratio of capital to total assets served as
a useful tool for assessing capital adequacy for a time,
it became increasingly apparent that the type of risks
being assumed by banks required a new approach
to measuring capital. Accordingly, in February 1986,
the Fed proposed standards for measuring capital
on a risk-adjusted basis. The proposal, followed
shortly by a similar proposal from the OCC, was
designed to: 1) address the rapid expansion of off-balance sheet exposure; 2) reduce incentives to
substitute higher-risk for lower-risk liquid assets; and
3) move U.S. capital policies more closely into line
with those of other industrialized countries.
Under the Fed proposal, assets and certain off-balance sheet items were assigned to one of four
broad risk categories and weighted by their relative
riskiness. The sum of the weighted asset values
served as the risk asset total against which primary
capital was to be compared. The resulting ratio was
to be used together with the existing primary and
total capital-to-total asset ratios in determining capital
adequacy.
Before the 1986 proposal could be put into effect,
however, the U.S. bank regulators requested public
comment on a revised risk-based capital framework
for banks and bank holding companies. This proposal, announced in January 1987, was developed
jointly by U.S. and Bank of England authorities.
During the comment period on the revised proposal,
the U.S. bank regulators continued to seek international agreement on the proposal, an effort that led
in December 1987 to still another framework for risk-based capital that had been developed jointly with representatives from 11 other leading industrial countries.2 This proposal has undergone continued refinement, and final guidelines were officially adopted in December 1988.

2 Belgium, Canada, France, Germany, Italy, Japan, Netherlands, Sweden, United Kingdom, United States, Switzerland, and Luxembourg.
The Risk-Based Capital Framework
The risk-based capital (RBC) framework, which was adopted as an international standard, addresses primarily credit risk. It has four broad elements:
1. A common international definition of capital.
Core or Tier 1 capital consists of permanent
shareholders’ equity. Supplemental or Tier 2
capital is a “menu” of internationally accepted
non-common equity items to add to core
capital. Each country has some latitude as to
what supplemental components will qualify as
capital.
2. Assigning one of four risk weights (0, 20, 50, and 100 percent) to assets and off-balance sheet items on the basis of broad judgments of relative credit risk. These categories are used to calculate a risk-based capital ratio, as illustrated in the sketch following this list. Off-balance sheet items are also assigned a credit conversion factor that is applied before the risk weight.3
3. A schedule for achieving a minimum 7.25
percent risk-based capital ratio by the end
of 1990 (3.625 percent from Tier 1 items)
and 8 percent by the end of 1992 (4 percent
from Tier 1 items).
4. A phase-in period, from 1990 to 1992, during
which banking organizations can include some
supplemental capital items in Tier 1 capital
on a temporary basis.
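Elements 1 through 3 amount to a small calculation, sketched below with invented balance sheet figures. The risk weights for cash and the commitment example come from the text and footnote 3; the weights assumed for the other asset categories follow the usual Basle convention and should be read as assumptions, not as the regulatory assignments themselves.

```python
# Hedged sketch of the risk-based capital ratio. Amounts are invented;
# the 20 and 50 percent weights for interbank claims and residential
# mortgages are assumed, not taken from the article.
RISK_WEIGHT = {"cash": 0.00, "interbank": 0.20,
               "residential_mortgage": 0.50, "commercial_loan": 1.00}
CONVERSION = {"long_term_commitment": 0.50}   # footnote 3 example

assets = [("cash", 100), ("interbank", 200),
          ("residential_mortgage", 300), ("commercial_loan", 400)]
off_balance = [("long_term_commitment", "commercial_loan", 200)]

rwa = sum(amount * RISK_WEIGHT[cat] for cat, amount in assets)
for kind, cat, amount in off_balance:
    credit_equivalent = amount * CONVERSION[kind]   # conversion factor first,
    rwa += credit_equivalent * RISK_WEIGHT[cat]     # then the risk weight

tier1, tier2 = 40.0, 25.0
print(rwa)                      # 0 + 40 + 150 + 400 + 100 = 690
print(tier1 / rwa)              # about 5.8 percent core ratio
print((tier1 + tier2) / rwa)    # about 9.4 percent total ratio
# End-1992 standard: total capital >= 8 percent, at least 4 percent from Tier 1.
assert tier1 / rwa >= 0.04 and (tier1 + tier2) / rwa >= 0.08
```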
The RBC framework focuses on credit risk only.
As such, the proposal does not take into account
other factors that affect an organization’s financial condition, such as liquidity and funding. Also overlooked are factors such as interest rate risk, concentrations of investments and loans, quality and level
of earnings, problem and classified assets, and quality
of management. These factors must also be considered in measuring financial strength and they will
continue to be assessed through the examination process. Further, the Fed Board of Governors has indicated that it may consider incorporating interest
rate risk before the new RBC takes effect.
Risk-based and traditional capital policies The
international risk-based capital standard differs in
some respects from all the previous risk-based capital
proposals made by U.S. regulators. It reflects changes
suggested by banking supervisors in foreign countries and comments received from the public. An
important aspect of the implementation of the RBC
standard in the United States is that it will apply to
all banks, not just international banks as required by
the Basle accord. Further, the Fed has determined
that a risk-based ratio similar to the risk-based capital
framework for banks will be applied to bank holding
companies on a consolidated basis. The difference
in the capital framework for banks and the framework
for bank holding companies rests with a slightly
broader definition of capital for bank holding companies. The following is a brief review of the principal differences between the RBC framework and
3 Each balance sheet item is multiplied by the appropriate risk weight to arrive at its risk-weighted amount. For example, cash is assigned a zero weight. Off-balance sheet items are first multiplied by a credit conversion factor to arrive at a credit equivalent amount, and then by the appropriate risk weight. For example, a long-term loan commitment to a private corporation has a conversion factor of 50 percent and a risk weight of 100 percent.

traditional capital guidelines that have been used in
the United States.
Core and supplemental capital components The RBC standard, like the 1987 U.S./U.K. proposal, divides capital into two components: core capital (Tier 1) and supplemental capital (Tier 2). After an initial phase-in period, core capital will consist entirely of permanent shareholders' equity, which is defined in Table III. This is in contrast to the current definition used by U.S. banking regulators, which includes both common and perpetual preferred stock, mandatory convertible debt instruments, and the allowance for loan and lease losses. While mandatory convertible debt instruments may be included in core
capital to a limited degree during the phase-in period,
after 1992 these components can be used only as
supplemental capital.
In the case of bank holding companies, both
cumulative and noncumulative perpetual preferred
stock are included in core capital. The aggregate
amount of perpetual preferred stock included cannot exceed 25 percent of core capital, however.
Perpetual preferred stock in excess of this percentage can be included in Tier 2 without limit.4 B y
allowing bank holding companies to include some
cumulative perpetual preferred stock in core capital,
the Fed is giving bank holding companies more flexibility in raising capital while recognizing the value
of perpetual preferred stock in the holding companies’
capital structure. At the same time, the limits on the
maximum amount of preferred stock included in
Tier 1 are meant to protect the integrity of a holding
company’s common equity capital base.
The Fed also may designate certain subsidiaries
whose capital and assets may be excluded from capital
requirements. Securities affiliates of bank holding companies fall into this category. Before a subsidiary can be excluded, however, the Fed has specified that strong barriers between affiliates, adequate capitalization of nonbank subsidiaries, and any other protections that it deems necessary must first be in place to safeguard the health of affiliated banks.
Table IV shows the results of applying the concept of RBC core capital to the 35 largest banking
organizations in the Fifth District, i.e., those organizations with total assets greater than $500 million
as of mid-1988. The calculations are estimates
only, inasmuch as the information necessary for
4 “Dutch Auction” preferred stocks are those types of preferred stock (including remarketable preferred and money market preferred) on which the dividend is reset periodically to reflect current market conditions and an organization's current credit rating. These stocks are excluded from Tier 1 but may be included in supplemental capital without limit.

FEDERAL RESERVE BANK OF RICHMOND

31

Table III

RISK-BASED CAPITAL COMPONENTS
Core Capital
Common stock, at par value
Perpetual preferred stock (preferred stock having no stated maturity date and which may not be redeemed at the option
of the holder)
Surplus (amounts received for perpetual preferred stock and common stock in excess of its par or stated value but
excluding surplus related to limited-life preferred stock, capital contributions, amounts transferred from retained
earnings and adjustments arising from Treasury stock transactions)
Minority interest in consolidated subsidiaries
Retained earnings
Less: Treasury stock (the cost of stock issued by the institution and subsequently acquired, but that has not been
retired or resold)
Goodwill (excess of cost of an acquisition over the net asset value of the identifiable assets and liabilities acquired)
Supplemental Capital

Limited-life preferred stock including related surplus
Reserve for loan and lease losses
Perpetual debt (unsecured debt not redeemable at the option of the holder prior to maturity, but which may participate
in losses, and on which interest may be deferred)
Mandatory convertible securities (equity commitment and equity contract notes: subordinated debt instruments maturing in 12 years or less; holders may not accelerate the payment of principal; must be repaid with common or preferred stock or proceeds from the sale of such issues)
Subordinated debt (with an original maturity of not less than 5 years)

precise calculation of the ratios is not currently available. For example, some of the items included in the capital components are not currently reported by banking organizations, and a breakdown of risk assets and off-balance sheet items is not currently available.

Table IV

ESTIMATED RISK-BASED CAPITAL POSITION BY SIZE GROUP
FOR FIFTH DISTRICT BANK HOLDING COMPANIES
(Percent weighted average)
June 30, 1988

Asset Size                 Tier 1    Tier 1 plus Tier 2    Primary Capital to Total Assets
Over $15 billion             7.5            7.0                        9.5
$5-$15 billion               7.7            7.3                        9.8
$1-$5 billion                8.5           10.2                       12.0
$500 million-$1 billion      8.0           10.1                       11.7


Further, data are not available to calculate the relative
share of first mortgages on 1-4 family properties in
the loan portfolio and there is not enough information to measure the percentage of loan commitments
having original maturities exceeding one year.
Likewise, a breakdown of standby letters of credit
by use is unavailable. With these limitations in mind,
the estimates show that all 35 of these organizations
are currently above the 4 percent minimum guideline
for Tier 1 capital and the 8 percent minimum standard for total capital required by the end of 1992.
Allowance for loan losses The RBC standard defines general loan loss reserves as charges against
earnings to absorb future losses on loans or leases.
Such reserves are not set aside for specific assets.
Under the RBC guidelines, the general reserve for
loan losses is relegated to supplemental capital, but
no limit is placed on the total general loan loss
reserve. After 1990, however, the reserve is limited
to 1.5 percent of weighted risk assets. After 1992
the reserve may not represent more than 1.25 percent of weighted risk assets.5 This represents a
major departure from earlier U.S. capital guidelines
in which the reserve for bad debts counted as primary
capital.
When originally proposed, the limitation on the
amount of eligible reserves seemed critical for U.S.
banks, some of which had used the one-time provision in 1987 in connection with loans to less
developed countries (LDCs) to build up reserves well
in excess of the allowable RBC percentages. Based
on June 30, 1988 data, seven of the 35 Fifth District
companies included in the study would not be able
to fully use their reserve for loan losses. All seven
companies would, however, still be above the proposed final minimum total capital standard of 8 percent. Thus, it appears the limitation may affect only the large multinational companies.
Treatment of intangibles Intangible assets arise
when the stock of a company is acquired for cash.
In a cash transaction, accounting rules require that
the assets of the acquired company be assigned a
market value. In banking, a value is also assigned to
core deposits (demand deposits and interest-bearing deposits under $100,000) under the rationale that
these deposits are valuable to the acquiring company.
The values assigned to core deposits and balance
sheet assets are denoted as identifiable intangibles.
The amount paid for a bank in excess of revalued
assets and identifiable intangibles is known as
goodwill.
Goodwill must be deducted from capital in computing the risk-based capital ratio. Identifiable intangibles, however, may or may not require the same
deduction. Different Federal bank regulators will treat
these items in compliance with their respective proposed guidelines.
For bank holding companies, the Fed will exempt
until December 31, 1992, any goodwill existing prior
to March 12, 1988, after which time it must be
deducted from capital. Any goodwill arising from an
acquisition on or after March 12, 1988, will be
deducted from capital immediately. An exception to
this rule may be made for goodwill arising from the
acquisition of a failed or problem bank. At the present time, the Fed does not plan to deduct automatically any other intangible assets from the capital
of state member banks or bank holding companies.
5 The Basle Committee on Banking Regulations and Supervisory Practices has agreed to attempt to resolve the question of what constitutes a general reserve for loan and lease losses. If an agreement can be reached, then general reserves would be included in Tier 2 without limit. Otherwise, the limitations noted above will apply.

It will, however, continue to monitor the level and
quality of intangibles, particularly where such intangibles exceed 25 percent of Tier 1 capital.
Term and subordinated debt Under current
guidelines, banks are allowed to count subordinated
debt with an original average maturity of seven years
as secondary capital. Similarly, bank holding companies may include as secondary capital unsecured
term and subordinated debt meeting the same criterion. Under the RBC standard, only subordinated
debt instruments with an original average maturity
of five years may be included as supplemental capital.
While initially there is no limitation on the amount
of such debt that may be included in Tier 2 capital,
after 1992 a limitation applies; instruments includable
in Tier 2 will then be limited to 50 percent of core
capital. According to the RBC standard, all unsecured
term debt issued by bank holding companies prior
to March 12, 1988, and qualifying as secondary
capital at the time of issuance, will be grandfathered
and included in supplemental capital. Bank holding
company term debt issued after that date must be
subordinated to qualify as supplementary capital for
the holding company.
By including subordinated debt in supplemental
capital, the Fed recognizes that subordination does
afford some protection for depositors in the event
of failure. At the same time, subordinated debt of
bank holding companies provides a cushion to senior
creditors, and thus promotes stability in funding
operations. The debt, however, is not permanent;
it must be repaid and is therefore not available to
absorb losses. In recognition of these factors the Fed
established a five-year original maturity requirement
as the minimum period necessary to provide stable
funding. In addition, a five-step amortization schedule
is used to discount subordinated debt and limited-life preferred stock as they approach maturity.
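The article states only that the schedule has five steps. Under the Basle accord the discount is commonly described as 20 percent of includable value per year over the final five years to maturity, and the following sketch assumes exactly that convention.

```python
# Assumed five-step amortization: an instrument counts fully with five or
# more years to maturity, then loses 20 percent of includable value per
# remaining whole year. The 20 percent step size is an assumption; the
# article states only that a five-step schedule applies.
def includable_fraction(years_to_maturity: float) -> float:
    if years_to_maturity >= 5:
        return 1.0
    whole_years = int(years_to_maturity)     # 4, 3, 2, 1, or 0
    return max(0.0, whole_years * 0.20)

for y in (6, 4.5, 3.2, 1.5, 0.5):
    print(y, includable_fraction(y))         # 1.0, 0.8, 0.6, 0.2, 0.0
```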

Application to All Banks
The Federal banking regulators have agreed that
the information necessary to calculate capital will be
collected routinely from institutions with assets over
$1 billion. Examiners will monitor the risk-based
capital positions of smaller institutions during on-site
examinations and inspections. Institutions with assets
under $1 billion may be required to report limited
information between examinations, but the plan is
to hold such reporting requirements to a minimum.
Summary
The adoption of an international risk-based capital
standard under the Basle accord reduces some of the
deficiencies in measurement of capital adequacy that have emerged in the 1980s. The new RBC standard
represents a major step in establishing uniform capital
standards for major international banks. The accord
should contribute to a more stable international banking system and help reduce competitive inequalities
among international banks stemming from differences
in national supervisory requirements. The application of the RBC standard to large Fifth District banking organizations shows that these organizations
exceed the minimum guidelines that will be required
in 1992. Therefore, it does not appear that Fifth District banking organizations will be among those that will need to undertake special efforts either to raise more capital or to shed assets to meet the new standard. In this regard, however, it should be noted that the standards are intended as minimums and that
rapidly expanding organizations are expected to stay
above the minimums. A number of Fifth District
bank holding companies have grown rapidly in recent years and a continuation of this growth will
necessitate the generation of new capital. The RBC
standard does not, however, take account of all the
risks to which banking organizations are exposed,
specifically, risks associated with management, liquidity, funding, and asset quality. These risks will
continue to be assessed by examiners and will be
taken into account before a final supervisory assessment of an organization’s capital is made. Further,
the Federal Reserve is studying the feasibility of
expanding the standard to address interest rate risk.
