
The Problem of Small
Change in Early Argentina
Huberto M. Ennis

Many economies, during the early stages of monetary development,
experienced what appear to be sporadic relative shortfalls of small
denomination means of payment. These episodes have been
broadly documented in the literature under the label of “shortages of small
change.” Sargent and Velde (2002), for example, review in great detail the
evidence for Europe. Hanson (1979) provides an interesting survey of the
evidence for the British colonies in North America. The purpose of this paper
is to present evidence of similar events occurring in the early monetary history
of Argentina.
The provision of small change in modern economies has become almost
a nonissue.1 In fiat money systems, the monetary authority controls the aggregate supply of monetary balances (of all denominations) and stands ready
to exchange at par any denomination for an equivalent amount of any other
denomination. It is, then, demand that determines the relative amounts of the
different denominations that circulate in the economy. There are, of course,
costs of providing the demanded amounts. Low-denomination coins tend to
be relatively more costly to produce (at least, per unit of value). Yet, in general,
governments in modern societies have considered these costs worthwhile.
The author is a Senior Economist in the Research Department of the Federal Reserve Bank of
Richmond. E-mail: Huberto.Ennis@rich.frb.org. I would like to thank Elena Bonura, Martin
Gervais, Andreas Hornstein, Bob Hetzel, Todd Keister, Robert King, Diego Restuccia, Andrea
Waddle, and Steve Williamson for helpful comments. All translated quotations are my own.
All errors are also my own. Part of this work was done while I was visiting the Central
Bank of Argentina, whose hospitality is gratefully acknowledged. The views expressed here
do not necessarily represent those of the Federal Reserve Bank of Richmond or the Federal
Reserve System.
1 High-inflation economies sometimes experience imbalances in their denomination structure, resulting in a relative scarcity of small change. The monetary authority becomes reluctant to
provide large quantities of low-denomination means of payment, which are relatively costly to
produce and, due to high inflation, have a short useful lifespan.


Under a commodity money system it is also possible, in principle, to solve
the problem of the relative supply of denominations. Basically, it requires issuing token coins that can be readily exchanged at par with the monetary
authority for full-bodied coins.2 This monetary arrangement is commonly
known as the “standard formula” (see Sargent and Velde 2002, 5). For some
time, the production of token coins represented a significant technological
challenge. Since token coins are worth more than their intrinsic value, counterfeiting is a very profitable activity under the “standard formula” system.
To avoid counterfeiting, the coins need to be fairly sophisticated, which increases their production cost. In the history of monetary systems, before the
technology for production of coins was well developed, token coins were at
best a very imperfect solution to the problem of small change.
When all coins are full-bodied coins, as was the case in Spanish America,
the potential for mismatches between the demand for and the supply of denominations is greater. The minting cost per unit of value is normally lower for high-denomination coins. To the extent that coins circulate at par value (i.e., without
discounts), there are incentives to mint only high-denomination coins. The
historical record on the shortages of small change is the story of monetary
authorities that struggled to sustain the proper mix of (full-bodied) high- and
low-denomination coins.
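To see the incentive concretely, consider a purely illustrative cost comparison (the fixed striking cost c per piece is a hypothetical assumption, not a figure from the sources cited here). If striking any coin costs roughly the same amount c per piece over and above the metal it contains, the minting cost per real of face value is

\[
\frac{c}{\text{face value in reals}}: \qquad \frac{c}{1/2} = 2c \ \text{for a medio real} \quad \text{versus} \quad \frac{c}{8} = 0.125c \ \text{for a peso},
\]

so coining small change absorbs sixteen times as much minting cost per unit of value, and a mint rewarded in proportion to the value it coins will lean toward pesos.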
Imperfect private solutions to the shortage problem are possible. For example, coins could be cut into pieces to circumvent the indivisibility barrier.
Also, small change could be exchanged at a premium. However, all these partial solutions bring forth the problems that arise when high- and low-denomination
coins do not exchange in convenient and fixed ratios. These issues are most
relevant in cases where the need for small change originates in domestic transactions. Systematic negotiation over discounts and measurement of effective
coin weight, in this case, involve very low-value amounts, most likely not
worth the trouble.

In Search of a Formal Definition
Providing a precise theoretical definition of a shortage of small change is
no simple matter. Sargent and Velde (1999; 2002) consider a model where
low-denomination money can be used for “small” and “large” transactions but
high-denomination money can only be used for “large” transactions. They then
define a “shortage” as a situation where the agents in the economy have to
adjust their consumption pattern to their holdings of low-denomination money
while, at the same time, holding an “excess” stock of the high-denomination
2 A full-bodied coin is one that has metallic content worth as much as the coin’s face value.

Token coins are subsidiary coins that have lower intrinsic value than the value at which they can
be exchanged with the monetary authority for full-bodied coins.


money. In other words, at the time of consumption decisions, the agent’s
holdings of low-denomination money act as a binding constraint (see Rolnick
and Weber [2003] for a good summary).
Such a definition provides an interesting prediction that can potentially be
confronted with the data. Basically, shortages of low-denomination money (as
defined by Sargent and Velde) coincide with the persistent depreciation of low-denomination money relative to high-denomination money. The economic intuition for this result is simple. During shortages, low-denomination money is
effectively more useful in transactions than high-denomination money. However, both monies are being voluntarily held by agents. For this situation
to be an equilibrium, the low-denomination money has to be losing value
with respect to the high-denomination money. In other words, while low-denomination money is more useful in transactions, its value is depreciating
with time. The two effects offset each other in equilibrium and leave agents indifferent (at the margin) between holding low- and high-denomination money.3
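The logic can be sketched with a simple indifference condition (this is an illustrative condition in my own notation, not the actual Sargent and Velde model). Let $e_t$ denote the value of a unit of low-denomination money measured in units of high-denomination money, and let $\ell_t \geq 0$ denote the extra marginal liquidity service that only small coins provide, which is positive exactly when the small-coin constraint binds. If agents willingly hold both monies, their marginal returns must be equal, requiring approximately

\[
\frac{e_{t+1}}{e_t}\left(1 + \ell_t\right) \approx 1, \qquad \text{or} \qquad \frac{e_{t+1}}{e_t} \approx 1 - \ell_t .
\]

A binding constraint ($\ell_t > 0$) thus goes together with $e_{t+1} < e_t$: low-denomination money depreciates relative to high-denomination money for as long as the shortage lasts.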
Wallace (2003) strongly criticizes Sargent and Velde’s definition. Wallace
points out that the historical “shortages are reports by contemporaries concerning the difficulty of carrying out trade in the face of a sudden disappearance
of some kinds of coins.” However, in Sargent and Velde’s model, agents are
“freely choosing quantities while taking prices as given,” and “nothing in the
model looks like a shortage or a disruption of trade.”
Wallace goes on to discuss what is, in his view, at the heart of the problem
of small change. He starts by specifying the four desirable properties of a
medium of exchange: portability, divisibility, durability, and recognizability;
and he describes the history of coinage as “mainly about the technological
difficulties of achieving a full-bodied coinage system that comes close to
having those attributes.” Formal economic analyses that explicitly model all
four attributes of a medium of exchange and study the denomination structure
of money are not readily available. Wallace (2003) provides an introduction
to the formal treatment of these matters within the framework provided by
the random matching models of money pioneered by Kiyotaki and Wright
(1989).4 He discusses how the indivisibility of coins, for example, could limit
the set of transactions that agents undertake and how such limitations would
clearly be perceived as shortages of small change by the agents in the model.
These are interesting theoretical avenues that should eventually improve our
understanding of the historical records.
The objective of this article is not to answer the theoretical questions
that surround the issue of the appropriate provision of small change. Those
3 Sargent and Velde (2002) do, in fact, find some supporting evidence for this hypothesis in
their review of the historical record of many European countries. Unfortunately, as we will later
see, such an evaluation is not possible with the data available for the colonial period in South
America.
4 See also Wallace and Zhou (1997) and Lee, Wallace, and Zhu (2005).


are difficult questions that involve theoretical concepts that are not yet well
understood in the literature and require much further study. Here, instead, we
will limit ourselves to reviewing the historical evidence that indicates that the
availability of small change was a problem in the late 18th and early 19th centuries in the region that is today Argentina. As noted above, similar evidence is
available for other parts of the world. However, in view of the intricate nature
of the problem involved, it seems important to collect and analyze as much
evidence as possible on this matter.

A Caveat
Before turning to the main subject of the article, a general clarification is in
order. When reading the historical records, one has to be especially careful in
differentiating the scarcity of means of payment from the general scarcity of
resources prevailing in the area. The confusion between those two different
phenomena was common at the time (for example, official resolutions would
sometimes associate the general scarcity of resources with the inability to issue
coins).5 The territory of Argentina was relatively poor at the end of the 18th
and the beginning of the 19th centuries and it relied on imports from Europe
for many essential needs. Furthermore, the recurrent military conflicts at the
beginning of the 1800s only contributed to further reducing the availability
of economic resources. But the general stringency of resources is not the
subject of this article. Instead, I intend to report evidence that suggests that
the relative scarcity of certain means of payment, and in particular fractional
money, constituted a problem in itself.
The remainder of the article is organized as follows. In the next section, I
discuss the evidence from 1776 to the 1810 Revolution, that is, the latter part of
the colonial period. Classic manifestations of the scarcity of small change appear in this period; among the main ones reviewed are attempts by the government to ban the export of fractional money, extreme difficulties in persuading the population to remint low-quality, low-denomination coins, and the development of imperfect substitutes for use in payment of small transactions. In
Section 2, I present evidence that suggests that the shortage of small change
continued to be a problem during the 15 years after the revolution. I explain
how the government struggled for ten years with the decision to issue copper coins, and how it finally put the coins into circulation in 1823 with great initial success.
Also, in 1822 the first bank in the region was created and allowed to issue
notes of moderate denominations after some initial reluctance. I discuss the
genesis of that decision in some detail. To conclude, in Section 3, I provide
5 Supple (1957, 244–45) reports that this kind of confusion was also common in 17th century
England.


some further discussion of the economic issues highlighted by the evidence
discussed.

1. EVIDENCE FROM THE PERIOD OF THE VICEROYALTY OF THE RIO DE LA PLATA

Spanish settlers had been present in the area that is today Argentina since the mid-1500s.
For the early colonial period, the available evidence indicates that there was
no widespread monetary exchange occurring in the region and that instead
barter was the predominant form of exchange (Prebisch 1921, 193; Elía 1942a, 416). Since the beginning of colonization, Spain implemented a system of
international trade restrictions in the colonies. The port of Buenos Aires could
only trade with Spain and such trade was subject to heavy taxation. These
restrictions significantly slowed down economic development for more than
two hundred years. Only in 1776, with the creation of the Viceroyalty of the
Rio de la Plata, did the representative of the king, Viceroy Ceballos, declare free
trade in the port of Buenos Aires.6 As a result, a significant increase in the
level of internal and external trade took place in Buenos Aires and its area of
influence.
During most of the early colonial period, Buenos Aires maintained a trade
deficit with Spain. This deficit resulted in a constant outflow of gold and
silver (in the form of coins, bars, and silverware).7 Coins came to Buenos
Aires from the regions of Upper Peru (the area occupied today by Bolivia and
Peru). The Royal Villa of Potosí was the major center of economic activity
in Upper Peru, with its adjacent silver mines and the regional mint house.
Smuggling of European linen and relatively inexpensive Brazilian products
was common in the port of Buenos Aires. Most of these products were sent
to Upper Peru to be sold in exchange for gold and silver coins. The proceeds,
especially high-quality coins, were then exported to Europe via Buenos Aires.
The constant outflow of coins from the port of Buenos Aires was perceived
as creating significant liquidity problems in the area of the Rio de la Plata (Elía 1942a, 420–21). For example, as a result of numerous local complaints, in
October 1618 the King of Spain passed a resolution allowing these colonies to
use “products of the land” (instead of gold and silver coins) to pay the “Indies
taxes” (Elía 1942a, 418).8 Interestingly, there is very similar evidence of
6 Before 1776, the colonies of the Rio de la Plata were under the control of the representative
of the king residing in the Peruvian area.
7 Halperín Donghi (1972, 48) reports that exports of gold and silver from Buenos Aires amounted to about 80 percent of total exports in 1796.
8 Also, in 1622, in an attempt to stop the outflow of precious metals from Upper Peru, the
Spanish Crown created the customs of Córdoba, an inland post that was supposed to control all
trade between the port of Buenos Aires and the highland regions in the northwest (Upper Peru).
The effectiveness of this measure was undermined by rampant smuggling.


perceived shortages of specie in colonial Canada at the end of the 18th century.
Redish (1984) argues that these common complaints about a general scarcity
of specie in Canada should really be interpreted as reflecting the discomfort
of merchants with what was actually a scarcity of high-quality coins. Redish
provides evidence indicating that most of the coins in circulation in late 1700s
Canada were old coins whose weight had been reduced by intentional clipping
or sweating (and the normal wear and tear of very old coins), i.e., low-quality
coins. The idea behind Redish’s interpretation is that, in accordance with
modern versions of Gresham’s Law, low-quality coins tended to drive high-quality coins out of circulation.9
In principle, it seems that Redish’s hypothesis could be applicable for
interpreting the complaints about general scarcity of specie during the early
colonial period of Argentina. However, more research is needed in this area
before reaching a more definite conclusion. Here, though, the focus will be on
trying to identify situations where the shortage could be associated exclusively
with low-denomination media of exchange, and I will restrict my study to only the
latter part of the colonial period (that is, since the creation of the Viceroyalty
of the Rio de la Plata in 1776).

The Monetary System
The foundations of the monetary system in the Viceroyalty of the Rio de la
Plata resembled those of the system in place in Spain at the time. Basically,
there were in circulation gold and silver coins minted in Spain, Mexico, and
Peru. The two main mints in the Spanish colonies of South America were
located in Lima and Potosí. Most of the coins circulating in the territories of the Rio de la Plata (today Argentina) were silver coins minted in Potosí (Elía 1942a, 429).
The Potosí mint was under the direct control of the Spanish Crown, which
held a monopoly for issuing coins. Mining, on the other hand, was a private
enterprise. The Crown, though, provided miners with most of the essential
inputs and heavily taxed their production. One of the most notorious institutions of the time was the mita, an annual recruitment of forced Indian labor
that was assigned to the different miners according to a system of concessions.
In 1779 the Spanish Crown created the Banco de San Carlos that provided
credit and other basic inputs to miners in Potosí and monopolized the purchase
9 Sargent and Velde (2002, 125) discuss the “bullion famine” of medieval Europe in the
context of their model and also conclude that talking about general shortages of coins is difficult
to rationalize. Instead, they maintain that the monetary anomalies involved were the consequences
of shortages of small change.


Figure 1 Silver Pesos Minted in Potosí Between 1767–1770

Source: http://www.historiadelpais.com.ar/

of silver in the region (Tandeter 1992, 198–99). The diezmo, a 10 percent tax,
was charged on all silver production.
The mint issued coins according to general orders from the Spanish
Crown.10 The relative supply of different denominations of coins was in
principle determined by directives from Spain and decisions by the mint’s administrators (part of the Crown’s bureaucracy). Under this scheme the supply
of fractional money, in principle, did not automatically adjust to its demand,
and imbalances became common.11 During the second half of the 18th century, a system of two-year concessions was instituted for the administration of
the mints on behalf of the Crown. The concession contracts stipulated targets
for the cost of production and the proportion of fractional money to be minted
(Dargent Chamot 2005, Ch. 15).
There were two types of silver coins in circulation at the time of the creation
of the Viceroyalty: the coins called de cordoncillo and the older, hammered
coins (or cobs) called macuquina. The coin de cordoncillo had the edges
marked to prevent clipping and the macuquina had variations in thickness,
weight, and shape, making it a coin of mediocre quality. The macuquina
had been in circulation since the time when hammering was the common
minting practice (in Potosí, from 1575 until 1773). In principle, both coins de
cordoncillo and macuquina of all existing denominations were in circulation.
10 Romano (1998, 133) reports that the original 16th century royal ordinances creating the
mints in Mexico and Peru, including the one in Potosí, stated explicitly the rules for the proportions of the different denominations of coins to be minted. In particular, only a fourth of the coins were
supposed to be of low denomination. These original orders, though, were not always strictly
followed and changed over time.
11 In an alternative scheme called “free minting,” private agents holding silver or gold are
able to go to the mint and exchange their metal for coins of the denomination of their choice.
This scheme was predominant in Europe (see Sargent and Velde 2002, 20).


Table 1 Silver Coins—Main Denominations

Cuartillo          1/4 real
Medio Real         1/2 real
Real               1 real
Real de a Dos      2 reals
Medio Peso         4 reals
Peso               8 reals

However, a high proportion of the stock of circulating low-denomination coins
were macuquina, as these were the coins that had been minted for a longer
period of time.12
The high-denomination, full-bodied silver coins were commonly called
plata doble and the low-denomination coins, plata sencilla (Bonura 1992,
40; Tandeter 1992, 157). Most plata sencilla was of the macuquina type
and not full-bodied (due to intentional clipping and the normal wear and tear
resulting from their use and age). The plata doble, on the other hand, were
mostly high-quality, full-bodied silver coins that were relatively scarce and
especially useful for payments of imported goods from Spain.
In terms of the denomination structure, the main denominations were the
peso, also called peso fuerte, and the real, with a nominal value of one-eighth of the peso. There were also coins of half, two, and four reals (Elía 1942a, 432). Cuartillos, coins of one-fourth of a real, were only minted in Potosí after 1794 and in very small quantities (Dargent Chamot 2005, Ch. 17).
Gold coins were very scarce in the area and of relatively high purchasing
power. The most common gold coin was the doubloon of eight, which was
equivalent to approximately 16 silver pesos and was mostly used for international trade and hoarding. Overall, gold coins were not used in small domestic
transactions, and for this reason they play no major role in the discussion that
follows.
In 18th century Spain, small change was partly provided by the issuing of vellón, a low-denomination coin made with a mixture of copper and small quantities of silver. According to Bonura (1992, 39–40), the vellón did not circulate in the region of the Rio de la Plata (see also Cortés Conde and McCandless 2001, 384). These token coins were not commonly minted in Potosí, probably because of their high minting costs and an alleged (yet, somewhat surprising) reluctance of the general population to accept them in exchange. Interestingly, Romano (1998, 133) reports extensively on a similar
12 Low-denomination coins were minted in relatively low proportions and, hence, most of

the stock in circulation was fairly old. High-denomination coins were minted more intensively and
were exported in large proportions, resulting in a stock with a lower average age.


phenomenon taking place in colonial Mexico (see also Hamilton 1944, 35).
The reason for this phenomenon, however, is not yet well understood.13

The Problem of Small Change
On several occasions between 1770 and 1810, the local elites complained
to the Crown about the shortage of small change. In 1773 the king, partly
as an attempt to deal with the scarcity of small change, banned any export
of fractional money from the colonies. Specifically, the king prohibited the
shipment of pieces of half, one, and two reals to Spain and instructed the
Viceroys to intensify their efforts to ensure that the royal mints coin enough
silver in those denominations “for the vast commerce of America” (Hamilton
1944, 37).14
The macuquina was heavily used in domestic transactions, usually circulating at par value. However, its poor quality complicated its normal use and
generated many complaints among the locals.15 Its irregular shape made the
macuquina very susceptible to clipping, creating uncertainty about its intrinsic value. Furthermore, by 1784 all circulating macuquina was at least ten
years old (hammered coins were last produced in Potosí around 1773) and,
hence, in very bad shape. At that time, the king issued an order to collect and
remint all the macuquina in the colonies. After five years, in 1789 the order
was reissued, allowing for a fixed two-year period to complete the process. In
fact, after those two years, Viceroy Arredondo again postponed the recovery
period with no explicit time limit. This process suggests that the officials in the
colonies were reluctant to enforce the order to remint the macuquina as they
perceived that doing so would only aggravate the shortage of small change.
In fact, during the same period, Viceroy Arredondo proposed the creation of a
token coin to be used in domestic trade. He explicitly pointed to the scarcity
of small change as a justification. The Spanish Crown denied the proposal and
instead ordered that all mint houses in the area start minting cuartillos (Elía 1942a, 425–26; Dargent Chamot 2005, Ch. 15).
13 Vellón was issued in colonial Mexico in 1542 and the general population refused to accept it. The experiment was a complete failure. Hamilton (1944, 36) argues that the vellón was so grossly overvalued that the population preferred to continue using cocoa beans as a means of payment. Romano (1998, 134–35), instead, suggests that the local elites actually opposed the issuing of vellón based on political motivations. The reason vellón was not issued in the Spanish colonies in America is an unanswered question in the literature.
14 Butlin (1953, 81–82) reports that the government in Australia (a British colony) used similar legal instruments to avoid the export of coins in 1813. Sargent and Velde (2002) report legal
restrictions on the export of coins in medieval England (p. 132) and in Venice in May 1268 (p.
163). See Wallace and Zhou (1997) for a rationalization of this type of official restriction.
15 See Hamilton (1944, 25–26) for an account of similar problems in the area of colonial
Mexico.


The nature of the small change problem was twofold. First, low-denomination coins (that is, coins of two reals or less) were minted in significantly smaller proportions than the silver pesos; and second, the purchasing
power of the lowest-denomination coin was very high.16
In terms of the relative amounts of low-denomination coins that were
minted in Potosí, Tandeter (1992, 157) reports that, in the mid-1700s, 85
percent of the minting was done in plata doble (i.e., coins of four reals and
higher). By the same token, Romano (1998, 117) explains that most of the
coins minted in the Spanish colonies in America during the 18th century were
of high denomination, with the eight-real (peso) pieces amounting to at least
95 percent of the annual issues of silver coins both in Mexico and in Peru.17
Dargent Chamot (2005, Ch. 15) reports the reluctance of the Superintendent
of the mint of Potosí in 1784 to issue large quantities of cuartillos, considering them too costly to produce. In general, cuartillos were issued in very limited amounts, and only later in the period (in Potosí, starting only in 1794).18
One way to get an idea of the high purchasing power of the denomination
structure that was predominant in the region is to compare the value of silver
coins with the level of nominal wages for unskilled rural workers. For example,
at the time, a slave in the rural areas near Buenos Aires would normally receive
an allowance of one real per week to buy “soap and tobacco.” A free rural
seasonal worker (a peon) had an average wage of around four pesos per month
(although monthly wages fluctuated significantly across workers, from two
to seven pesos; see Amaral 1987, 267–72). This monthly wage implied a
daily wage of around one and a half reals, or three half-real coins, the half real being effectively the smallest denomination coin in circulation. A similar situation
took place in the early stages of other monetary systems. For example, Hanson
(1979, 283) reports that, of the common coins in circulation in Pennsylvania
and Massachusetts (both British colonies) in 1742, the lowest-denomination
coin represented about three days’ wages for an unskilled laborer at the time.
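As a rough check on this arithmetic (the figure of roughly 21 working days per month is my own assumption, used only for illustration):

\[
4 \ \text{pesos/month} \times 8 \ \text{reals/peso} = 32 \ \text{reals/month}, \qquad \frac{32 \ \text{reals}}{\approx 21 \ \text{days}} \approx 1.5 \ \text{reals per day} = 3 \times \tfrac{1}{2} \ \text{real},
\]

so the half real, effectively the smallest coin in common circulation, represented roughly one-third of a peon's daily wage.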

Some Consequences
The lack of small-denomination coins resulted in the use of unofficial means
of payment in everyday transactions (Bonura 1992, 40). One of these instruments, the contraseñas, became very popular. Contraseñas were small metal (tin) discs with the initials of the issuer printed on them (Elía 1942a, 428;
16 Bonura (1992, 39) recognizes the relatively high purchasing power of the cuartillo and
finds it puzzling that no lower-denomination coins were issued before 1794.
17 The proportion of low-denomination coins in circulation was probably higher since high-denomination coins were more intensively exported.
18 Starting in 1793, cuartillos were also minted in colonial Mexico. According to Hamilton
(1944, 38), the cuartillos were “too small for convenient use and struck in inadequate quantities”
and, hence, “did not end the disorder in the fractional coinage.”


Prebisch 1921, 199). In everyday transactions requiring small change that neither party (buyer or seller) had, the buyer could make payments in two possible ways. One way was to pay using contraseñas previously issued by the particular merchant participating in the transaction, in which case the transaction would terminate with the payment. The other way was for the buyer to pay in high-denomination silver coins. In this case, when necessary, the change resulting from the transaction would be provided in contraseñas issued by the seller. Sometimes, even the contraseñas issued by a third party were used as change. In general, the third party was a well-known merchant in the area and the individuals engaged in the transaction were holding his contraseñas as a result of previous transactions.19
The use of contraseñas did not, of course, solve all the problems. In fact, their extensive use resulted in widespread fraud and falsification. Later on, contraseñas were gradually replaced by private IOUs issued directly on
paper.20 These IOUs were inconvertible and also circulated widely in the
region (Prebisch 1921, 199). They are a precursor of the inconvertible paper
money that was introduced in the region more than a decade after the 1810
Revolution.21
Another way people circumvented the lack of small change was by developing even simpler credit arrangements. Customers would build up a debit at
the community store until it was possible to settle the payment using higher-denomination coins that were more readily available (Schmit 2003, 265). Obviously, the use of this kind of informal credit was limited to cases where
the owner of the store was relatively certain that the customer had reasons to
secure a permanent relationship with the store.

The Premium
There is some evidence that in the City of Buenos Aires during colonial times,
the hard peso sometimes circulated at a premium over fiduciary silver coins,
i.e., the low-denomination, usually not full-bodied plata sencilla (Prebisch
19 In colonial Mexico, it was popular to use for payments small wooden disks with the name
of the issuer (a merchant) printed on them. These disks were called tlacos and they emerged
in response to the recurrent shortages of small change that took place in the Mexican territory
during the 18th century (Romano 1998, 137; Hamilton 1944, 36–38). Tin tokens with characteristics similar to the contraseñas circulated in England in 1576 (Sargent and Velde 2002, 266).
20 Butlin (1953, 26–27) describes privately issued promissory notes that circulated in Australia
during the colonial period and the rampant forgery that originated around them.
21 Hanson (1979, 285) convincingly argues that the origin of paper currency in the colonies
of North America was the result of the persistent shortages of low-denomination coins. He provides
evidence of the issuance of private circulating notes by merchants early in the process. Sargent
and Velde (2002, 203) discuss evidence from 1577 France that documents the widespread use of
private IOUs in response to the shortages of small change.


1921, 195; Bonura 1992, 40–41).22 Interestingly, the premium was lower (and
even zero) in the periphery (the interior), creating a flow of plata sencilla from
Buenos Aires to those regions.23 In 1790 the authorities in Buenos Aires asked
the Crown to introduce legal restrictions to abolish “the 3 percent premium
of the hard peso.” In 1798, after the Crown did not respond, the request was
reiterated. The main justification for the request was the constant flow of
fractional coins out of Buenos Aires to the interior. The official document
stated that this flow had “reduced the quantity of small-denomination coins,
. . . creating difficulties in the change or reduction of the plata doble to the
sencilla . . . the specie so necessary for making small daily purchases, which
are very indispensable transactions” (see Bonura 1992, 41).24
With respect to the evolution of the premium over time, it appears that the
premium was fairly constant. Bonura (1992, 49), for example, reports that the
premium was still around 3 percent in 1812, when the authorities in Buenos
Aires engaged in another legal attempt to reduce it. Sargent and Velde (2002)
associate periods of shortages of small change with periods of depreciation
in the value of fractional money. The evidence from Argentina is too sparse
to test this hypothesis (but, in principle, no clear trend in the premium was
observed in the region).

2. EVIDENCE FROM EARLY ARGENTINA

In 1810, the Cabildo (the town council) of Buenos Aires declared autonomy
from the Spanish Crown. With the end of the Viceroyalty of the Rio de la Plata,
Buenos Aires lost Upper Peru from its area of influence; and with Upper Peru,
the mint of Potosí and the silver mines.25
This transitional period was associated with general monetary disarray in
the region. The confrontation with Upper Peru (which had remained loyal
to the Crown) and the necessary financing of military expenses (including
significant imports that needed to be paid in specie) created a sharp contraction
in the amount of available means of payment in Buenos Aires (Prebisch 1921,
198). During this period, many government officials proposed a compulsory
22 Tandeter (1992, 157) reports that the plata doble had a premium over the plata sencilla in
the Villa of Potosí in the mid-1700s. He attributes the premium to the fact that the plata doble was the one preferred in long-distance trade.
23 This kind of geographic dispersion in the exchange rate of coins was also observed across
the French territory during the 1570s, a period of monetary “chaos” (Sargent and Velde 2002,
200).
24 In principle, one would expect shortages of small change to be associated with, if anything,
a premium on low-denomination coins. This is the opposite of what is reported here. It seems
likely, however, that the premium was not uniform across transactions, and that the 3 percent
premium on hard pesos was predominant only in large-value transactions and international trade.
25 For a good overview of the economic factors that led to the breakup of the Viceroyalty
into different countries, see Cortés Conde and McCandless (2001).


remint of all the old coins in circulation. Furthermore, some of these proposals
included the imposition of a steep proportional tax upon reminting. In view of
these proposals, it seems that hiding and hoarding coins was a natural reaction
of the population.
The new government made several attempts at issuing new coins during
this period. In 1813, after temporarily recovering the city of Potosí, the first Argentinean coin was minted. A year later, however, the Independence Army lost Potosí to royal forces and the minting stopped.26 Some minting of silver pesos took place in the province of Córdoba during 1815, but in very limited amounts (Elía 1942a, 433). At that time, illegal private minting of cobs and other (very low quality) silver counterfeits was common in the northwest region of the country (Bonura 1992, 73). In 1817 Governor Güemes officially
authorized the circulation of “illegal” coins (after being officially stamped) in
the territory under his jurisdiction in the northwest part of the country. He gave
as a justification for this resolution the “evils associated with the lack of means
of payment” (Elía 1942a, 435). A year later, the federal authority banned the circulation of these “Güemes” coins, establishing a severe punishment for
those who accepted and/or held them. Overall, no real progress was made
in providing the economy with appropriate means of payment during the first
decade after the revolution (Bonura 1992, 81).
Besides the general monetary disorder, some specific episodes suggest
the existence of shortages of fractional money. In this respect, two situations
appear most relevant: the provision of copper coins approved in 1821 after
several years of discussions and the authorization granted to the Bank of
Buenos Aires to issue paper notes of relatively low denomination in 1823.27

Copper Coins
In June 1815 the newly created government in Buenos Aires started evaluating
the introduction of “provisional money” in the form of copper coins (Bonura
1992, 61). For this purpose, the government commissioned an extraordinary
consulting body of experts to study the issue. The authority’s motivation for the
introduction of these coins was twofold. The first was that shortages of small
change were a recurrent problem that needed to be fixed. In August 1815 the
body of experts presented a detailed report in which they unanimously agreed
26 Minting of Argentinean coins in Potosí resumed for a short period in 1815.
27 Experiences in other countries influenced these two decisions. For example, the general

perception in Buenos Aires was that copper coins were being used with great success in Portugal
(Bonura 1992, 77). With respect to banking, reports of the benefits associated with the operation
of the Bank of England were one of the main motivations for the creation of the Bank of Buenos
Aires (Prebisch 1921, 199–200).


Figure 2 Copper Coin Minted at Boulton’s Mint

Source: http://www.camoar.gov.ar/CecasProvinciales.htm

that introducing copper coins was essential for eliminating the inconveniences
resulting from the persistent lack of small change (Bonura 1992, 65).
The second motivation was the possibility of obtaining extra resources
for a government that was in desperate need of financing.28 This issue was
the subject of important disagreement among experts. They discussed the
estimated costs of minting copper coins extensively, but they did not reach an
agreement, so the implementation was postponed (Bonura 1992, 65). Sporadically, during the next five years, the government authorities in Buenos
Aires revisited the possibility of issuing copper coins but never managed to
implement the idea.29
Finally, in October 1821, a law was passed allowing the government to
arrange the minting of 100,000 pesos in copper coins of one-tenth of a real
(Elía 1942a, 437). These coins, the first Argentinean copper coins, were minted in Birmingham, England (at Boulton’s mint). Fifty thousand pesos of those coins were received and put into circulation in July 1823.30 Elía (1942a,
28 Token coins are usually circulated at a value greater than their intrinsic value. For this
reason, they have the potential to become a source of revenue for the monetary authority. During
1817, the government was evaluating the possibility of opening a mint in the city of Córdoba.
After concluding that the project would not be profitable for the government, the idea was abandoned. The evidence seems to indicate that there was a problem of insufficient scale of production.
Apparently, the set-up costs of operating a mint were very high, and the quantity of metal available from the mines of Famatina, the planned source of basic input, was not enough to make the
enterprise profitable (Bonura 1992, 72).
29 The issue of minting copper coins was again extensively discussed in 1818 when the new
government was evaluating the possibility of establishing an official mint in Buenos Aires (Bonura
1992, 75).
30 To put some perspective on these numbers, note, for example, that in 1823 total tax revenue for the province of Buenos Aires was around two million pesos (see Bordo and Végh 2002, Table 1). In other words, the gross revenue from the introduction of this first batch of copper coins in 1823 was about 2.5 percent of total annual tax revenue.


437) reports that the public immediately absorbed that first lot of coins and
the government then requested that the Birmingham mint deliver the rest of
the coins as soon as possible. Both the small denomination of these copper
coins and their generalized acceptance by the public seem indicative of the
high level of unsatisfied demand for fractional money that existed during that
period.31
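Restating the figures just cited (this is simple arithmetic on the numbers above, not additional data): the authorized issue of 100,000 pesos in coins of one-tenth of a real amounts to

\[
100{,}000 \ \text{pesos} \times 8 \ \text{reals/peso} \times 10 \ \text{coins/real} = 8{,}000{,}000 \ \text{coins},
\]

so the first delivery of 50,000 pesos represented some four million pieces, all of which the public absorbed almost immediately.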

Paper Money
In June 1822 the government in Buenos Aires gave a group of local businessmen an exclusive 20-year concession to create the first (and only) bank in the
region. The Bank of Buenos Aires (also called Banco de Descuentos) was
supposed to be fully funded with private capital. Part of the government’s
justification for allowing the creation of the Bank was the need to provide
appropriate means of payment to the community (Irigoin 2003, 65).32 Trade
liberalization after the revolution resulted in a substantial increase in commercial activity, in turn creating the urgent need for more developed monetary and
financial institutions in the region.
However, this was not the only motivation for the creation of the Bank.
In fact, there is some evidence indicating that the primary reason was to allow
the government to access cheaper financing. By 1822 the government was
heavily involved in a civil war and was quickly running out of resources
(Prebisch 1921, 201). The plan was that the government would take loans
from the bank at preferential rates.
Some of the factors that triggered the creation of the Bank of Buenos
Aires seem indicative of the persistent shortage of low-denomination money.
First, from discussions at the time it is clear that private IOUs (vales) and
contraseñas were still in circulation when the Bank was created in 1822 (Elía 1942b, 323; Prebisch 1921, 199; Irigoin 2003, 65). The use of vales and contraseñas can be taken as evidence of the need for fractional money. To the
extent that parties in a transaction were willing to accept these very imperfect
means of payment, which were clearly associated with significant risk of fraud
31 The circulation of token coins was never automatic in the early stages of monetary development. It often happened that the population, accustomed to full-bodied coinage, distrusted the
validity of token copper coins as an acceptable means of payment. See Butlin (1953, 37) for the
case of Australia and Sargent and Velde (2002, 210) for the case of France in the 1590s (see also
footnote 13 in this paper). Of course, counterfeiting was always a potential problem in the case
of circulating token coins (see, for example, Sargent and Velde 2002, 217–18). The fact that these
first copper coins were minted in England using frontier technology at the time probably reduced
the risk of counterfeiting, making the coins more likely to circulate.
32 Redish (1984) reports that a similar justification was used at the time of the creation of
the first Canadian banks at the beginning of the 19th century.


and counterfeiting, it must be the case that no better payment methods were
available (Prebisch 1921, 199).
Second, while initially the Bank of Buenos Aires was only allowed to
issue notes in denominations no lower than 20 hard pesos, in mid-1823 the
government agreed to authorize the Bank to start issuing lower-denomination
bills. These bills came to replace some Treasury notes of similar denomination
that the Ministry of Finance had introduced only months before. The Bank
issued bills of one, three, and five hard pesos, convertible to gold and silver
coins upon presentation at the Bank’s window (Elía 1942b, 326). While
these denominations were not, by any means, the lowest of the prevailing
structure, they were commonly used in domestic transactions. Also, they
were probably considered the natural intermediate step in the move toward
lower denominations.
During the first two years of its existence, the Bank issued convertible
money notes well in excess of its reserves of gold and silver, which resulted
in a confidence crisis in 1825. Early in 1826, the Bank was taken over by
the government and its money notes were declared inconvertible. The notes
stayed in circulation but only based upon government fiat. The perceived
insufficiency of means of payment prevailing in the region was then replaced
by excessive printing of inconvertible paper money. A regime of high inflation
followed, which lasted for many decades.33

3. FINAL REMARKS

In this article I reviewed evidence that suggests that shortages of small change
were a problem in the economy of the Rio de la Plata area during the colonial
period and the first two decades after independence. Evidence of this sort is
already available for several other regions around the world. It is interesting
33 After 1825 the authorities in both Bolivia and Buenos Aires started a period of sustained
monetary expansion and inflation (Irigoin 2003, 60). In Bolivia, the minting and systematic debasement of silver coins (moneda feble) was the main fiscal instrument of the new government.
These coins circulated also in the northwest regions of Argentina. In Buenos Aires, the government printed large amounts of inconvertible paper money to finance increasing fiscal deficits. The
paper peso depreciated around 200 percent during 1826 and continued depreciating in the following years. For a detailed discussion of the monetary history of Argentina during this period, see
Irigoin (2003), Bordo and Végh (2002), and Irigoin (2000). In general, the paper money from
Buenos Aires did not circulate in the provinces. In the interior of the country, several provincial
governments attempted to issue their own paper money but faced substantial problems in inducing
its circulation, as the general population deeply mistrusted the viability of fiat money. In 1826 the
province of Corrientes issued 3,000 hard pesos in low-denomination notes, but acceptance was limited and the experiment became a complete failure (Irigoin 2003, 67). Sometimes the government
introduced extreme legislation to try to encourage the circulation of the money notes. For example,
in 1840 the provincial authority in Tucumán instituted the death penalty for those not accepting in exchange the paper money printed by the Northern League, a coalition of northwestern provinces (Halperín Donghi 1979, 91).


to verify that a similar monetary phenomenon took place in Argentina during
the early stages of its political and economic development.
Several features make the evidence presented here especially interesting.
First, while most of the European evidence comes from economies where free
minting was in place, in Argentina during the late colonial period the supply
of coins was under the direct control of the Spanish Crown. Minting policies,
then, were not solely directed at improving the smooth functioning of the monetary economy in the colonies. Spain was the main provider of high-quality silver coins to the rest of Europe during that time. For this reason,
a major motivation for the Crown’s policies was to maintain an international
reputation of high quality for Spanish coins. These competing objectives
probably increased the chances of misalignments between demand and supply
of denominations.
Second, the evidence clearly illustrates the interaction between money
and credit during the early stages of monetary development. To bypass the
problem of small change, agents in the economy developed rudimentary credit
arrangements that allowed them to trade with one another. Two schemes were
prevalent. In one scheme, the buyer would extend credit to the seller through
the use of contraseñas; in the other, the seller would grant credit to the buyers
by allowing them to accumulate a debit in a temporary account. It is a general
principle in monetary economics that money and credit act as close substitutes.
In general, however, the emphasis has been on explaining how monetary exchange increases the trading possibilities in an economy where credit is not
always feasible (as in Kiyotaki and Wright 1993). The evidence presented
here highlights the reciprocal fact that when the convenience of monetary exchange is undermined by, in this case, the lack of small change, agents turn to
imperfect credit arrangements to carry out their economic transactions. (See
Jin and Temzelides [2004] and Cuadras-Morató [2005] for a formal discussion of some of these issues.)
Finally, it was interesting to see the newly created government confronting
all the basic economic issues involved in the provision of small change when
deciding to introduce copper coins. On one hand, effective fractional coins
needed to be of relatively high quality to avoid counterfeiting. On the other
hand, high-quality, low-denomination coins were very costly to produce. The
government realized that only a large scale of production could lower the
unit cost of production to an acceptable level. The lack of sufficient mineral
input delayed production of copper coins for several years. In the end, the
government resorted to importing the coins from England, an international
producer of coins for which scale of production was obviously not an issue.


REFERENCES
Amaral, Samuel. 1987. “Rural Production and Labour in Late Colonial Buenos Aires.” Journal of Latin American Studies 19 (2): 235–78.
Bonura, Elena. 1992. “Aproximaciones al Estudio del Problema Monetario de las Provincias del Rio de la Plata, 1810–1820.” Historiografía Rioplatense 4: 39–84.
Bordo, Michael D., and Carlos A. Végh. 2002. “What If Alexander Hamilton Had Been Argentinean? A Comparison of the Early Monetary Experiences of Argentina and the United States.” Journal of Monetary Economics 49: 459–94.
Butlin, Sydney James. 1953. Foundations of the Australian Monetary System 1788–1851. Melbourne University Press: Victoria.
Cortés Conde, Roberto, and George McCandless. 2001. “Argentina: From Colony to Nation: Fiscal and Monetary Experience of the Eighteenth and Nineteenth Centuries.” In Transferring Wealth and Power from the Old to the New World, ed. Michael D. Bordo and R. Cortés Conde. Cambridge University Press: 378–413.
Cuadras-Morató, Xavier. 2005. “Circulation of Private Notes During a Currency Shortage.” Mimeo, Universitat Pompeu Fabra.
Dargent Chamot, Eduardo. 2005. Las Casas de Moneda Españolas en América del Sur. Available at http://www.tesorillo.com (accessed on March 30, 2006).
Elía, Oscar Horacio. 1942a. “Evolución de la Moneda en la República Argentina: Desde sus Orígenes hasta 1822.” Revista de Ciencias Económicas (April): 415–37.
———. 1942b. “Nuestro Primer Banco.” Revista de Ciencias Económicas (May): 323–30.
Halperín Donghi, Tulio. 1972. Revolución y Guerra: Formación de una Elite Dirigente en la Argentina Criolla. Siglo Veintiuno Editores: Buenos Aires.
Hamilton, Earl J. 1944. “Monetary Problems in Spain and Spanish America, 1751–1800.” Journal of Economic History 4 (1): 21–48.
Hanson, John R. II. 1979. “Money in the Colonial American Economy: An Extension.” Economic Inquiry 17 (April): 281–86.
Irigoin, María Alejandra. 2000. “Inconvertible Paper Money, Inflation, and Economic Performance in Early Nineteenth Century Argentina.” Journal of Latin American Studies 32: 333–59.
———. 2003. “La Fabricación de Moneda en Buenos Aires y Potosí y la Transformación de la Economía Colonial en el Rio de la Plata (1820 y 1860).” In La Desintegración de la Economía Colonial, ed. María A. Irigoin and Roberto Schmit. Editorial Biblos: Buenos Aires: 57–91.
Jin, Yi, and Ted Temzelides. 2004. “On the Local Interaction of Money and Credit.” Review of Economic Dynamics 7: 143–56.
Kiyotaki, Nobuhiro, and Randall Wright. 1989. “On Money as a Medium of Exchange.” Journal of Political Economy 97 (June): 927–54.
———. 1993. “A Search-Theoretic Approach to Monetary Economics.” American Economic Review 83 (March): 63–77.
Lee, Manjong, Neil Wallace, and Tao Zhu. 2005. “Modeling Denomination Structures.” Econometrica 73 (May): 949–60.
Prebisch, Raúl. 1921. “Anotaciones Sobre Nuestro Medio Circulante.” Revista de Ciencias Económicas (October): 190–205.
Redish, Angela. 1984. “Why Was Specie Scarce in Colonial Economies? An Analysis of the Canadian Currency, 1796–1830.” Journal of Economic History 44 (3): 713–28.
Rolnick, Arthur J., and Warren E. Weber. 2003. Book Review. Journal of Political Economy 111 (2): 459–63.
Romano, Ruggiero. 1998. Moneda, Seudomonedas y Circulación Monetaria en las Economías de México. Fondo de Cultura Económica: Mexico.
Sargent, Thomas J., and François Velde. 1999. “The Big Problem of Small Change.” Journal of Money, Credit and Banking 31 (May): 137–61.
———. 2002. The Big Problem of Small Change. Princeton, N.J.: Princeton University Press.
Schmit, Roberto. 2003. “Enlaces Conflictivos: Comercio, Fiscalidad y Medios de Pago en Entre Ríos durante la Primera Mitad del Siglo XIX.” In La Desintegración de la Economía Colonial, ed. María A. Irigoin and Roberto Schmit. Editorial Biblos: Buenos Aires: 251–76.
Supple, Barry E. 1957. “Currency and Commerce in the Early Seventeenth Century.” Economic History Review, New Series 10 (2): 239–55.
Tandeter, Enrique. 1992. Coacción y Mercado. Sudamericana: Buenos Aires.
Wallace, Neil. 2003. “Modeling Small Change: A Review Article.” Journal of Monetary Economics 50 (6): 1391–1401.
———, and Ruilin Zhou. 1997. “A Model of a Currency Shortage.” Journal of Monetary Economics 40 (3): 555–72.

Implementation of Optimal
Monetary Policy
Michael Dotsey and Andreas Hornstein

Recently the study of optimal monetary policy has shifted from an
analysis of the welfare effects of simple parametric policy rules to
the solution of optimal planning problems. Both approaches evaluate
the welfare effects of monetary policy in an explicit monetary model of the
economy, but they differ in the scope of analysis. The first approach is more
restrictive in that it finds the optimal policy within a class of prespecified policy rules for the monetary policy instrument. On the other hand, the second
approach finds the optimal monetary policy among all allocations that are
consistent with a competitive equilibrium in the monetary economy. Since
monetary policy, in general, does not choose the economy’s allocation but
implements policy through a rule for the policy instruments, it is natural to
ask whether the policy rule implied by the solution to the planning problem
implements the optimal planning allocation. In most work on optimal planning problems, it is indeed taken for granted that the solution of the planning
problem can be implemented through some policy rule for the monetary policy
instrument but, as we show in this article, this need not always be the case.
There is a vast literature on optimal monetary policy that studies the solution to planning problems. The environments examined are diverse, ranging
from models in which there are no private sector distortions other than an inflation tax to models where economies are subject to various types of nominal
rigidities. The policymaker is assumed to choose among all the allocations
that are consistent with a market equilibrium in the given environment. In
addition, different assumptions are made as to whether a policymaker can or
cannot commit to his future choices.

Michael Dotsey is with the Federal Reserve Bank of Philadelphia. Andreas Hornstein is with
the Federal Reserve Bank of Richmond. The authors thank Andrew Foerster, John Weinberg,
and Alexander Wolman for useful comments. The views expressed in this article are those
of the authors and not necessarily those of the Federal Reserve Bank of Richmond or the
Federal Reserve System.


Under a full-commitment policy, we assume that the policymaker chooses all current and future actions in an initial
period. Alternatively, under time consistency we assume that in every period
a policymaker chooses the optimal action, taking past outcomes as given. For
either specification, the solution to the planning problem specifies a rule that
determines the allocation, and part of the allocation is the setting of the policy
instrument.
The question is whether the policy rule implied by the solution to the
planning problem (or a variation thereof) can implement the optimal allocation
for the planning problem. Specifically, how would the competitive economy
behave if the monetary authority simply announced the policy rule implied by
the solution to the planning problem? In particular, conditional on the policy
rule, will there be a unique competitive equilibrium?
Giannoni and Woodford (2002a, 2002b) discuss the implementability of
optimal policy for local approximations of the planning problem with full commitment. This starts with a log-linear approximation around the steady state
of the solution to the full-commitment problem. Within the approximation
framework, implementability of the optimal policy rule is equivalent to the
existence and uniqueness of rational expectations equilibria in linear models.
As such, implementability is concerned with “dynamic” uniqueness, that is,
the existence of a unique stochastic process that characterizes the competitive
equilibrium.
King and Wolman (2004) discuss the implementation of Markov-perfect
policy rules for time-consistent solutions to the planning problem. King and
Wolman (2004) show that Markov-perfect policies with an optimal nominal
money stock instrument can imply equilibrium indeterminacy at two levels.
First, it can imply multiple steady states. Second, around each steady state
it can imply static price level indeterminacy, that is, conditional on future
outcomes there can be multiple current equilibrium prices.
In this article, we review implementability of both the optimal
full-commitment and time-consistent Markov-perfect monetary policies when
the policymaker uses a nominal money stock instrument. We study optimal
policy in a simple New Keynesian economic model as described in Wolman
(2001) and King and Wolman (2004). We first characterize the solution to a linearized version of the first-order conditions (FOCs) of the planning problems.
We show that optimal monetary policy locally implements the planning allocation for the full-commitment and the Markov-perfect case. We then study
whether the policy rules implement the planning allocations globally. We review King and Wolman’s (2004) argument that the Markov-perfect policy rule
cannot implement the planning allocation. Finally, we provide a partial argument that the full-commitment policy rule globally implements the planning
allocation.


1. A SIMPLE ECONOMY WITH STICKY PRICES
We investigate the question of the implementability of optimal monetary policy
within the confines of a simple New Keynesian economic model. The model
contains an infinitely lived representative household with preferences over
consumption and leisure. The consumption good is produced using a constant-returns-to-scale technology with a continuum of differentiated intermediate
goods. Each intermediate good is produced by a monopolistically competitive
firm with labor as the only input. Intermediate goods firms set the nominal
price for their products for two periods, and an equal share of intermediate
firms adjust their nominal price in any period. We describe a symmetric
equilibrium for the economy, and we characterize the two distortions that make
the equilibrium allocation suboptimal relative to the Pareto-optimal allocation.

The Representative Household
The representative household’s utility is a function of consumption, ct , and
the fraction of time spent working, nt ,
E_0 \sum_{t=0}^{\infty} \beta^t \left[\ln c_t - \chi n_t\right],    (1)

where χ ≥ 0, and 0 < β < 1. The household’s period budget constraint is
P_t c_t + B_{t+1} + M_t \le W_t n_t + R_{t-1} B_t + M_{t-1} + D_t + T_t,    (2)

where Pt (Wt ) is the money price of consumption (labor), Bt+1 (Mt ) are the
end-of-period holdings of nominal bonds (money), Rt−1 is the gross nominal
interest rate on bonds, Tt are lump-sum transfers, and Dt is profit income from
firms owned by the representative household. The household is assumed to
hold money in order to pay for consumption purchases
M_t = P_t c_t.    (3)

We will use the term “real” to denote nominal variables deflated by the price
of consumption goods, and we use lower-case letters to denote real variables.
For example, real balances are m ≡ M/P .
The FOCs of the representative household’s problem are
\chi = w_t/c_t, and    (4)

1 = \beta E_t\left[\frac{c_t}{c_{t+1}} \cdot \frac{R_t}{P_{t+1}/P_t}\right].    (5)

Equation (4) states that the marginal utility derived from the real wage equals
the marginal disutility from work. Equation (5) is the Euler equation, which
states that if the real rate of return increases, then the household increases
future consumption relative to today’s consumption.


Firms
The consumption good is produced using a continuum of differentiated intermediate goods as inputs to a constant-returns-to-scale technology. Producers
of the consumption good behave competitively in their markets. There is a
measure one of intermediate goods, indexed j ∈ [0, 1]. Production of the
consumption good c as a function of intermediate goods, y (j ), used is
c_t = \left[\int_0^1 y_t(j)^{(\varepsilon-1)/\varepsilon}\, dj\right]^{\varepsilon/(\varepsilon-1)},    (6)

where ε > 1. Given nominal prices, P (j ) , for the intermediate goods, the
nominal unit cost and price of the consumption good is
P_t = \left[\int_0^1 P_t(j)^{1-\varepsilon}\, dj\right]^{1/(1-\varepsilon)}.    (7)

For a given level of production, the cost-minimizing demand for intermediate
good j depends on the good’s relative price, p (j ) ≡ P (j )/P ,
y_t(j) = p_t(j)^{-\varepsilon} c_t.    (8)

Each intermediate good is produced by a single firm, and j indexes both
the firm and good. Firm j produces y(j) units of its good using a constant-returns technology with labor as the only input,
y_t(j) = \xi_t n_t(j),    (9)

and ξ t is a positive iid productivity shock with mean one. Each firm behaves
competitively in the labor market and takes wages as given. Real marginal
cost in terms of consumption goods is
\psi_t = w_t/\xi_t.    (10)

Since each intermediate good is unique, intermediate goods producers
have some monopoly power, and they face downward sloping demand curves,
(8). Intermediate goods producers set their nominal price for two periods, and
they maximize the discounted expected present value of current and future
profits:
\max_{P_t(j)}\ \left[\frac{P_t(j)}{P_t} - \psi_t\right] y_t(j) + \beta E_t\, \frac{c_t}{c_{t+1}}\left[\frac{P_t(j)}{P_{t+1}} - \psi_{t+1}\right] y_{t+1}(j).    (11)

Since the firm is owned by the representative household, the household’s
intertemporal marginal rate of substitution is used to discount future profits.
Using the definition of the firm’s demand function, (8), the first-order condition
for profit maximization can be written as

0 = \left[\frac{P_t(j)}{P_t}\right]^{1-\varepsilon}\left[1 - \mu\frac{\psi_t}{P_t(j)/P_t}\right] + \beta E_t\left[\frac{P_t(j)}{P_{t+1}}\right]^{1-\varepsilon}\left[1 - \mu\frac{\psi_{t+1}}{P_t(j)/P_{t+1}}\right],    (12)

with μ = ε/ (ε − 1).

A Symmetric Equilibrium
We will assume a symmetric equilibrium, that is, all firms who face the same
constraints behave the same. Each period, half of all firms have the option to
adjust their nominal price. This means that in every period there will be two
firm types: the firms who adjust their nominal price in the current period, type
0 firms with relative price p0 , and the firms who adjusted their price in the last
period, type 1 firms with current relative price p1 .
Conditional on a description of monetary policy, the equilibrium of the
economy is completely described by the sequence of marginal cost, relative
prices, inflation rates, nominal interest rates, aggregate output, and real balances {ψ t , p0,t , p1,t , π t , Rt , ct , mt } such that (3), and
\psi_t = \chi c_t/\xi_t,    (13)

1 = \frac{1}{2}\left(p_{0,t}^{1-\varepsilon} + p_{1,t}^{1-\varepsilon}\right),    (14)

0 = p_{0,t}^{1-\varepsilon}\left(1 - \mu\frac{\psi_t}{p_{0,t}}\right) + \beta E_t\, p_{1,t+1}^{1-\varepsilon}\left(1 - \mu\frac{\psi_{t+1}}{p_{1,t+1}}\right),    (15)

\pi_{t+1} = \frac{p_{0,t}}{p_{1,t+1}}, and    (16)

1 = \beta E_t\left[\frac{c_t}{c_{t+1}} \cdot \frac{R_t}{\pi_{t+1}}\right].    (17)

Equation (13) uses the optimal labor supply condition (4) in the definition of
marginal cost (10). Equation (14) is the price index equation (7) and equation
(15) is the profit maximization condition (12) for the two firm types. Equation
(16) just restates how next period’s preset relative price p1,t+1 is related to the
relative price that is set in the current period, p0,t , through the inflation rate
π t+1 . Finally, equation (17) is the household’s Euler equation, (5).

Distortions
Allocations in this economy are not Pareto-optimal because of two distortions.
The first distortion results from the monopolistically competitive structure of
intermediate goods production: the price of an intermediate good is not equal
to its marginal cost. The average markup in the economy is the inverse of the

118

Federal Reserve Bank of Richmond Economic Quarterly

real wage, Pt /Wt , that is, according to equation (10), the inverse marginal cost,
1/(ξ t ψ t ). The second distortion reflects inefficient production when relative
prices are different from one. Using the expressions for the production of final
goods and the demand functions for intermediate goods, (6) and (8), we can
obtain the total demand for labor as a function of relative prices and aggregate
output. Solving aggregate labor demand for aggregate output, we obtain an
“aggregate” production function
d_t c_t = \xi_t n_t \quad \text{with} \quad d_t \equiv \frac{1}{2}\left(p_{0,t}^{-\varepsilon} + p_{1,t}^{-\varepsilon}\right).    (18)

Given the symmetric production structure, equations (6) and (9), efficient
production requires that equal quantities of each intermediate good are produced. Allocational efficiency is reflected in the term dt ≥ 1. The allocation
is efficient if p0,t = p1,t = dt = 1.
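To get a rough sense of the magnitude of this distortion, the following minimal sketch (not from the original article; the value ε = 11, that is, μ = 1.1, is assumed purely for illustration) computes d_t for a few reset prices p_{0,t}, with p_{1,t} implied by the price index (14):

eps = 11.0  # assumed elasticity of substitution; mu = eps/(eps - 1) = 1.1

def p1_from_p0(p0):
    # price index (14): 1 = 0.5*(p0**(1 - eps) + p1**(1 - eps)), solved for p1
    return (2.0 - p0 ** (1.0 - eps)) ** (1.0 / (1.0 - eps))

def distortion(p0):
    # relative-price distortion d from equation (18)
    p1 = p1_from_p0(p0)
    return 0.5 * (p0 ** (-eps) + p1 ** (-eps))

for p0 in (1.0, 1.01, 1.05):
    print(p0, round(p1_from_p0(p0), 4), round(distortion(p0), 5))

The printed values show d = 1 only when p_{0,t} = p_{1,t} = 1 and d > 1 whenever the two relative prices differ.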
For the following analysis of optimal policy, it is useful to rewrite the
household’s period utility from the equilibrium allocation as a “reduced form”
utility function of the markup and efficiency distortion. Combining expression
(13) for equilibrium consumption as a function of marginal cost and productivity with the characterization of the aggregate production function (18) yields
equilibrium work effort
n_t = d_t \psi_t/\chi.    (19)

We can substitute expressions (13) and (19) for consumption and work effort
in the household’s utility function and obtain the reduced form utility function
E_0 \sum_{t=0}^{\infty} \beta^t \left[\ln \psi_t - d_t \psi_t\right],    (20)

after dropping any constant or additive exogenous terms.
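For completeness, the substitution is immediate: using (13), c_t = ψ_t ξ_t/χ, and (19),

\ln c_t - \chi n_t = \ln\left(\frac{\psi_t \xi_t}{\chi}\right) - \chi\frac{d_t \psi_t}{\chi} = \ln \psi_t - d_t \psi_t + \ln \xi_t - \ln \chi,

and the last two terms are exactly the constant and exogenous terms that are dropped.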

2. MONETARY POLICY

Since the allocation of the above-described monopolistically competitive equilibrium with sticky prices is suboptimal, there is the potential for welfare-improving policy interventions. In view of the role of nominal rigidities, we
want to characterize optimal monetary policy. In particular, we want to know
how optimal monetary policy can be implemented given some choice of policy instrument. We examine the implications of choosing the nominal money
stock as the policy instrument. This is the policy instrument considered in
King and Wolman (2004), where they assume that the policymaker chooses
a sequence for the nominal money stock {Mt }. Alternatively the policymaker
could select the nominal interest rate, Rt , as the policy instrument. The choice
of policy instrument can be crucial for questions of the implementability of
optimal monetary policy, and we will get back to this issue in the conclusion.
For the analysis of the monetary policy planning problem, it is convenient
to define monetary policy in terms of the money stock normalized relative to


the preset nominal prices,
m_{1t} = \frac{M_t}{P_{1,t}},    (21)

rather than the nominal money stock, Mt , directly. This normalization is not
restrictive for the analysis of a policymaker that can commit to future policy
choices, the full-commitment case. In the case of time-consistent policies,
when a policymaker cannot commit to future policy choices, we will argue that
for the particular class of Markov-perfect policies that we study, the normalized
money stock is the relevant choice variable. Combining the policy rule with the
cash-holding condition, (3), and using P1,t = P0,t−1 , we obtain an equilibrium
condition for consumption
c_t = p_{1,t} m_{1t}.    (22)
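The derivation is a one-line consequence of the cash-holding condition (3) and the normalization (21):

c_t = \frac{M_t}{P_t} = \frac{M_t}{P_{1,t}} \cdot \frac{P_{1,t}}{P_t} = m_{1t}\, p_{1,t}.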

Optimal Monetary Policy
The objective of monetary policy is well-defined: the policymaker is to choose
an allocation that maximizes the representative household’s utility subject to
the constraint that the allocation can be supported as a competitive equilibrium.
For our simple example, any allocation that satisfies equations (13)–(16), (18),
and (22) is a competitive equilibrium. We summarize these constraints as
E_t\, h\left(x_{t+1}, x_t; \xi_{t+1}, \xi_t\right) = 0 \quad \text{for } t \ge 0.    (23)

The vector xt = (yt , zt ) contains the private sector variables,
yt = (p0,t , p1,t , π t , ψ t , dt , ct ), and the policy instrument, zt = m1t .1 Formally, the policymaker's optimization problem is then defined as

\max E_0 \sum_{t=0}^{\infty} \beta^t u(x_t) \quad \text{s.t.} \quad E_t\, h\left(x_{t+1}, x_t; \xi_{t+1}, \xi_t\right) = 0 \text{ for } t \ge 0,    (24)

where u denotes the period utility function of the representative household as
defined in equation (20). A solution to this problem will have xt as a function
of the current and past state of the economy.
We will solve two alternative versions of the planning problem. First, we
assume that the policymaker at time zero chooses once and for all the optimal
allocation among all feasible allocations that can be supported as a competitive
equilibrium. This approach delivers the constrained optimal allocation, but
frequently the chosen allocation is not time consistent. The allocation is not
time consistent in the sense that if a policymaker gets the option to reconsider
his choices after some time, he would want to deviate from the initially chosen
1 The characterization of the private sector involves equilibrium prices and quantities. With
some abuse of standard terminology, we will call the vector y the equilibrium allocation.


path. The alternative approach then finds optimal time-consistent monetary
policies. In particular, we will restrict attention to Markov-perfect policy
rules, that is, rules that make policy choices contingent on payoff-relevant
state variables only.
For the planning problem, we are not specific about how the policymaker
can implement the policy: we simply assume that the policymaker can select any allocation subject to the constraint that the allocation is consistent
with a competitive equilibrium allocation. We will say that a policy can be
implemented if a unique rational expectations equilibrium exists when the
policymaker sets the policy instrument, zt , according to the state-contingent
rule implied by the planning problem.
Optimal Policy with Full Commitment

Suppose that at time zero the policymaker chooses a sequence {xt } for the
market allocation and the policy instrument that solves problem (24). We
assume that the policymaker is committed to this outcome for all current and
future values of the market outcome and the instrument. The FOCs for this
constrained maximization problem are
0 = Du(x_t) + \lambda_t E_t D_2 h\left(x_{t+1}, x_t; \xi_{t+1}, \xi_t\right) + \lambda_{t-1} D_1 h\left(x_t, x_{t-1}; \xi_t, \xi_{t-1}\right) \quad \text{for } t > 0, and    (25)

0 = Du(x_t) + \lambda_t E_t D_2 h\left(x_{t+1}, x_t; \xi_{t+1}, \xi_t\right) \quad \text{for } t = 0.    (26)

Note that the FOC for the initial time period, t = 0, is essentially the same as
the FOCs for future time periods, t > 0, if we assume that the lagged Lagrange
multiplier in the initial time period is zero, λ−1 = 0. This simply means that
in the initial time period, the policymaker’s choices are not constrained by
past market expectations of outcomes in the initial period.
Marcet and Marimon (1998) show how to rewrite the planning problem as
a recursive saddlepoint problem such that dynamic programming techniques
can be applied. Following their approach, the Lagrange multiplier, λt−1 , can
be interpreted as a state that reflects the past commitments of the planner.
Given the dynamic programming formulation, the optimal policy choice will
then be a function of the state of the economy,
x_t = g_x^{FC}\left(\lambda_{t-1}, \xi_t\right) \quad \text{and} \quad \lambda_t = g_\lambda^{FC}\left(\lambda_{t-1}, \xi_t\right).    (27)

The policymaker’s optimization problem is not time consistent because of
the particular status of the initial period. If a policymaker gets the opportunity
to reevaluate his choices at some time t > 0, then equation (25) will no
longer characterize the optimal decision at t . Rather equation (26) will apply
at the time t , and, in general, the policymaker would want to deviate from
his original decision. If the policymaker has no way to precommit to future
policy actions, the optimal policy will therefore not be time consistent.


Markov-Perfect Optimal Policy

We study a particular class of time-consistent policies, namely Markov-perfect
policies. For a Markov-perfect policy, the optimal policy rule is restricted to
depend on payoff-relevant state variables only, that is, predetermined variables
that constrain the attainable allocations of the economy. We can think of
today’s policymaker as taking his own future actions as given by a policy rule
that makes his choices contingent on the future payoff-relevant state variables.
Given these future choices, the policymaker’s optimal choice for today will
then also depend on payoff-relevant state variables only.
In our environment, predetermined nominal prices do not constrain the
policymakers’ choices among the allocations that are consistent with a competitive equilibrium. Even though the nominal price set by a firm that adjusted
its price in the last period, P1,t , is predetermined, the relevant variable is that
firm’s relative price, p1,t , which is not predetermined. Since the predetermined nominal price is not payoff-relevant, the policymaker has to choose the
nominal money stock in a way such that the predetermined nominal price cannot affect outcomes. But this just means that the policymaker cannot choose
the nominal money stock, Mt , but has to choose the normalized money stock,
m1t .
Our environment as described by (23) then has the feature that, except for
the exogenous shocks, ξ t , there are no predetermined variables that constrain
the equilibrium allocation. In other words, in any time period the values for the
variables that characterize the competitive equilibrium have to be consistent
with future values of the same variables, but the variables can be chosen
independently of any values they took in the past.
In a Markov-perfect equilibrium, the current policymaker then assumes that future choices and outcomes are time-invariant functions of ξ, x_{t'} = g_x^{MP}(\xi_{t'}) for t' > t. For this reason, current policy choices have no effect on future outcomes, and the policymaker's choice problem simplifies to

x_t^*\left(\xi_t; g_x^{MP}\right) = \arg\max_x\ u(x_t) \quad \text{s.t.} \quad 0 = E_t\, h\left(g_x^{MP}(\xi_{t+1}), x_t; \xi_{t+1}, \xi_t\right).    (28)

The FOCs for this problem coincide with the FOCs of the optimization problem with commitment for the initial period, equation (26).2 In a time-consistent Markov-perfect equilibrium, the optimal policy choice satisfies x_t^*\left(\xi_t; g_x^{MP}\right) = g_x^{MP}(\xi_t).
2 In general, the FOCs for a Markov-perfect optimal policy are different from the initial
period FOCs for an optimal policy with full commitment. If there are endogenous state variables,
then even with Markov-perfect optimal policies, a policymaker can influence future policy choices
by changing next period’s state variables and thereby affecting the constraint set of next period’s
policymaker.


Implementability of Optimal Policy
If the only requirement for feasible monetary policy is the consistency with a
competitive equilibrium, then there is no reason to distinguish between private
sector choices, yt , and the policy instrument, zt . We might as well assume
that the policymaker chooses both variables, xt , subject to the consistency
requirements. Now suppose that the outcome of the optimization problem is
a policy rule that specifies choices for the instrument and the private sector
allocation contingent on outcomes that may include the current and past states
of the economy
z_t = g_{z,t}(\cdot) \quad \text{and} \quad y_t = g_{y,t}(\cdot).    (29)

A somewhat narrower definition of what constitutes a feasible monetary policy not only requires that the allocations implied by g are consistent with a
competitive equilibrium, but also requires that, conditional on the rule for the
policy instrument, gz , the rule for the private sector allocation, gy , is the unique
competitive equilibrium outcome. That is, gy is the unique solution of
E_t\, h\left(y_{t+1}, g_{z,t+1}(\cdot), y_t, g_{z,t}(\cdot); \xi_{t+1}, \xi_t\right) = 0 \quad \text{for } t \ge 0.    (30)

If we cannot find a unique solution, gy , to this dynamic system, then we say
that the optimal policy cannot be implemented since the associated competitive
equilibrium is indeterminate.
In the case of full-commitment policy rules, we can consider an expanded
version of the planner’s policy rule. Suppose that the planner can respond
contemporaneously to deviations of the competitive equilibrium allocation
from the allocation implied by the full-commitment policy rule. Then we can
define a modified rule for the policy instrument
\tilde{g}_z^{FC}\left(y_t, \lambda_{t-1}, \xi_t\right) = g_z^{FC}\left(\lambda_{t-1}, \xi_t\right) + H\left(y_t - g_{y,t}^{FC}\left(\lambda_{t-1}, \xi_t\right)\right),

where H(0) = 0. Since the choice of the function H is arbitrary, except for the origin normalization, it then appears that, under these circumstances, a planner can always implement the full-commitment solution. Note that a Markov-perfect policy rule cannot be augmented in this way since the contemporaneous private sector allocation is not a payoff-relevant state variable.
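For instance, one simple choice satisfying the origin normalization is a linear feedback rule (an illustrative example, not one proposed in the text),

H\left(y_t - g_{y,t}^{FC}\right) = K\left(y_t - g_{y,t}^{FC}\right)

for some conformable matrix K, so that the instrument deviates from the announced path only when the observed private sector allocation deviates from the planned allocation.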

3. LOCAL PROPERTIES OF OPTIMAL POLICY

We now discuss the local dynamics of full-commitment and Markov-perfect
optimal policy for our simple economy from Section 1. We derive necessary
conditions for the optimal policy and characterize the deterministic steady
state of the economy for the two types of policy. We then study the properties of
optimal policy for a local approximation around its steady state. Our approach
follows King and Wolman (1999) and Khan, King, and Wolman (2003) in that
we study the dynamics of a linear approximation to the FOCs and constraints

of the optimal planning problem.3 The two optimal policies imply different policy rules for a money stock instrument. We show that, for the local approximation, both implied policy rules implement a unique rational
expectations equilibrium.
Consider a policymaker who uses the money supply as an instrument,
that is, the policymaker chooses the money stock according to equation (21).
We can then write the competitive equilibrium conditional on the instrument
choice in terms of the variables yt = p1,t and zt = m1t . Conditional on the
relative preset price and the policy instrument, consumption is determined by
(22); the relative flexible price is determined by (14); allocational efficiency
is determined by (18); and marginal cost is determined by (13) and (22).
The nominal interest rate is determined residually from equation (17). The
policymaker’s objective function is
E_0 \sum_{t=0}^{\infty} \beta^t \left[\ln\left(m_{1t} p_{1,t}\right) - \chi\, d\left(p_{1,t}\right) m_{1t} p_{1,t}/\xi_t\right],    (31)

and the FOC for profit maximization (15) corresponds to the dynamic constraint (23) for t ≥ 0:
0 = p_0\left(p_{1,t}\right)^{1-\varepsilon}\left[1 - \mu\chi\frac{p_{1,t} m_{1t}}{p_0\left(p_{1,t}\right)\xi_t}\right] + \beta E_t\, p_{1,t+1}^{1-\varepsilon}\left[1 - \mu\chi\frac{m_{1,t+1}}{\xi_{t+1}}\right].    (32)

Optimal Policy with Full Commitment
Under full commitment, the policymaker maximizes the value function (31)
subject to the constraints (32). The FOCs corresponding to equations (25) for
t > 0 are
0 = \frac{1}{m_{1t}/\xi_t} - \chi d_t p_{1t} - \mu\chi\lambda_t p_{0t}^{-\varepsilon} p_{1t} - \mu\chi\lambda_{t-1} p_{1t}^{1-\varepsilon}, and    (33)

0 = \frac{1}{p_{1t}} - \chi\frac{m_{1t}}{\xi_t} d_t - \chi\frac{m_{1t}}{\xi_t} p_{1t}\frac{\partial d_t}{\partial p_{1t}} + \lambda_t p_{0t}^{-\varepsilon}\left[(1-\varepsilon)\left(1 - \mu\chi\frac{m_{1t}}{\xi_t}\right)\frac{p_{1t}}{p_{0t}}\frac{\partial p_{0t}}{\partial p_{1t}} - \mu\chi\frac{m_{1t}}{\xi_t}\left(1 - \frac{p_{1t}}{p_{0t}}\frac{\partial p_{0t}}{\partial p_{1t}}\right)\right] + \lambda_{t-1} p_{1t}^{-\varepsilon}(1-\varepsilon)\left(1 - \mu\chi\frac{m_{1t}}{\xi_t}\right).    (34)

Equation (33) denotes the FOC with respect to real balances, m1, and equation (34) denotes the FOC with respect to the relative price, p1.

3 Another common approach to the analysis of optimal monetary policy starts with a linear-quadratic approximation of the planning problem, e.g., Giannoni and Woodford (2002a, 2002b). For this alternative approach, one obtains a quadratic approximation of the objective function and a linear approximation of the constraints around the steady state of the planning problem and then solves the linear-quadratic (LQ) optimization problem. In general, the results from the two approaches will differ since the LQ approach does not use the second-order terms in the constraint functions, whereas the approach that linearizes the first-order conditions does use this information. Recently, Benigno and Woodford (2005) have shown how to modify the LQ problem such that the analysis of the LQ problem is equivalent to the analysis of the linearized FOCs.

The Deterministic Steady State of the Full-Commitment Policy

In the deterministic steady state of the full-commitment policy, there is zero
inflation (King and Wolman 1999; Wolman 2001). We can easily verify that
π = p0 = p1 = d = 1 is indeed a deterministic steady state of equations (32),
(33), and (34). Combining equation (13) with the monetary policy equation
(22) yields an expression that relates marginal cost to real balances and the
preset relative price
\psi = \chi m_1 p_1.    (35)

We can substitute this expression for marginal cost in the FOC for profit
maximization of price-adjusting firms, (32), and, using the definition of the
inflation rate π, (16), obtain
m_1 = \frac{1}{\chi\mu}\, \frac{1 + \beta\pi^{\varepsilon-1}}{1/\pi + \beta\pi^{\varepsilon-1}}.    (36)

Thus, conditional on no inflation, π^{FC} = 1, real balances are m_1^{FC} = 1/(χμ), and marginal cost is ψ^{FC} = 1/μ. Substituting for marginal cost in the FOC for real balances (33) yields the steady state value for the Lagrange multiplier, λ^{FC} = (1 − 1/μ)/2, and the FOC for preset relative prices, (34), is satisfied.
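As a quick numerical check, the following minimal sketch (not part of the original article; it uses the illustrative parameters β = 0.99, μ = 1.1, and χ = 1 that appear later in the text) evaluates (36), (35), and the steady state Lagrange multiplier:

beta, mu, chi = 0.99, 1.1, 1.0      # assumed illustrative parameters
eps = mu / (mu - 1.0)               # since mu = eps/(eps - 1)

def m1_steady(pi):
    # steady state real balances as a function of inflation, equation (36)
    return (1.0 / (chi * mu)) * (1.0 + beta * pi ** (eps - 1.0)) / (1.0 / pi + beta * pi ** (eps - 1.0))

m1_fc = m1_steady(1.0)              # zero inflation: equals 1/(chi*mu)
psi_fc = chi * m1_fc * 1.0          # equation (35) with p1 = 1: equals 1/mu
lam_fc = (1.0 - 1.0 / mu) / 2.0     # steady state Lagrange multiplier from (33)
print(m1_fc, 1.0 / (chi * mu), psi_fc, 1.0 / mu, lam_fc)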
Local Properties of the Full-Commitment Solution

First, we show that the solution to the full-commitment problem stabilizes the
prices in response to productivity shocks (King and Wolman 1999). Second,
we show that the full-commitment policy rule implements the competitive
equilibrium. In the following, let a hat denote the percentage deviation of a
variable from its steady state value.
The log-linear approximations of equations (32), (33), and (34) around the no-inflation steady state for t > 0 are

0 = 2\hat{p}_{1t} + \left(\hat{m}_{1t} - \hat{\xi}_t\right) + \beta E_t\left(\hat{m}_{1,t+1} - \hat{\xi}_{t+1}\right),    (37)

0 = \hat{p}_{1t} + \left(\hat{m}_{1t} - \hat{\xi}_t\right) + \lambda^{FC}\left(\hat{\lambda}_t + \hat{\lambda}_{t-1}\right), and    (38)

0 = \frac{\mu(2\mu - 1)}{\mu - 1}\hat{p}_{1t} + \left[1 + \chi(\mu - 1)\right]\left(\hat{m}_{1t} - \hat{\xi}_t\right) + (\mu - 1)\hat{\lambda}_t.    (39)

We solve this linear difference equation system through the method of undetermined coefficients. Given the structure of the equation system, it is reasonable
to guess that the only relevant state variable is the lagged Lagrange multiplier,
\hat{\lambda}_{t-1}, and that the solution is of the form

\hat{m}_{1t} - \hat{\xi}_t = \gamma\hat{\lambda}_{t-1}, \quad \hat{p}_{1t} = \theta\hat{\lambda}_{t-1}, \quad \text{and} \quad \hat{\lambda}_t = \omega\hat{\lambda}_{t-1} \quad \text{for } t > 0.    (40)

Now substitute these expressions in equations (37)–(39) and confirm that they solve the difference equation system. This procedure yields three equations that can be solved for the unknowns (ω, γ, θ).
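For concreteness, substituting the guess (40) into (37) and (38) and matching coefficients on \hat{\lambda}_{t-1} gives

0 = 2\theta + \gamma\left(1 + \beta\omega\right) \quad \text{and} \quad 0 = \theta + \gamma + \lambda^{FC}\left(1 + \omega\right);

together with the corresponding restriction obtained from (39), these three equations determine (ω, γ, θ), in general numerically.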
The optimal full-commitment policy increases normalized real balances
m1 with productivity shocks such that relative prices are not affected, (40).
Relative prices respond to past commitments of the policymaker as reflected in
the Lagrange multiplier λ, and the Lagrange multiplier evolves independently
of productivity shocks. When the Lagrange multiplier attains its steady state
value it stays there and optimal policy from thereon fixes the price level and
relative prices. We do not prove it, but for reasonable numerical values of
(β, μ, χ) the coefficient ω is negative but less than one in absolute value, that
is, the system oscillates, but it is stable. In Figure 1 we graph the transitional
dynamics of the economy for some parameter values that are standard for
quantitative economic analysis, β = 0.99, μ = 1.1, and χ = 1. As we can see,
all variables display dampened oscillations around their steady state values.
As discussed above, the FOCs for the initial period of the full-commitment
problem are equivalent to the FOCs (38) and (39) with λ_{-1} = 0, that is, \hat{\lambda}_{-1} = -1. Thus, during a transition period, as the Lagrange multiplier converges to
its steady state value, relative prices change in proportion to the value of the
Lagrange multiplier.
The money-supply policy rule, defined as the first and third expression in
(40), implements the optimal allocation as a competitive equilibrium. To see
this, substitute the policy rule into the log-linear approximation of the optimal
pricing equation (37), and we get
\hat{p}_{1t} = -\frac{1}{2}\gamma\left(1 + \beta\omega\right)\hat{\lambda}_{t-1}.    (41)

Thus, conditional on the full-commitment optimal policy rule for real balances, there exists a unique rational expectations equilibrium (REE) for the economy.

Markov-Perfect Optimal Policy
For a Markov-perfect optimal monetary policy, the policymaker at time t
maximizes the value function (31) subject to the constraints (32), assuming
that future policy choices are some function of the future exogenous shock.
The FOCs for this problem correspond to equations (26) for t = 0 and are

Figure 1
[Transitional dynamics under the optimal full-commitment policy: percent deviations from steady state of real balances m_{1,t}, the Lagrange multiplier λ_{t−1}, the preset relative price p_{1,t}, and the expected inflation rate E_t[P_{t+1}/P_t].]

0 = \frac{1}{m_{1t}/\xi_t} - \chi d_t p_{1t} - \mu\chi\lambda_t p_{0t}^{-\varepsilon} p_{1t}, and    (42)

0 = \frac{1}{p_{1t}} - \chi\frac{m_{1t}}{\xi_t} d_t - \chi\frac{m_{1t}}{\xi_t} p_{1t}\frac{\partial d_t}{\partial p_{1t}} + \lambda_t p_{0t}^{-\varepsilon}\left[(1-\varepsilon)\left(1 - \mu\chi\frac{m_{1t}}{\xi_t}\right)\frac{p_{1t}}{p_{0t}}\frac{\partial p_{0t}}{\partial p_{1t}} - \mu\chi\frac{m_{1t}}{\xi_t}\left(1 - \frac{p_{1t}}{p_{0t}}\frac{\partial p_{0t}}{\partial p_{1t}}\right)\right].    (43)

Equation (42) denotes the FOC with respect to real balances, m1 , and equation
(43) denotes the FOC with respect to the relative price, p1 .
The Deterministic Steady State of the Markov-Perfect Policy

The deterministic steady state of the Markov-perfect equilibrium has positive
inflation, as opposed to the steady state of the full-commitment solution. It
is straightforward to show that optimal policy does not stabilize prices in the
steady state. Suppose to the contrary that there is no inflation in the steady
state, p0 = p1 = 1, then evaluating equations (32), (42), and (43) at their
deterministic steady state implies that ∂d/∂p1 < 0. But with stable prices,

π = 1, the derivative of allocational efficiency with respect to p_1,

\partial d/\partial p_1 = \varepsilon p_1^{-\varepsilon-1}\left(\pi^{-\varepsilon-1} - 1\right),    (44)

is zero, and we have a contradiction. On the other hand, with positive inflation,
the impact of p1 on allocational efficiency is negative. This suggests that the
steady state inflation rate is positive, as indeed shown by Wolman (2001). We
can find the steady state inflation rate as the solution to the following fixed-point
problem. Conditional on some inflation rate, π , use equations (35) and (36) to
determine steady state real balances, m1 , and marginal cost, ψ. Conditional on
(π , m1 , ψ) , use equation (42) to obtain the steady state Lagrange multiplier
λ. Finally, we have to verify that equation (43) is satisfied.
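A minimal sketch of this procedure (not from the original article; the parameter values are assumed for illustration, and the final check against (43) is left to an outer root-finding step over π):

beta, mu, chi = 0.99, 1.1, 1.0      # assumed illustrative parameters
eps = mu / (mu - 1.0)

def candidate_steady_state(pi):
    # given an inflation rate pi, recover the remaining steady state objects
    m1 = (1.0 / (chi * mu)) * (1.0 + beta * pi ** (eps - 1.0)) / (1.0 / pi + beta * pi ** (eps - 1.0))  # (36)
    p1 = (2.0 / (1.0 + pi ** (1.0 - eps))) ** (1.0 / (1.0 - eps))  # price index (14) with p0 = pi*p1
    p0 = pi * p1
    psi = chi * m1 * p1                                            # (35)
    d = 0.5 * (p0 ** (-eps) + p1 ** (-eps))                        # (18)
    lam = (1.0 / m1 - chi * d * p1) / (mu * chi * p0 ** (-eps) * p1)  # (42) solved for lambda
    return m1, p1, p0, psi, lam

# an outer loop would adjust pi until the remaining FOC, (43), is satisfied
print(candidate_steady_state(1.005))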
The competitive equilibrium constraint (32), together with the FOCs for
optimal policy, (42) and (43), evaluated at their deterministic steady state
indeed yield a unique solution for the steady state, (π^{MP}, m_1^{MP}, ψ^{MP}). Note, however, that contingent on the steady state Markov-perfect real balances m_1^{MP}, the competitive equilibrium constraint alone is consistent with multiple steady states. In Figure 2, we graph real balances as a function of the inflation rate, π, based on equation (36). Notice that as the inflation rate increases, real balances first increase and then decline. This means that for a given choice of real balances that is not too high, m_1 > m_1^{FC} = 1/(χμ), there are two steady state inflation rates.

Figure 2
[Steady state real balances m_1, from equation (36), plotted against the inflation rate π = p_0/p_1; a horizontal line at the Markov-perfect level m_1^{MP} > 1/(χμ) crosses the curve at two inflation rates, π_1^{MP} and π_2^{MP}.]
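The following minimal sketch (not from the original article; parameters and the level of real balances are assumed purely for illustration) makes the multiplicity concrete by finding the two inflation rates consistent with a given m_1:

from scipy.optimize import brentq

beta, mu, chi = 0.99, 1.1, 1.0   # assumed illustrative parameters
eps = mu / (mu - 1.0)

def m1_steady(pi):
    # steady state real balances as a function of inflation, equation (36)
    return (1.0 / (chi * mu)) * (1.0 + beta * pi ** (eps - 1.0)) / (1.0 / pi + beta * pi ** (eps - 1.0))

m1_bar = 1.003 / (chi * mu)      # a level slightly above 1/(chi*mu), assumed for illustration
f = lambda pi: m1_steady(pi) - m1_bar

pi_low = brentq(f, 1.0001, 1.05)   # crossing on the rising branch of the curve
pi_high = brentq(f, 1.05, 3.0)     # crossing on the declining branch
print(pi_low, pi_high)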


Local Properties of the Markov-Perfect Policy

For a local approximation of the optimal Markov-perfect policy we can show
that the policy stabilizes prices around the trend growth path in response to
productivity shocks. Because the steady state involves positive inflation, the
expressions for the local approximations are quite convoluted, and we do not
display them here. Suffice it to say that locally the optimal Markov-perfect
solution is of the form
\hat{p}_{1t} = \hat{m}_{1t} - \hat{\xi}_t = \hat{\lambda}_t = 0.    (45)

We can substitute the local approximation of the Markov-perfect policy rule,
second and third equalities of (45), into the log-linear approximation of the
optimal pricing equation (15) when the steady state has non-zero inflation and
get
\hat{p}_{1t} = \beta\, \frac{(\varepsilon - 1)\left(1 - \mu\chi m_1\right)\pi^{\varepsilon}}{(\varepsilon - 1)\pi^{\varepsilon} - \mu\chi m_1\left(1 + \varepsilon\pi^{\varepsilon-1}\right)}\, \hat{p}_{1,t+1}.    (46)

Note that for a steady state with zero inflation, the coefficient on the right-hand
side term is zero. Since the steady state of the Markov-perfect equilibrium
involves only a very small amount of inflation, the coefficient on future prices
is close to zero and certainly less than one. Thus, solving the equation forward
implies that there exists a unique REE, \hat{p}_{1t} = 0.

4. GLOBAL PROPERTIES OF OPTIMAL POLICY

We now show that the policy rule implied by a Markov-perfect optimal policy
does not globally implement the optimal policy allocation. We also conjecture
that the policy rule implied by the full-commitment policy may not always be
implementable. An augmented full-commitment policy rule that can respond
to contemporaneous variables as described in Section 2, however, is likely to
implement the optimal policy allocation.
For the analysis of the global properties of policy rules, it will be useful to rewrite a firm’s profit maximization condition (12), which represents
the competitive equilibrium constraint for the planning problem. Solve this
expression for a firm’s optimal relative price as a markup over the average
marginal cost for which the price is set
\frac{P_t(j)}{P_t} = \mu\, \frac{\psi_t + \beta E_t\, \psi_{t+1}\left(P_{t+1}/P_t\right)^{\varepsilon}}{1 + \beta E_t\left(P_{t+1}/P_t\right)^{\varepsilon-1}}.    (47)

We can think of this expression as a firm’s optimal relative price choice on
the left-hand side, p_{0t}, conditional on the relative prices set by all other firms, \bar{p}_{0t}, determining the right-hand side of the equation. The behavior of the other firms is reflected in the equilibrium values of marginal cost and the inflation


rate. For our argument, we will assume that there are no shocks to the economy,
that is, productivity is constant. Using the equilibrium conditions (13), (16),
and (22) for the right-hand side of (47), we then get
p_{0,t} = \mu\chi\, \frac{m_{1t}\, p_1\left(\bar{p}_{0,t}\right) + \beta m_{1,t+1}\, \bar{p}_{0,t}\, \pi_{t+1}^{\varepsilon-1}}{1 + \beta\pi_{t+1}^{\varepsilon-1}}, \quad \text{with } \pi_{t+1} = \bar{p}_{0,t}/p_1\left(\bar{p}_{0,t+1}\right).    (48)

Markov-Perfect Policy
The Markov-perfect policy rule not only stabilizes prices in response to small
productivity shocks, but stabilization is the globally optimal response to shocks,
m_{1t} = m_1^{MP}\, \xi_t.    (49)

We can verify that (49) is the optimal response to productivity shocks by
substituting the expression for m1t into equations (32), (42), and (43). This
policy rule reflects the definition of a Markov-perfect policy: it depends only
on payoff-relevant state variables, that is, ξ t only in our case.
In general, the Markov-perfect policy rule cannot implement the planning
allocation as a competitive equilibrium outcome. King and Wolman (2004)
argue that a Markov-perfect optimal policy introduces strategic complementarities into the firms’ price-setting behavior and thereby makes multiple equilibria possible. With constant normalized real balances of the Markov-perfect
policy and no productivity shocks, the optimal pricing condition (48) simplifies to
p_{0,t} = \mu\chi m_1^{MP}\, \frac{p_1\left(\bar{p}_{0,t}\right) + \beta\bar{p}_{0,t}\, \pi_{t+1}^{\varepsilon-1}}{1 + \beta\pi_{t+1}^{\varepsilon-1}}.    (50)

Strategic complementarities are said to be present if a representative firm increases its own control variable when it perceives that all other firms increase their control variable. In terms of the price-setting equation (50): a firm increases its own relative price, p_{0t}, on the left-hand side of the expression if all other firms increase their relative price, \bar{p}_{0t}, on the right-hand side of the expression. Essentially, if all other firms increase their price, \bar{p}_{0t}, then the expected inflation rate increases, and therefore a firm will increase its own relative price in order to prevent an erosion of its relative price in the next period. Since the equilibrium relative price is a fixed point of expression (50), p_{0t} = \bar{p}_{0t}, strategic complementarities raise the possibility of multiple fixed points, that is, multiple equilibria.
In Figure 3 we graph the RHS of (50), conditional on some value for p_{1,t+1}. If we evaluate the RHS of (50) at \bar{p}_{0t} = 1, we get p_1(\bar{p}_{0t}) = 1 and RHS = μχ m_1^{MP} > 1. If we consider the limit of the RHS as \bar{p}_{0t} becomes arbitrarily large, we see that p_1(\bar{p}_{0t}) converges to a finite value and the inflation rate becomes arbitrarily large, thus the RHS converges to a line through the origin with slope μχ m_1^{MP} > 1. Without a further analysis of the behavior of the RHS for finite positive values of \bar{p}_{0t}, this at least suggests the possibility of two intersection points of the RHS with the diagonal. Furthermore, we know that in the steady state, when p_{1,t+1} = p_1^{MP}, there are indeed two solutions for p_0 to equation (36). King and Wolman (2004) show that, in general, there exist two intersection points. Thus there is no unique equilibrium and the Markov-perfect policy rule does not implement the planning allocation.

Figure 3
[The RHS of (50) plotted against \bar{p}_{0t}: the curve starts at μχ m_1^{MP} > 1 when \bar{p}_{0t} = 1 and approaches a ray through the origin with slope μχ m_1^{MP}, so it can cross the 45-degree line at two points, such as p*_{0t}.]
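A small numerical sketch of this argument (not from the original article): conditioning on p_{1,t+1} = 1 and assuming the illustrative values β = 0.99, ε = 11, and μχ m_1^{MP} = 1.004, the mapping defined by the RHS of (50) indeed has two fixed points.

from scipy.optimize import brentq

beta = 0.99
eps = 11.0        # so that mu = eps/(eps - 1) = 1.1, assumed for illustration
k = 1.004         # assumed illustrative value of mu*chi*m1^MP, slightly above one

def p1_of(p0_bar):
    # price index (14): preset relative price implied by the other firms' reset price
    return (2.0 - p0_bar ** (1.0 - eps)) ** (1.0 / (1.0 - eps))

def rhs(p0_bar):
    # right-hand side of (50), conditioning on p_{1,t+1} = 1 so that pi_{t+1} = p0_bar
    pi_next = p0_bar
    return k * (p1_of(p0_bar) + beta * p0_bar * pi_next ** (eps - 1.0)) / (1.0 + beta * pi_next ** (eps - 1.0))

f = lambda x: rhs(x) - x
fix_low = brentq(f, 1.0, 1.2)    # the RHS starts above the diagonal at 1, then dips below it
fix_high = brentq(f, 1.2, 3.0)   # and crosses the diagonal a second time from below
print(fix_low, fix_high)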

Full-Commitment Policy
Optimal full-commitment monetary policy stabilizes prices in response to
productivity shocks not only locally around the steady state, but also globally,
m_{1t} = \tilde{m}_{1t}\,\xi_t, \quad \tilde{m}_{1t} = \tilde{m}\left(\lambda_{t-1}\right), \quad p_{1t} = \tilde{p}_1\left(\lambda_{t-1}\right), \quad \text{and} \quad \lambda_t = \tilde{\lambda}\left(\lambda_{t-1}\right).    (51)
To see this, simply note that equations (32), (33), and (34) define a system in (\tilde{m}_{1t}, p_{1t}, \lambda_{t-1}) that is independent of productivity shocks. Different from the
Markov-perfect policy, the Lagrange multiplier on the competitive equilibrium
constraint is not constant and therefore the normalized real balances are not
constant.
We do not have unambiguous results on the implementation of the planning
allocation through the full-commitment policy rule. On the one hand, we
can show that if the Lagrange multiplier has attained its steady state value,
λt−1 = λF C , then the full-commitment policy rule implements the planning


solution. On the other hand, as long as the Lagrange multiplier has not attained
its steady state, the full-commitment policy rule suffers from some of the same
problems as does the Markov-perfect policy.
Suppose that the Lagrange multiplier has attained its steady state value,
λt−1 = λF C . If we substitute the value for the Lagrange multiplier in the FOCs
(33) and (34), we can see that they will always be satisfied from there on. But
this means that from there on the normalized real balances attain their steady
state value, m_1^{FC}, and the competitive equilibrium constraint (32) simplifies to

0 = p_{0t}^{1-\varepsilon}\left(1 - \frac{p_{1t}}{p_{0t}}\right).    (52)

Therefore p_{1t} = p_{0t}, that is, P_t^{FC} = P_{t-1}^{FC}, and prices are determined.
Now consider the transitional phase when the Lagrange multiplier differs
from its steady state value. Given the implied policy rule (51), we can construct
future nominal money stocks recursively as functions of the initial value of
the Lagrange multiplier

M_t = \xi_t \cdot \tilde{m}\left(\lambda_{t-1}\right) \cdot \tilde{p}_1\left(\lambda_{t-1}\right) \cdot P_t, and    (53)

P_t = \frac{p_{0,t-1}}{p_{1,t}} P_{t-1} = \frac{p_0\left(p_{1,t-1}\right)}{p_{1,t}} P_{t-1} = \frac{p_0\left(\tilde{p}_1\left(\lambda_{t-2}\right)\right)}{\tilde{p}_1\left(\lambda_{t-1}\right)} P_{t-1}.    (54)
With full commitment, a policymaker can always announce a time path for
the nominal money supply and follow through on that announcement. Given
the nominal money supply rule, we can rewrite the optimal pricing condition
(48) in nominal terms and get
P_{0,t} = \mu\chi\, \frac{M_t + \beta M_{t+1}\left(P_{t+1}/P_t\right)^{\varepsilon-1}}{1 + \beta\left(P_{t+1}/P_t\right)^{\varepsilon-1}}, \quad \text{with} \quad \frac{P_{t+1}}{P_t} = \left[\frac{P_{0,t+1}^{1-\varepsilon} + P_{0,t}^{1-\varepsilon}}{P_{0,t}^{1-\varepsilon} + P_{0,t-1}^{1-\varepsilon}}\right]^{1/(1-\varepsilon)}.    (55)

As we do for the analysis of the Markov-perfect policy, we are looking for
a fixed point in the optimal nominal price, P_{0t}, conditional on the past and
future nominal prices, P0,t−1 and P0,t+1 , and the nominal money stocks, Mt
and Mt+1 . Clearly for a constant money supply, that is, the constant steady
state Lagrange multiplier, there is a unique solution for P0t . If the Lagrange
multiplier converges globally to its steady state, then if the difference between
Mt and Mt+1 is small enough, we will also have a unique solution. We do
not, however, prove that there is a unique solution for the initial phase of the
transition period.
Note that for full-commitment policy, we have only outlined the same
potential for multiple equilibria as King and Wolman (2004) have shown to
exist for the Markov-perfect policy rule. We have not proven that the full-commitment policy rule cannot implement the planning allocation. Whether
or not the full-commitment policy rule implements the planning allocation


may be irrelevant if one believes that a policymaker can always respond to
contemporaneous variables. If such a response is feasible, then an augmented
full-commitment policy rule as described in Section 2 may always implement
the planning allocation.

5. CONCLUSION

This paper has considered optimal monetary policy as the solution to both full-commitment and time-consistent Markov-perfect planning problems. The solutions are consistent with rational expectations competitive equilibria. The
optimal solution to the planning problem implies a rule for the assumed policy
instrument, in our case, a money supply instrument. We have then verified
that, for local approximations to the solution of the optimal policy problem,
the implied policy rules implement the planning allocations, that is, the planning allocation is the unique rational expectations equilibrium conditional on
the implied policy rule. However, following on the insights of King and Wolman (2004), we have then examined whether the implied policy rules also
implement the allocation globally. We find that a money supply rule that is
Markov-perfect does not implement the planning solution. We provide a partial argument that the full-commitment money supply rule does implement the
planning solution, but we do not have a complete proof for this statement.
For the analysis, we have taken the choice of monetary instrument, in this
case the nominal money stock, as given but this choice is not innocuous. In
other work (Dotsey and Hornstein 2005), we have argued that equilibrium
indeterminacy may depend on the choice of policy instrument. In particular,
if the Markov-perfect policy uses the nominal interest rate as an instrument,
the equilibrium is determinate.

REFERENCES
Benigno, Pierpaolo, and Michael Woodford. 2005. “Inflation Stabilization
and Welfare: The Case of a Distorted Steady State.” Journal of the
European Economic Association 3 (December): 1–52.
Dotsey, Michael, and Andreas Hornstein. 2005. Unpublished Notes.
Giannoni, Marc P., and Michael Woodford. 2002a. “Optimal Interest Rate
Rules: I. General Theory.” NBER Working Paper 9419 (January).


. 2002b. “Optimal Interest Rate Rules: II. Applications.”
NBER Working Paper (December).
Khan, Aubhik, Robert G. King, and Alexander L. Wolman. 2003. “Optimal
Monetary Policy.” Review of Economic Studies 70 (4): 825–60.
King, Robert G., and Alexander L. Wolman. 1999. “What Should the
Monetary Authority Do When Prices Are Sticky?” In Monetary Policy
Rules, ed. John B. Taylor. Chicago: University of Chicago Press: 349–98.
. 2004. “Monetary Discretion, Pricing Complementarity,
and Dynamic Multiple Equilibria.” Quarterly Journal of Economics
119 (4): 1513–53.
Marcet, Albert, and Ramon Marimon. 1998. “Recursive Contracts.” Working
Paper, Universitat Pompeu Fabra, Barcelona. Available at
http://www.econ.upf.edu/crei/people/marcet/papers.html (Last accessed
on May 22, 2006).
Wolman, Alexander L. 2001. “A Primer on Optimal Monetary Policy With
Staggered Price Setting.” Federal Reserve Bank of Richmond Economic
Quarterly 87 (4): 27–52.

Can Feedback from the
Jumbo CD Market Improve
Bank Surveillance?
R. Alton Gilbert, Andrew P. Meyer, and Mark D. Vaughan

In recent years, policymakers in the Basel countries have begun exploring
strategies for harnessing financial markets to contain bank risk. Indeed,
the new Accord counts market discipline, along with supervisory review
and capital requirements, as an explicit pillar of bank supervision.1 A popular proposal for implementing market discipline in the United States would
require large banks to issue a standardized form of subordinated debt (Board
of Governors 1999; Board of Governors 2000; Meyer 2001). Advocates of
Critical feedback from a number of sources greatly improved this work. We would specifically
like to thank the examiners and supervisors (Carl Anderson, John Block, Joan Cronin, Ben
Jones, Kim Nelson, and Donna Thompson) as well as the economists (Gurdip Bakshi, Rosalind
Bennett, Mark Carey, Margarida Duarte, Kathleen McDill, Bill Emmons, Doug Evanoff, Mark
Flannery, John Jordan, John Hall, Jim Harvey, Tom King, John Krainer, Bill Lang, Jose
Lopez, Dan Nuxoll, Evren Ors, Jeremy Piger, James Thomson, Sherrill Shaffer, Scott Smart,
Haluk Unal, Larry Wall, John Walter, and John Weinberg) who provided helpful comments.
We also profited from exchanges with seminar participants at Baylor University, the Federal
Deposit Insurance Corporation, the Office of the Comptroller of the Currency, and Washington
University in St. Louis (Department of Economics and the Olin School of Business), as well
as exchanges at the Federal Reserve Surveillance Conference, the Federal Reserve Committee
on Financial Structure meetings, and the Financial Management Association meetings. Any
remaining errors and omissions are ours alone. The views expressed do not represent official
positions of the Federal Reserve Bank of Richmond, the Federal Reserve Bank of St. Louis,
or the Federal Reserve System.
1 The cornerstone of supervisory review—the most important of the pillars—is thorough, regularly scheduled, on-site examinations. The Federal Deposit Insurance Corporation Improvement
Act of 1991 (FDICIA) requires most U.S. banks to submit to a full-scope examination every 12
months. These examinations focus on six components of safety and soundness—capital protection
(C), asset quality (A), management competence (M), earnings strength (E), liquidity risk exposure
(L), and market risk sensitivity (S)—CAMELS. At the close of each exam, an integer ranging
from 1 (best) through 5 (worst) is awarded for each component. Supervisors then use these component ratings to assign a composite CAMELS rating reflecting overall condition—also on a 1-to-5
scale. In general, banks with composite ratings of 1 or 2 are considered satisfactory while banks
with ratings of 3, 4, or 5 are unsatisfactory and subject to supervisory sanctions. (Footnote 10
offers more details about these sanctions.) At year-end 2005, 4.63 percent of U.S. banks held
unsatisfactory ratings.


this proposal argue that high-powered performance incentives in the subordinated debt (sub-debt) market will produce accurate risk assessments. And, in
turn, these assessments—expressed for risky institutions through rising yields
or difficulties rolling over maturing debt—will pressure bank managers to
maintain safety and soundness (Calomiris 1999; Lang and Robertson 2002).
Even if financial markets apply little direct pressure to curb risk taking, market data could still enhance supervisory review by improving off-site surveillance.2 Off-site surveillance involves the use of accounting data
and anecdotal evidence to monitor the condition of supervised institutions
between scheduled exams.3 Market assessments could enhance surveillance
in three ways: (1) by flagging banks missed by conventional off-site tools,
(2) by reducing uncertainty about banks flagged by other tools, or (3) by providing earlier warning about developing problems in banks flagged by these
tools (Flannery 2001). Such enhancements would reduce failures over time
by enabling supervisors to take action earlier to address safety-and-soundness
problems.
One concern about attempts to incorporate market data into surveillance is
regulatory burden—current proposals would require large banking organizations to float a standardized issue of sub-debt. That most large banks currently
issue sub-debt does not imply the burden is negligible.4 Voluntary issuance
varies considerably over time with market conditions. For example, the number of sub-debt issues by the top-50 banking organizations rose from 3 in 1988
to 108 in 1995, only to fall to 42 in 1999 (Covitz, Hancock, and Kwast 2002).
Moreover, banks currently issuing sub-debt may be choosing maturities unlikely to produce valuable risk signals, so a mandated maturity would still
impose a regulatory burden. Before placing additional burden on the banking
sector, particularly at a time when other sizable regulatory changes (Basel II)
are in the offing, supervisors should first assess the power of risk signals from
existing securities.
One potential source of risk assessments that can be mined without increasing regulatory burden is the market for jumbo certificates of deposit (CDs).
2 Bliss and Flannery (2001) found that managers of holding companies do not respond to
market pressure to contain risk, though Rajan (2001) questioned the ability of their framework to
unearth such evidence.
3 Examination is the most effective tool for spotting safety-and-soundness problems, but it is
costly and burdensome—costly because of the examiner resources required and burdensome because
of the intrusion into bank operations. Surveillance reduces the need for unscheduled visits by
prodding bankers to contain risk between scheduled exams. It also helps supervisors plan exams
by highlighting risk exposures. For example, if pre-exam surveillance reports indicate a bank has
significant exposure to interest rate fluctuations, supervisors will staff the exam team with additional
market risk expertise.
4 Mandating issuance of a security with specific attributes is tantamount to a tax on capital
structure. Although we know of no direct evidence about the burden of this tax, heterogeneity in
sub-debt maturities, outstanding volume over time, and the source of issue (bank vs. bank holding
company) suggest it is nontrivial.


Jumbo CDs are time deposits with balances exceeding $100,000. The typical
bank relies on a mix of deposits to fund assets—checkable deposits, passbook
savings accounts, retail CDs, and jumbo CDs. Both retail and jumbo CDs
have fixed maturities (as opposed to checkable deposits which are payable
on demand); they differ by Federal Deposit Insurance Corporation (FDIC)
coverage. Only the first $100,000 of deposits is eligible for insurance, so the
entire retail CD (which is less than $100,000) is insured while only the first
$100,000 of a jumbo CD is covered. Checkable deposits, passbook savings,
and retail CDs are often collectively referred to as “core deposits” because
balances respond little to changes in bank condition and market rates. Full
FDIC coverage makes these deposits a stable and cheap source of funding. At
year-end 2005, U.S. banks funded on average 67.1 percent of assets with core
deposits and 14.4 percent with jumbo CDs. The average jumbo CD balance
in the fourth quarter of 2005 was $330,886; the average balance in 95 percent
of the U.S. banks exceeded $152,115. The average maturity was just over
one year. Jumbo CDs are considered a “volatile” liability because relatively
large uninsured balances and short maturities force issuing banks to match
yields (risk-free rates plus default premiums) available in the money market
or lose the funding. This pressure to “price” new conditions quickly makes
the jumbo CD market, in theory, an important source of feedback for off-site
surveillance.5
Potentially valuable jumbo CD data are currently available for most commercial banks. In contrast, only very large banking organizations now issue
sub-debt. These organizations may be the most important from a systemic-risk
standpoint, but the focus of off-site surveillance—indeed of all U.S. prudential supervision—is on the bank, and most banks do not issue or belong to
holding companies that issue sub-debt. Moreover, a negative risk signal from
a holding company claim would not, by itself, help supervisors identify the
troubled subsidiary. Jumbo CDs constitute a large class of direct claims on
both large and small banks. At year-end 2005, U.S. banks with more than
5 Since the early 1990s, financial innovation has offered households a growing array of substitutes for traditional bank deposits. As a result, the supply of core deposits has declined secularly,
forcing banks to turn to more volatile funding sources such as jumbo CDs. Between 1992 and
2005, for example, the average core deposit-to-total asset ratio for U.S. banks tumbled from 80.1
percent to 67.1 percent, while average jumbo CD dependence jumped from 7.5 percent to 14.4
percent of assets. Increasing reliance on jumbo CDs implies greater exposure to liquidity and
market risk—a bad outcome from the perspective of a bank supervisor. At the same time, the
$100,000 ceiling on deposit insurance makes jumbo CD holders savvier about bank risk than other
depositors. So the jumbo CD market could exert pressure on bank managers to contain risk—either
directly through the impact of higher yields and lower balances on profits or indirectly through
supervisory responses to risk signals conveyed by yields and withdrawals. Such pressure would
complement supervisory review. Hence, another contribution of this article is to offer insight into
the tradeoff by quantifying the potential contribution of jumbo CD data to off-site surveillance. See
Feldman and Schmidt (1991) for further discussion of the tradeoff between greater risk exposure
and more reliable market data implied by rising jumbo CD dependence. Our results suggest this
rising dependence makes supervisors on balance worse off.


$500 million in assets funded 14.6 percent of assets with jumbo CDs; for
banks with less than $500 million, the average jumbo-CD-to-total-asset ratio
was 14.3 percent. Finally, risk signals in the form of yields and withdrawals
can be cheaply and easily constructed because banks report jumbo CD interest expense and balances quarterly to their principal supervisor. Also, nearly
30 years of research—much of which relies on these interest-expense and
account-balance data—has produced robust evidence of risk pricing in the
jumbo CD market.
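As an illustration of how cheaply such signals can be built, the following sketch (the inputs and numbers are hypothetical and not drawn from the article's data) computes a simple annualized yield, a default premium, and a quarter-over-quarter runoff measure from report-style fields:

def simple_yield(quarterly_interest_expense, average_balance):
    # annualized average yield paid on jumbo CDs during the quarter
    return 4.0 * quarterly_interest_expense / average_balance

def default_premium(jumbo_cd_yield, riskfree_yield):
    # spread over a comparable-maturity risk-free rate
    return jumbo_cd_yield - riskfree_yield

def runoff(previous_balance, current_balance):
    # quarter-over-quarter withdrawal rate; positive values indicate runoff
    return (previous_balance - current_balance) / previous_balance

# hypothetical example: $1.3 million of quarterly interest on $100 million of jumbo CDs
y = simple_yield(1.3, 100.0)
print(default_premium(y, 0.045), runoff(100.0, 92.0))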
Data from the jumbo CD market might prove particularly useful in
community bank surveillance. Community banks specialize in making loans
to and taking deposits from small towns or city suburbs. For regulatory purposes, the Financial Modernization Act of 1999 established an asset threshold
of $500 million—expressed in constant 1999 dollars. At year-end 2005, nearly
90 percent of U.S. banks operated on this scale. Not surprisingly, most failures are community banks. They also frequently operate on extended exam
schedules, with up to 18 months elapsing between full-scope, on-site visits.
This schedule diminishes the quality of quarterly financial statements, thereby
reducing the effectiveness of off-site monitoring.6 It is possible that holders
of community bank jumbo CDs supplement public financial data with independent “Peter-Lynch-type” research.7 Or, inside information about bank
condition could leak from boards of directors, which typically include prominent local businesspeople. (Community bank jumbo CDs are often held by
such “insiders.”) Thus, sudden changes in yields or withdrawals might signal trouble more quickly or reliably than surveillance tools based on financial
statements.
In short, jumbo CDs fund a large portion of bank assets and furnish a
cheap source of market data, yet no study has formally tested the surveillance value of yields and withdrawals. We do so with an early warning model
and out-of-sample timing conventions designed to mimic current surveillance
practices. Specifically, we generate risk rankings using jumbo CD default
premiums and quarter-over-quarter withdrawals for banks with satisfactory
supervisory ratings. We rank the same banks by CAMELS-downgrade probability as estimated by an econometric surveillance model. Finally, out-of-sample performance for all three rankings is compared over a sequence of two-year windows running from 1992 to 2005, counterfactually, as if supervisors in the fourth quarter of each year possessed data only up to that point. We find that jumbo CD signals would neither have flagged banks missed by the CAMELS-downgrade model nor reduced uncertainty about banks flagged by the model. We also find that jumbo CD signals would not have provided earlier warning about developing problems in banks flagged by the CAMELS-downgrade model. These results are broadly consistent with other recent work, so we close by exploring reasons the surveillance value of market data may have been overestimated.

6 Verification of financials is an important source of value created by exams (Berger and Davies 1998; Flannery and Houston 1999). Indeed, recent research has documented large adjustments in asset-quality measures following on-site visits, particularly for banks with emerging problems (Gunther and Moore 2000).

7 Peter Lynch ran Fidelity’s Magellan Fund from 1977 to 1990. During this period, fund value rose over 2,700 percent. Lynch was famous for looking past financial statements to the real world, observing consumer and firm behavior in malls, for example. For more details, see Lynch and Rothchild (2000).

1. PRIOR LITERATURE

Research on the jumbo CD market since the mid-1970s—mostly with 1980s
data—has consistently found evidence of risk pricing (see Table 1). Some 20
articles have been published using a mix of time series and panel approaches:
18 articles exploited U.S. data, 11 examined only yields, 4 examined only
runoff (i.e., deposit withdrawals), and 5 studied both. Most drew heavily on
quarterly financial statements. Only one article—the first contribution to the
literature in 1976—found no link between bank risk and yields or runoff. In
some ways, the robustness of these results is striking because U.S. samples
mostly predate the Federal Deposit Insurance Corporation Improvement Act of
1991 (FDICIA). Before this Act, the majority of failures were resolved through
purchases and assumptions, whereby the FDIC offered cash to healthy banks
to assume the liabilities of failed ones. So, even though jumbo CD holders
faced default risk in theory, many were shielded from losses in practice.8
Although evidence from prior literature about our out-of-sample test windows (1992–2005) is thinner, intuition and history make a case for significant risk sensitivity. The handful of articles looking at 1990s data found
risk pricing, but no study examined jumbo CD data for the post-2000 period. Nonetheless, economic intuition suggests sensitivity should be strong
because of three important institutional changes in the 1990s. First, as noted,
the FDICIA directed the FDIC to resolve failures in the least costly way, which
implies imposing a greater share of losses on uninsured bank creditors (Benston and Kaufman 1998; Kroszner and Strahan 2001).9 This change should have increased expected losses for jumbo CD holders and their incentive to monitor bank condition.

8 Before 1991, expected losses had three components: (1) the probability of bank failure, (2) the loss if the failed bank were not purchased by a healthy one, and (3) the probability the failed bank would not be purchased. Even if (1) and (2) were positive, expected losses would still be approximately zero if jumbo CD holders expected all failures to be resolved with purchase and assumptions. The need to model FDIC behavior, therefore, complicates estimation of risk sensitivity for the pre-1991 regime. Suppose, for example, (1) and (2) fall, reducing expected losses, incentives to monitor risk, and jumbo CD risk sensitivity. But the FDIC responds by curtailing implicit coverage—perhaps because of the reduced threat of contagious runs. If large enough, this offsetting effect could induce a rise in measured sensitivity to bank condition.

9 As discussed in footnote 8, expected losses equal zero if jumbo CD holders anticipate resolution through purchase and assumptions. But the FDICIA should have changed expectations about FDIC behavior. Between 1988 and 1990, jumbo CD holders suffered losses in only 15 percent of bank failures. From 1993 to 1995, they lost money 82 percent of the time.

Second, the Financial Institutions Reform, Recovery, and Enforcement Act of 1989 required supervisors to disclose serious
enforcement actions (Gilbert and Vaughan 2001).10 Third, in the late 1990s,
the FDIC began putting quarterly financial data for individual banks on the
Web, along with tools for comparing performance with industry peers. The
second and third changes should have lowered the cost to jumbo CD holders of
monitoring bank condition. Evidence from U.S. banking history also implies
our sample should feature strong risk pricing. Gorton (1996), for example,
documented a link between discounts on state bank notes and issuer condition during the free-banking era, while Calomiris and Mason (1997) observed
sizable differences in yields and runoff for weak and strong Chicago banks
prior to the 1932 citywide panic. Friedman and Schwartz (1960) also noted
that public identification of banks receiving loans from the Reconstruction
Finance Corporation triggered runs in August 1932. More recently, Continental Illinois began hemorrhaging uninsured deposits when the extent of its
problems became public in May 1984 (Davison 1997). In all these cases, uninsured claimants monitored and reacted to changes in bank condition, thereby
impounding risk assessments into prices or quantities.
Evidence of risk pricing in the jumbo CD market does not imply that
yield and runoff data would add value in surveillance. First, stable in-sample
estimates of reactions to current bank condition and reliable out-of-sample
forecasts of emerging safety-and-soundness problems are not the same thing.
Evidence from the market efficiency literature, for example, has demonstrated
that trading strategies based on well-documented pricing anomalies, such as
calendar effects, size effects, and mean reversion, do not offer abnormal returns
when tested in real time by fund managers (Roll 1994; Malkiel 2003). Second,
just as assessing the profitability of trading rules requires a benchmark, such as
the return from an index fund, assessing the surveillance value of market data
requires a baseline for current practices. It is not enough to note that jumbo
CD signals flag problem banks because supervisors already have systems in
place for these purposes. The true litmus test is this: Does integration of yields and runoff into actual surveillance routines consistently and materially improve out-of-sample forecast accuracy?11

10 The term “enforcement action” refers to a broad range of powers used to address suspect practices of depository institutions and institution-affiliated parties—the supervisory sanctions mentioned in footnote 1. Typically, these actions are imposed in response to adverse exam findings, but they can also be triggered by deficient capital levels under Prompt Corrective Action or by negative information gathered through off-site surveillance. Usually enforcement actions are implemented in a graduated manner, with informal preceding formal actions. An informal action is the most common; it is simply a private, mutual understanding between a bank and its supervisory agency about the steps needed to correct problems. Formal actions are far more serious. Supervisors resort to them only when violations of law or regulations continue or when unsafe and abusive practices occur. Formal enforcement actions are legally enforceable and, in most cases, publicly disclosed.
Four recent articles have gauged the surveillance value of market data
against a current practices benchmark. Evanoff and Wall (2001) compared regulatory capital ratios and sub-debt yields as predictors of supervisory ratings,
finding that sub-debt yields modestly outperform capital ratios in one-quarter-ahead tests. Gunther, Levonian, and Moore (2001), meanwhile, observed in-sample improvement in model fit when estimated default frequencies (EDFs,
as produced by Moody’s KMV) were included in an econometric model designed to predict holding company supervisory ratings with accounting data.
Krainer and Lopez (2004) also experimented with equity market variables—
in this case, cumulative abnormal stock returns as well as EDFs—in a model
of holding company ratings. Unlike Gunther, Levonian, and Moore (2001),
they assessed value added in one-quarter-ahead forecasts. Like Evanoff and
Wall (2001), they noted only a modest improvement in out-of-sample performance. Finally, Curry, Elmer, and Fissel (2003) added various equity signals
to an econometric model built to predict four-quarter-ahead supervisory ratings, again witnessing only a slight increase in forecast accuracy.
Recent tests against a surveillance benchmark have advanced the market
data literature, to be sure, but the absence of empirical tests modeled on actual
practice mutes the potential impact on supervisory policy. Evanoff and Wall
(2001), for example, proxied supervisor perceptions of safety and soundness
with regulatory capital ratios—a practice that was problematic because capital
is the sole criterion only when Prompt Corrective Action (PCA) thresholds are
violated. Otherwise, a variety of measures are weighed.12 In addition, Gunther, Levonian, and Moore (2001) and Krainer and Lopez (2004) conducted
performance tests with holding company data—a problematic approach because, as noted, off-site surveillance focuses on individual banks. Indeed,
the Federal Reserve, which has responsibility for holding company supervision, does not maintain an econometric model estimated on holding company
data.13 Finally, Gunther, Levonian, and Moore (2001) and Curry, Elmer, and
Fissel (2003) relied on tests unlikely to impress supervisors: the first assessing in-sample performance only and the second assessing out-of-sample performance with a contemporaneous holdout (rather than a period-ahead sample).

11 As discussed in footnote 10, supervisors use enforcement actions to induce banks to address safety-and-soundness problems. Some are quite severe, going as far as permanent removal from the banking industry. The earlier actions are imposed, the more likely problems can be corrected. But enforcement actions impose significant costs on the bank, so supervisors prefer to wait for compelling evidence of serious problems. Hence, jumbo CD signals could add supervisory value by reinforcing conclusions yielded by other surveillance tools, thereby facilitating swifter action.

12 Footnote 1 discusses the CAMELS framework supervisors use to assess bank condition. In any event, evidence from counterfactual applications of PCA to late 1980s/early 1990s data (Jones and King 1995; Peek and Rosengren 1997) suggests the thresholds are too low to affect supervisor behavior.

13 Each article estimated a unique holding company model to benchmark surveillance procedures. Both tested joint hypotheses: (1) the model approximates the one the Fed would use and (2) equity market signals enhance the performance of that model.
Our work improves on this research by employing an econometric model
used in surveillance, out-of-sample timing conventions patterned on current
practices, and data taken from bank (rather than holding company) financial
statements and supervisor assessments. Even more important, we contribute
a coherent framework for use in future research on the surveillance value of
market data.

2. THE DATA
To test the surveillance value of jumbo CD data, we built a long panel containing financial data and supervisory assessments for all U.S. commercial
banks. This data set contained income statement and balance sheet series as
well as CAMELS composite and management ratings from 1988:Q1 through
2005:Q4.14 The accounting data came from the Call Reports—formally the
Reports of Condition and Income—which are collected under the auspices of
the Federal Financial Institutions Examination Council (FFIEC). The FFIEC
requires all U.S. commercial banks to submit such data quarterly to their principal supervisor; most reported items are publicly available. CAMELS ratings were pulled from a nonpublic portion of the National Information Center
database; only examiners, analysts, and economists involved in supervision at
the state or federal level can access these series. Only one substantive sample
restriction was imposed—exclusion of banks with operating histories of under five years. Financial ratios for these start-up, or de novo, banks often take
extreme values that do not imply safety-and-soundness problems (DeYoung
1999). For instance, de novos often lose money in their early years, so earnings ratios are poor. Extreme values could introduce considerable noise into
risk rankings, making it more difficult to assess relative performance. Another
reason for dropping de novos is that supervisors already monitor these banks
closely. The Federal Reserve, for example, examines newly chartered banks
every six months until they earn a composite rating of 1 or 2 in consecutive
exams.
Although our testing framework improves on prior research, our data still
contain measurement error. Only a small number of money center banks issue
negotiable instruments that are actively traded, so true market yields are not
available for a cross section of the industry. It is possible, however, to construct
average yields from the Call Reports for all U.S. banks by dividing quarterly interest expense by average balance. Subtracting rates on comparable-maturity Treasuries from these yields produces something that looks like a default premium series. Other researchers have successfully tested hypotheses about bank risk with this approach (for example, James 1988; Keeley 1990; and, more recently, Martinez-Peria and Schmukler 2001). Still, two related types of measurement error must be acknowledged: the proxy is an average rather than a marginal measure (and, therefore, somewhat backward looking), and it is a quarterly accounting measure rather than a real-time economic one.

14 Two data notes: (1) Explicit assessment of market risk sensitivity (S) was added in 1997, so pre-1997 composites are CAMEL ratings, and (2) none of our empirical exercises exploits the entire dataset (1988:Q1–2005:Q4); each uses a suitable sub-sample. For example, estimation of the downgrade model ends in 2003 to permit out-of-sample tests on 2004–2005.
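
To make the construction concrete, the sketch below (Python with pandas) shows one way to build the average-yield and default-premium proxies just described. The column names jumbo_cd_interest_exp, jumbo_cd_balance, and treasury_rate are hypothetical stand-ins for the relevant Call Report items and Treasury series, not actual item codes, and the annualization convention is only an assumption.

import pandas as pd

# Hypothetical bank-quarter panel; column names are illustrative only.
df = pd.read_csv("call_report_panel.csv")
df = df.sort_values(["bank_id", "quarter"])

# Average balance over the quarter, proxied here by the mean of the current
# and prior quarter-end jumbo CD balances for each bank.
df["avg_balance"] = df.groupby("bank_id")["jumbo_cd_balance"].transform(
    lambda s: (s + s.shift()) / 2
)

# Average yield: quarterly jumbo CD interest expense divided by the average
# balance, annualized and expressed in percent.
df["avg_yield"] = 4 * 100 * df["jumbo_cd_interest_exp"] / df["avg_balance"]

# Crude default premium proxy: average yield minus the rate on a
# comparable-maturity Treasury.
df["default_premium"] = df["avg_yield"] - df["treasury_rate"]
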
Measurement error in this series does not imply that jumbo CD data taken
from the Call Report lack surveillance value. Jumbo CD holders may react
to rising risk by withdrawing funds, and changes in account balances (deposit
runoff) can be measured error-free with accounting data.15 Moreover, distress
models based on financial statements have been a cornerstone of public- and
private-sector surveillance for decades (Altman and Saunders 1997). Indeed,
federal and state supervisors alike give heavy weight to book-value measures
of credit risk and capital protection in routine surveillance, yet both contain
serious measurement error (Barth, Beaver, and Landsman 1996; Reidhill and
O’Keefe 1997). Finally, and most importantly, the supervisory return on
jumbo CD signals—or any market signal for that matter—depends not on the
value of the signal alone, but rather on that value net of the cost of extraction.
Current surveillance routines are built around the Call Reports and, as noted,
these reports already contain the data necessary to construct yield and runoff
series for jumbo CDs. Even if the marginal surveillance value of jumbo CD
signals were low relative to pure market signals because of measurement error,
the marginal cost of extracting jumbo CD signals is near zero. The cost of
integrating market signals into off-site surveillance is not as low because of
the regulatory burden associated with any compulsory security issues and the
training burden associated with changes in supervisory practices. It is possible,
therefore, that jumbo CD data add more net value than pure market signals.
In short, the surveillance value of jumbo CD data is ultimately an empirical
issue.
Still, the net contribution of jumbo CD signals to surveillance cannot be
positive if measurement error renders the data hopelessly noisy. So, as a check,
we performed a simple test on yields and another on runoff—both suggested
that bank condition is priced. In the first test, we compared quarterly yields—
that is, jumbo CD interest expense divided by average balance—for the 5
percent of banks most and least at risk of failure each year from 1992 to 2005 (the period used in out-of-sample testing).16 Over this period, yields at high-risk banks topped yields at low-risk banks by an average of 25 basis points. (By way of comparison, the average spread between yields on three-month nonfinancial commercial paper and three-month Treasury bills for 1992 to 2005 was 24 basis points.) Institutional changes in the 1990s appear to have strengthened risk pricing. Despite declining money market rates, the mean spread between “risky” and “safe” banks climbed from 14 basis points for 1992–1997 to 33 basis points for 1998–2005 (a difference significant at the 1 percent level). In the second test, we examined quarterly jumbo CD growth at the 169 U.S. banks that failed between 1992 and 2005 for two distinct periods in the migration to failure: two to four years out and zero to two years out. Mean quarterly growth two to four years prior to failure was a healthy 8.4 percent. But in the final two years, quarterly growth turned sharply negative, averaging -4.0 percent—a pattern consistent with jumbo CD holders withdrawing funds to avoid losses.

15 In the literature, “runoff” is used loosely as a synonym for withdrawals. For this test, we define it as quarter-over-quarter percentage changes in a bank’s total dollar volume of jumbo CDs. Later, we define “simple” deposit runoff similarly.
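
A minimal sketch of the first check, reusing the avg_yield series from the earlier sketch and assuming a hypothetical failure_prob column with estimated failure probabilities, groups banks by risk each year and compares mean yields at the two tails:

import pandas as pd

df = pd.read_csv("panel_with_yields.csv")  # hypothetical file: year, avg_yield, failure_prob

spreads = {}
for year, g in df.groupby("year"):
    risky = g["failure_prob"] >= g["failure_prob"].quantile(0.95)  # 5 percent most at risk
    safe = g["failure_prob"] <= g["failure_prob"].quantile(0.05)   # 5 percent least at risk
    # high-risk minus low-risk mean yield, converted to basis points
    spreads[year] = (g.loc[risky, "avg_yield"].mean() - g.loc[safe, "avg_yield"].mean()) * 100

print(pd.Series(spreads).mean())  # average spread across years
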
As a final check, we regressed yields and runoff on failure probability and
suitable controls; the results also attested to risk pricing. The sample contained observations for all non-de-novo banks with satisfactory supervisory
ratings from 1988:Q1 to 2004:Q4.17 (Table 2 contains the results.) Both coefficients of interest were “correctly” signed and significant at the 1-percent
level, implying a rise in failure risk translated into higher yields and larger
runoff: coefficient magnitudes were economically small, but it is important
to remember that risk sensitivity is a cardinal concept whereas risk ranking is
an ordinal one. Recent back-testing of the Focus Report highlights the difference. The Focus Report is a Call-Report-based, Federal Reserve tool for
predicting the impact of a 200-basis-point interest rate shock on bank capital.
For the 1999–2002 interest rate cycle, Sierra and Yeager (2004) found that estimates of bank losses were very noisy, but risk rankings based on these losses
were quite accurate. Our criterion for assessing jumbo CD data is analogous.
16 We estimated failure probabilities with the Risk-Rank model—one of two econometric
surveillance models used by the Federal Reserve. See footnote 18 for more discussion of this
model.
17 We controlled for factors suggested by academic literature, examiner interviews, and specification tests. These factors included term-to-maturity, the rate on Treasury securities with comparable maturities, economic conditions (dummies for quarters and states in the union), power in
local deposit markets (dummy for banks operating in an MSA), access to parent-company support (dummy for banks in holding companies), and demand for funding in excess of local supply
(dummy for banks with brokered deposits). The estimation sample included only satisfactory banks
to parallel the performance tests of downgrade probability and jumbo CD rankings. Confining the
analysis to 1- and 2-rated banks may seem odd, akin to testing risk sensitivities of AA or better
corporate debt, but there are theoretical as well as practical justifications. Managers of nonregulated
firms operate with considerable latitude up to the point of bankruptcy. Bank managers, in contrast,
lose much of their discretion when an unsatisfactory rating is assigned. So market data for 3-,
4-, and 5-rated banks contain assessments of ongoing supervisory intervention as well as inherent
risk. Excluding unsatisfactory banks also produces more relevant evidence about the surveillance
value of market feedback. Supervisors continuously monitor these institutions, so market data are
unlikely to yield new information. But knowledge of deteriorating 1s and 2s would be valued
because these banks do not face constant scrutiny between exams.


Surveillance value is measured not by the precision of estimated sensitivities
to bank risk, but rather by the improvement in risk rankings traceable to jumbo
CD yields and runoff.18

3. MARKET ASSESSMENTS OF RISK: THE JUMBO CD RANKINGS

The first step in assessing the value of jumbo CD data was obtaining default
premiums for all sample banks with satisfactory supervisory ratings. We
created two measures—a “simple” and a “complex” default premium—to
reduce the likelihood that performance tests would be biased by one, possibly
poor, proxy. At the root of each measure was average yield—the ratio of jumbo
CD interest expense to average balance, computed with Call Report data for
each bank in each quarter. To convert yields into simple default premiums, we
adjusted for the average maturity of a bank’s jumbo CD portfolio. To obtain
a complex premium series, we used regression analysis to adjust yields for
maturity and nonmaturity factors likely to affect jumbo CD rates.19 Simple and complex default premiums were highly correlated, exhibiting an average year-by-year correlation coefficient of 0.88.

18 Besides measurement error, there are several idiosyncratic aspects of the jumbo CD market that might weaken risk pricing. Jumbo CD holders often receive other bank services—loan commitments and checking accounts, for example—so the issuer might price the relationship comprehensively. Another potential explanation is that many jumbo CDs are held by state or local governments and are, therefore, practically risk-free. (Most states require banks to “pledge” Treasury or agency securities against uninsured public deposits, thereby eliminating all but fraud risk.) Still another possibility is that many banks no longer fund at the margin with jumbo CDs—these instruments are now essentially core deposits because of the declining cost of commercial paper issuance and the increasing availability of Federal Home Loan Bank advances. A final, related possibility is that posted jumbo CD rates are sticky, “clustering” around integers and even fractions like retail CD rates (Kahn, Pennacchi, and Sopranzetti 1999). These market characteristics may account for modest risk sensitivities in the yield and runoff regressions. Still, evidence presented in this section suggests the data contain information about bank condition, thereby satisfying the necessary condition for jumbo CD risk rankings to add value in surveillance.

19 Default premiums were obtained with maturity and nonmaturity controls from the Call Report. The reporting convention for maturities changed in the middle of our sample. From 1989 to 1997, the FFIEC required banks to slot jumbo CDs in one of four buckets: “less than 3 months remaining,” “3 months to 1 year remaining,” “1 to 5 years remaining,” and “over 5 years remaining.” In 1997, the two longest maturity buckets became “1 to 3 years remaining” and “over 3 years remaining.” These maturity measures are crude—jumbo CDs in the shortest bucket might have been issued years ago—but they offer the only means of controlling for term structure. We produced simple premiums by first multiplying each bank’s jumbo CD balance for each maturity bucket by that quarter’s yield on Treasuries of comparable maturity. The sum of the resulting values, divided by average jumbo CD balances, approximated that bank’s risk-free yield. Simple default premiums for a quarter were then the difference between a bank’s average jumbo CD yield and its risk-free yield that quarter. Complex premiums controlled for other factors likely to affect jumbo CD demand or supply. Specifically, average yields were regressed on average jumbo CD maturity, maturity-weighted Treasury yield (the portion of a sample bank’s CDs in each maturity bucket, multiplied by that quarter’s yield on a comparable-maturity Treasury), and the same nonmaturity controls used in the data-check equations in Section 2. Regression residuals served as the complex premium series. Carefully controlling in this way for maturity and nonmaturity influences on yields should render the resulting default premium series a cleaner measure of default risk.
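
The following sketch illustrates the two constructions under the same hypothetical column-name assumptions as before (bucket balances bal_lt_3m, bal_3m_1y, bal_1y_3y, and bal_gt_3y; Treasury yields tsy_3m, tsy_1y, tsy_2y, and tsy_5y; and nonmaturity controls such as in_msa, in_holding_co, and has_brokered). It is a sketch of the approach, not the authors' code:

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("jumbo_cd_panel.csv")  # hypothetical bank-quarter panel

# Simple premium: average jumbo CD yield minus a bank-specific risk-free
# yield built from the reported maturity buckets.
buckets = {"bal_lt_3m": "tsy_3m", "bal_3m_1y": "tsy_1y",
           "bal_1y_3y": "tsy_2y", "bal_gt_3y": "tsy_5y"}
df["risk_free_yield"] = sum(df[bal] * df[tsy] for bal, tsy in buckets.items()) / df["avg_balance"]
df["simple_premium"] = df["avg_yield"] - df["risk_free_yield"]

# Complex premium: residual from a regression of average yields on maturity
# and nonmaturity controls; the residual is the part of the yield those
# controls cannot explain.
fit = smf.ols("avg_yield ~ avg_maturity + wtd_treasury_yield + in_msa"
              " + in_holding_co + has_brokered + C(state) + C(quarter)",
              data=df).fit()
df["complex_premium"] = fit.resid
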
The second step was generating a deposit-runoff series for all banks with
satisfactory ratings. When significant transaction or information frictions are
present, jumbo CD holders are apt to withdraw funds as failure probability rises
(Park and Peristiani 1998). Another reason to examine runoff is that a bank’s
demand for jumbo CDs could depend on its condition. Billett, Garfinkel,
and O’Neal (1998) and Jordan (2000) have documented a tendency for risky
banking organizations to substitute insured for uninsured deposits to escape
market discipline. If such substitution is important, escalating risk would
show up in declining jumbo CD balances rather than rising default premiums.
To explore these possibilities, we again computed two measures of runoff:
“simple” and “complex.” Simple deposit runoff was defined for each sample
bank as the quarterly percentage change in jumbo CD balances.20 The complex
series was constructed by adjusting simple runoff with the same approach used
to identify complex default premiums—that is, regressions of quarterly deposit
runoff on maturity and nonmaturity factors likely to affect jumbo CD demand
or supply. The correlation coefficient for simple and complex runoff was
35 percent, somewhat less than the correlation between simple and complex
default premiums.
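
A matching sketch for the runoff series, again with hypothetical column names and the same control set as the complex-premium regression, might look like this:

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("jumbo_cd_panel.csv").sort_values(["bank_id", "quarter"])

# Simple runoff: quarter-over-quarter percentage change in each bank's jumbo
# CD balance (positive values are growth, negative values are withdrawals).
df["simple_runoff"] = df.groupby("bank_id")["jumbo_cd_balance"].pct_change() * 100

# Complex runoff: residual from a regression of simple runoff on the maturity
# and nonmaturity controls used for the complex default premium.
fit = smf.ols("simple_runoff ~ avg_maturity + wtd_treasury_yield + in_msa"
              " + in_holding_co + has_brokered + C(state) + C(quarter)",
              data=df).fit()
df["complex_runoff"] = fit.resid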

4. THE SURVEILLANCE BENCHMARK—DOWNGRADE PROBABILITY RANKINGS
Since the 1980s, econometric models have played an important role in bank
surveillance at all three federal supervisory agencies.21 We benchmark the
performance of these models with the CAMELS-downgrade model developed by Gilbert, Meyer, and Vaughan (2002).22 This model is a probit regression estimating the likelihood that a bank with a satisfactory supervisory rating (a CAMELS 1 or 2 composite) will migrate to an unsatisfactory rating (a 3, 4, or 5 composite) in the coming eight quarters.

20 Technically, a positive number implies growth while a negative number implies runoff. To simplify, we refer to all percentage changes as runoff. By our nomenclature, a bank can experience positive or negative jumbo CD runoff.

21 Since the early 1990s, the Federal Reserve has relied on two econometric models, collectively known as SEER—the System for Estimating Examination Ratings. One model, the Risk-Rank model, exploits quarterly Call Report data to estimate the probability of failure over the next two years. The other model, the Ratings model, produces “shadow” CAMELS ratings—that is, the composite that would have been assigned had an examination been performed using the latest Call Report submission. Every quarter, analysts at the Board of Governors feed the data into the SEER models and forward the results to the Reserve Banks. The surveillance unit at each Bank, in turn, follows up on flagged institutions. The FDIC and the OCC use similar approaches in off-site monitoring of the banks they supervise (Reidhill and O’Keefe 1997).

22 The model is discussed in detail here because it is possible that in-sample performance has deteriorated since the Gilbert, Meyer, and Vaughan (2002) estimation sample ended in 1996. Such deterioration would bias performance tests in this research in favor of the jumbo CD rankings. So, we explain the rationale for the explanatory variables and present evidence of in-sample fit to make the case that the CAMELS-downgrade model is still a good benchmark for current surveillance practices.

Explanatory variables were selected
in 2000 based on a survey of prior research and interviews with safety-and-soundness examiners. Table 3 describes the independent variables, as well as
the expected relationship between each variable and downgrade risk. Table 5
contains summary statistics for these variables. Most variables are financial
performance ratios related to leverage risk, credit risk, and liquidity risk—
three risks that have consistently produced financial distress in commercial
banks (Putnam 1983; Cole and Gunther 1998).
We benchmark current surveillance procedures with a CAMELS-downgrade model. Traditionally, the most popular econometric surveillance
tool has been a failure-prediction model. But failures have been rare since
the early 1990s, preventing re-estimation of these models. Any resulting
“staleness” in coefficients could bias performance tests by compromising the
surveillance benchmark used to assess jumbo CD data. Unlike failures, migration to unsatisfactory ratings remains common, so a downgrade model can
be updated quarterly. (Table 4 contains 1992–2005 data on downgrade frequency.) Recent research confirms that a CAMELS-downgrade model would
have improved slightly over a failure-prediction model in the 1990s (Gilbert,
Meyer, and Vaughan 2002). Even more important, a downgrade model is best
suited to support current supervisory practice. Institutions with unsatisfactory ratings represent significant failure risks; supervisors watch them closely
and constantly to ensure progress toward safety and soundness. Most 1- and
2-rated banks, in contrast, are monitored between exams through quarterly
Call Report submissions. As noted, early supervisory intervention improves
chances for arresting financial deterioration. So a tool that more accurately
flags deteriorating banks with Call Report data would yield the most surveillance value. These considerations have prompted one Federal Reserve Bank
to “beta test” a CAMELS-downgrade model in routine surveillance and the
Board of Governors to add a downgrade model to the System surveillance
framework in 2006.
The CAMELS-downgrade model relies on six measures of credit risk, the
risk that borrowers will not render promised interest and principal payments.
These measures include the ratio of loans 30 to 89 days past due to total assets,
the ratio of loans over 89 days past due to total assets, the ratio of loans in
nonaccrual status to total assets, the ratio of other real estate owned to total
assets (OREO), the ratio of commercial and industrial loans to total assets,
and the ratio of residential real estate loans to total assets. High past-due
and nonaccruing loan ratios increase downgrade probability because, historically, large portions of these loans have been charged off. OREO consists
primarily of collateral seized after loan defaults, so a high OREO ratio signals
poor credit-risk management. Past due loans, nonaccruing loans, and OREO are backward looking; they register asset quality problems that have already
emerged (Morgan and Stiroh 2001). The ratio of commercial and industrial
loans to total assets is forward looking because, historically, losses on these
loans have been relatively high. The ratio of residential real estate loans to
total assets also provides a forward-looking dimension because, historically,
the loss rate on mortgages has been relatively low. Other things equal, an
increase in dependence on commercial loans or a decrease in dependence on
mortgage loans should translate into greater downgrade risk.
The model contains two measures of leverage risk—the risk that losses
will exceed capital. Measures of leverage risk include the ratio of total equity
(minus goodwill) to total assets and the ratio of net income to average assets
(or, return on assets). Return on assets is part of leverage risk because retained
earnings are an important source of capital for many banks, and higher earnings
provide a larger cushion for withstanding adverse economic shocks (Berger
1995). Increases in capital protection or earnings strength should reduce the
probability of migration to an unsatisfactory rating.
Liquidity risk, the risk that loan commitments cannot be funded or
withdrawal demands cannot be met at a reasonable cost, also figures in the
CAMELS-downgrade model. This risk is captured by two ratios: investment
securities as a percentage of total assets and jumbo CD balances as a percentage of total assets. A large stock of liquid assets, such as investment
securities, indicates a strong ability to meet unexpected funding needs and,
therefore, should reduce downgrade probability. Liquidity risk also depends
on a bank’s reliance on non-core funding, or “hot money.” Non-core funding,
which includes jumbo CDs, can be quite sensitive to changes in money market rates. Other things equal, greater reliance on jumbo CDs implies greater
likelihood of a funding runoff or an interest expense shock and, hence, a larger
risk of receiving a 3, 4, or 5 rating in a future exam.
Finally, the model uses three control variables to capture downgrade risks
not strictly associated with current financials. These controls include the natural logarithm of total assets because large banks are better able to reduce risk
by diversifying across product lines and geographic regions. As Demsetz and
Strahan (1997) have noted, however, such diversification relaxes a constraint,
enabling bankers to assume more risk, so the ex ante relationship between
asset size and downgrade probability is ambiguous. We also add a dummy
variable for 2-rated banks because they migrate to unsatisfactory status more
often than 1-rated banks. (See Table 4 for supporting evidence.) The list of
control variables rounds out with a dummy for banks with management component ratings higher (weaker) than their composite rating. In such banks,
examiners have raised questions about managerial competence, even though
problems have yet to appear in financial statements.
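
As a rough sketch, the thirteen explanatory variables might be assembled from a quarterly Call Report extract as follows. All source column names are hypothetical placeholders rather than actual Call Report item codes:

import numpy as np
import pandas as pd

cr = pd.read_csv("call_report_extract.csv")  # hypothetical raw extract
X = pd.DataFrame(index=cr.index)

# Credit risk
X["past_due_30_89"] = cr["loans_pd_30_89"] / cr["total_assets"]
X["past_due_90plus"] = cr["loans_pd_90plus"] / cr["total_assets"]
X["nonaccrual"] = cr["loans_nonaccrual"] / cr["total_assets"]
X["oreo"] = cr["other_real_estate_owned"] / cr["total_assets"]
X["ci_loans"] = cr["commercial_industrial_loans"] / cr["total_assets"]
X["res_re_loans"] = cr["residential_re_loans"] / cr["total_assets"]

# Leverage risk
X["equity_ratio"] = (cr["total_equity"] - cr["goodwill"]) / cr["total_assets"]
X["roa"] = cr["net_income"] / cr["average_assets"]

# Liquidity risk
X["securities"] = cr["investment_securities"] / cr["total_assets"]
X["jumbo_cds"] = cr["jumbo_cd_balance"] / cr["total_assets"]

# Controls
X["log_assets"] = np.log(cr["total_assets"])
X["rated_2"] = (cr["camels_composite"] == 2).astype(int)
X["weak_mgmt"] = (cr["camels_management"] > cr["camels_composite"]).astype(int)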


We estimated the CAMELS-downgrade model for 13 overlapping two-year windows running from 1990–1991 to 2002–2003.23 Each equation regressed downgrade incidence (1 = downgraded, 0 = not downgraded) in years
t + 1 and t + 2 on accounting and supervisory data for banks with satisfactory ratings in the fourth quarter of year t. For example, to produce the first
equation (1990–1991 in Table 6), downgrade incidence in 1990–1991 was
regressed on 1989:Q4 data for all 1- and 2-rated banks that were not de novos.
We continued with this timing convention, estimating equations year by year,
through a regression of downgrade incidence in 2002–2003 on 2001:Q4 data.
Observations ranged from 6,367 (2002–2003 equation) to 8,682 (1995–1996
equation); the count varied because bank mergers and supervisory reassessments altered the number of satisfactory institutions over the estimation period.
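
A minimal sketch of this rolling estimation, reusing the hypothetical variable names from the previous sketch and assuming a downgraded_t1_t2 flag equal to one if the bank was downgraded in years t+1 or t+2, could use statsmodels' probit routine (which stands in here for whatever estimation software the authors used):

import pandas as pd
import statsmodels.api as sm

panel = pd.read_csv("downgrade_panel.csv")  # hypothetical: year, the 13 ratios, downgrade flag

features = ["past_due_30_89", "past_due_90plus", "nonaccrual", "oreo", "ci_loans",
            "res_re_loans", "equity_ratio", "roa", "securities", "jumbo_cds",
            "log_assets", "rated_2", "weak_mgmt"]

models = {}
for t in range(1989, 2002):  # fourth-quarter data for year t, downgrades in t+1 and t+2
    sample = panel[panel["year"] == t].dropna(subset=features + ["downgraded_t1_t2"])
    y = sample["downgraded_t1_t2"]
    X = sm.add_constant(sample[features])
    models[(t + 1, t + 2)] = sm.Probit(y, X).fit(disp=0)
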
The model fit the data relatively well throughout the estimation sample.
(Table 6 contains the results.)24 The hypothesis that model coefficients jointly
equaled zero could be rejected at the 1 percent level for all 13 equations. The
pseudo-R2, the approximate proportion of variance in downgrade/no downgrade status explained by the model, was in line with numbers in prior early
warning studies—ranging from 15.0 percent (1994–1995 equation) to 22.6
percent (1991–1992 equation). Estimated coefficients for seven explanatory
variables—the jumbo-CD-to-total-asset ratio, the past due and nonaccruing
loan ratios, the net-income-to-total-asset ratio, and the two supervisor rating dummies—were statistically significant with expected signs in all 13
equations. The coefficient on the logarithm of total assets had a mixed-sign
pattern, which is not surprising given ex ante ambiguity about the relationship
between size and risk. The coefficients on the other six explanatory variables
were statistically significant with the expected sign in at least three equations.

23 Gilbert, Meyer, and Vaughan (2002) estimated the model for six windows running from 1990–1991 to 1995–1996. We re-estimated the model for these windows because Call Report data have since been revised, which implies slight changes in coefficients. We also wanted to use a consistent approach and consistent data for the entire estimation sample to ensure that subsequent out-of-sample tests of jumbo CD data were not biased against the surveillance benchmark.

24 This table presents the results of probit regressions of downgrade status on financial-performance ratios and control variables. The dependent variable equals “1” for a downgrade and “0” for no downgrade in calendar years t + 1 and t + 2. Values for independent variables are taken from the fourth quarter of year t. Standard errors appear in parentheses below the coefficients. One asterisk denotes statistical significance at the 10-percent level, two at the 5-percent level, and three at the 1-percent level. The pseudo-R2 indicates the approximate proportion of variance in downgrade status explained by the model. Overall, the downgrade-prediction model fit the data well. For all 13 regressions, the hypothesis that all model coefficients equal zero could be rejected at the 1-percent level of significance. In addition, eight of the 13 regression variables are significant with the predicted sign in all 13 equations, and all variables were significant in at least some years.

Comparing out-of-sample performance of jumbo CD and downgrade probability rankings is not as biased as it may first appear. True, jumbo CD rankings draw on one variable—either default premiums or deposit runoffs—while the downgrade probability rankings draw on 13 variables. But theory suggests premiums and runoff should summarize overall bank risk, not just one type
of exposure such as leverage or credit risk. Put another way, jumbo CD holders should sift through all available information about the condition of the
issuing bank, note any changes in expected losses, and react to heightened
exposures by demanding higher yields or withdrawing funds. This process
should impound all relevant information—financial as well as anecdotal—into
default premiums and deposit runoff just as the econometric model impounds
all relevant Call Report data into a CAMELS-downgrade probability.

5. ASSESSING OUT-OF-SAMPLE PERFORMANCE: POWER CURVE AREAS
We assessed out-of-sample performance using both Type 1 and Type 2 error
rates. Both forecast errors are costly. A missed downgrade to unsatisfactory
status—Type 1 error—is costly because accurate downgrade predictions give
supervisors more warning about emerging problems. A predicted downgrade
that does not materialize—Type 2 error—is costly because unwarranted supervisory intervention wastes scarce examiner resources and disrupts bank operations. A tradeoff exists between the two errors—supervisors could eliminate
overprediction of downgrades by assuming no banks are at risk of receiving
an unsatisfactory rating in the next two years.
For each risk ranking, it is possible to draw a power curve indicating
the minimum achievable Type 1 error rate for any desired Type 2 error rate
(Cole, Cornyn, and Gunther 1995). For example, tracing the curve for simple
default premium rankings starts by assuming no sample bank is a downgrade
risk. This assumption implies all subsequent downgrades are surprises—a
100 percent Type 1 error rate. Because no banks are incorrectly classified as
downgrade risks, the Type 2 error rate is zero. The next point on the curve
is obtained by selecting the bank with the highest simple default premium
(maturity-adjusted spread over Treasury). If that bank suffers a downgrade in
the following eight quarters, then the Type 1 error rate decreases slightly. The
Type 2 error rate remains zero because, again, no institutions are incorrectly
classified as downgrade risks. If the selected bank does not suffer a downgrade,
then the Type 1 error rate remains 100 percent, and the Type 2 error rate
increases slightly. Selecting banks from highest to lowest default premium
and recalculating error rates each time produces a power curve. At the lower
right extreme of the curve, all banks are considered downgrade risks—the
Type 1 error rate is 0 percent, and the Type 2 error rate is 100 percent. Figure
1 illustrates with the power curves for downgrade probability and jumbo CD
rankings for the 1992–1993 test window.
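
The tracing procedure just described reduces to a short calculation. In the sketch below (a hypothetical helper, not code from the article), risk_score is any ranking variable, whether a default premium, a runoff measure, or an estimated downgrade probability, and downgraded flags actual downgrades in the test window:

import numpy as np

def power_curve_area(risk_score, downgraded):
    # Flag banks one at a time from riskiest to least risky and track errors.
    order = np.argsort(-np.asarray(risk_score, dtype=float))
    d = np.asarray(downgraded, dtype=float)[order]
    n_down = d.sum()                    # banks actually downgraded
    n_ok = len(d) - n_down              # banks not downgraded
    type1 = 1 - np.cumsum(d) / n_down   # share of downgrades still missed
    type2 = np.cumsum(1 - d) / n_ok     # share of non-downgrades wrongly flagged
    # Prepend the starting point where no bank is flagged: Type 1 = 100%, Type 2 = 0%.
    type1 = np.concatenate(([1.0], type1))
    type2 = np.concatenate(([0.0], type2))
    return np.trapz(type1, type2)       # area as a share of the unit box

A value near 0.5 corresponds to random ranking; smaller values indicate more accurate forecasts.
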
Areas under power curves provide a basis for comparing out-of-sample
performance across risk rankings. The area for each ranking is expressed as a
percentage of the total area of the box. A smaller percentage implies a lower overall Type 1 and Type 2 error rate and, hence, a more accurate forecast.

Figure 1 How Well Do Jumbo CD and Downgrade Probability Rankings Perform Out-of-Sample? 1992–1993 Test Window

[Figure 1 plots power curves—the Type 1 error rate (vertical axis, percent) against the Type 2 error rate (horizontal axis, percent)—for each risk ranking in the 1992–1993 test window. Areas under the curves: downgrade probability rankings, 20.20 percent; simple default premium rankings, 47.56 percent; simple runoff rankings, 49.54 percent; random rankings, 50 percent; complex default premium rankings, 52.20 percent; complex runoff rankings, 53.08 percent.]

Notes: This figure depicts the power curves for risk rankings based on jumbo CD default premiums, jumbo CD runoff, and downgrade probabilities (as produced by the CAMELS-downgrade model) for the 1992–1993 out-of-sample test window. These curves reflect the tradeoff between Type 1 and Type 2 errors. (Type 1 errors are missed downgrades, and Type 2 errors are overpredicted downgrades.) A convenient way to compare the performance of risk rankings is to calculate the area under the power curve for each ranking and express that area as a percentage of the total for the box. Smaller areas are desired because they imply a simultaneous reduction in both types of errors. The 50-percent line is the power curve produced when downgrade risks are selected randomly over a large number of trials. The power curves above show that rankings based on downgrade probabilities would have significantly outperformed rankings based on jumbo CD default premiums and runoff. Indeed, jumbo CD rankings would not have improved materially over random rankings.
The area for a “random-ranking” power curve offers an example as well as a
yardstick for evaluating the economic significance of differences in forecast
accuracy. Random selection of downgrade candidates, over a large number
of trials, will produce power curves with an average slope of negative one.
Put another way, the area under the random-ranking power curves, on average, equals 50 percent of the total area of the box. Power curve areas can be compared—jumbo CD rankings against downgrade probability rankings, or either ranking against a random ranking—for any error rate. Assessing forecast
accuracy this way, though somewhat atheoretical, makes best use of existing
data. A more appealing approach would minimize a loss function explicitly
weighing the benefits of early warning about financial distress against the costs
of wasted examination resources and unnecessary regulatory burden. Then,
the relative performance of risk rankings could be assessed for the optimal
Type 1 (or Type 2) error rate. The requisite data, however, are not available.
A specific example will clarify the mechanics of the “horse race” we run for
risk rankings. To assess the surveillance value of simple default premiums for
1992–1993, we start by assuming it is early 1992, just after fourth quarter 1991
data became available. In accordance with standard surveillance procedures,
1990–1991 downgrade incidences are regressed on 1989:Q4 data for the 13
explanatory variables in the CAMELS-downgrade model. Model coefficients
are then applied to 1991:Q4 data to estimate the probability that each 1- and
2-rated bank will migrate to an unsatisfactory condition between 1992:Q1
and 1993:Q4. These banks are then ranked from highest to lowest downgrade
probability. At the same time, all banks with satisfactory supervisory ratings
are ranked from highest to lowest simple default premium (maturity-adjusted
spread over Treasury), also using 1991:Q4 data, under the assumption that high
spreads map into high downgrade probabilities. After two years, the record
of missed and overpredicted downgrades is compiled to generate power curve
areas for each ranking. A smaller area for the downgrade probability ranking than for the simple default premium ranking would imply that simple default premiums added no surveillance value in the 1992–1993 test window.
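
Under the same hypothetical names used in the earlier sketches (the panel data set, the features list, the simple_premium and simple_runoff columns, and the power_curve_area helper), one pass of this horse race might be coded roughly as follows:

import pandas as pd
import statsmodels.api as sm

areas = []
for t in range(1991, 2004):                # rank with year-t Q4 data; evaluate over t+1 and t+2
    train = panel[panel["year"] == t - 2]  # e.g., 1989:Q4 data with 1990-1991 downgrade outcomes
    test = panel[panel["year"] == t]       # satisfactory banks as of year-t Q4

    fit = sm.Probit(train["downgraded_t1_t2"],
                    sm.add_constant(train[features])).fit(disp=0)
    prob = fit.predict(sm.add_constant(test[features]))

    outcome = test["downgraded_t1_t2"]     # realized downgrades in t+1 and t+2
    areas.append({
        "window": f"{t + 1}-{t + 2}",
        "downgrade_model": power_curve_area(prob, outcome),
        "simple_premium": power_curve_area(test["simple_premium"], outcome),
        # larger withdrawals (more negative runoff) signal more risk
        "simple_runoff": power_curve_area(-test["simple_runoff"], outcome),
    })

print(pd.DataFrame(areas).mean(numeric_only=True))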

6. EMPIRICAL EVIDENCE

Downgrade Model Rankings and Jumbo CD Rankings—Full-Sample Results
The evidence suggests jumbo CD default premiums would have contributed
nothing to bank surveillance between 1992 and 2005 when used to forecast
downgrades two years out. (Columns 2, 3, and 4 of Table 7 contain the
relevant power curve areas.) Over the 13 test windows, the average area
under the simple default premium power curve (45.63 percent) and the average
area under the complex default premium power curve (49.70 percent) did not
differ statistically or economically from the random-ranking benchmark (50
percent). In contrast, the average area under the downgrade model power curve
(19.66 percent) came to less than half of that benchmark. Power curve areas
for individual two-year test windows showed the same patterns. Specifically,
downgrade model areas ranged from 15.24 percent (1996–1997) to 22.39
percent (1994–1995); simple default premium areas ran from 41.56 percent
(2003–2004) to 50.57 percent (1994–1995); and complex default premium areas varied from 45.69 percent (1998–1999) to 52.20 percent (1992–1993).
The poor performance of jumbo CD rankings relative to downgrade model
rankings suggests default premiums would not have flagged banks missed
by conventional surveillance. The poor performance relative to the random-selection benchmark suggests default premiums would not have increased
supervisor confidence about rankings produced by the CAMELS-downgrade
model.
Out-of-sample performance of risk rankings based on jumbo CD runoff
was no better. (Columns 5 and 6 of Table 7 contain the relevant power curve
areas.) The average area for simple runoff rankings across all test windows
was 46.12 percent while the average for complex runoff was 50.47 percent—
again, statistically and economically indistinguishable from random selection.
And once again, patterns were consistent across individual two-year test windows. Power curve areas for simple runoff rankings varied from 43.56 percent
(1999–2000) to 49.54 percent (1992–1993), and areas under complex runoff curves
from 45.75 percent (1998–1999) to 53.08 percent (1992–1993). This consistently poor performance suggests runoff rankings would not have helped spot
downgrade risks two years out between 1992 and 2005.
Changing forecast horizons did not alter the results. Over 13 one-year
windows, downgrade model rankings produced an average power curve area
of 17.37 percent (standard deviation across test windows of 2.39 percent).
In contrast, simple default premium rankings produced an average area of
45.38 percent (standard deviation of 2.95 percent) and complex default premium rankings, an average area of 50.29 percent (standard deviation of 3.26
percent). Areas for runoff rankings were even closer to the random-ranking
benchmark—46.79 percent on average for simple runoff (standard deviation
across the 13 windows of 2.47 percent) and 50.87 percent for complex runoff
(standard deviation of 3.45 percent). Lengthening the forecast horizon to
three years yielded similar numbers. This evidence goes to the timeliness
of information in jumbo CD rankings. As noted, market data could enhance
surveillance by flagging problems before existing tools. But, between 1992
and 2005, jumbo CD data would not have improved over random selection
at any forecasting horizon, much less current surveillance procedures. Put
another way, feedback from the jumbo CD market would not have provided
supervisors with earlier warning about developing problems.
Jumbo CD rankings constructed from both default premiums and runoff
did not improve over random selection, either. In theory, price and quantity
signals from the jumbo CD market, though weak when used singly, could
jointly capture useful information about future bank condition. If so, a model
relying only on multiple signals could add value—even if performance relative to the benchmark was poor—by reducing supervisor uncertainty about
banks flagged by conventional surveillance. We explored this possibility by
estimating a downgrade model with (1) only simple default premiums and runoff as explanatory variables and (2) only complex default premiums and
runoff as explanatory variables. Out-of-sample performance was then tested
over a variety of forecasting horizons for 1992–2005. Columns 7 and 8 in
Table 7 contain the results for two-year windows; they are representative.
Over all 13 tests, the power curve area for the bivariate “simple” model averaged 45.55 percent (standard deviation across individual windows of 2.49
percent) and 50.05 percent for the bivariate “complex” model (standard deviation of 2.99 percent). Further perspective can be gained by comparing these
numbers to power curve areas produced by a “pared down” model including
only the dummy variables in the baseline CAMELS-downgrade model. The
average power curve area for this model across the 13 two-year windows was
30.07 percent. Taken together, this evidence suggests jumbo CD data would
not have reduced supervisory uncertainty about banks flagged by conventional
surveillance tools.

Downgrade Model Rankings and Jumbo CD Rankings—Sub-Sample Results
Although jumbo CD risk rankings would not have contributed to general
surveillance of 1- and 2-rated banks, default premiums and runoff might improve monitoring of specific cohorts such as banks with short-maturity jumbo CD portfolios, large asset portfolios, no foreign deposits, low capital ratios, or
significant “deposits at risk.”
The marginal-average problem noted earlier could in part account for the
weak performance of default premium rankings. As an arithmetic matter,
today’s average yield will be more representative of today’s risk levels if
jumbo CD maturities are short. To explore this possibility, we replicated all
out-of-sample tests described earlier in Section 6 on a sub-sample of banks
with weighted-average portfolio maturities under six months. The results did
not change. At the two-year horizon, for example, the average area under
complex default premium power curves was 42.97 percent. At the one-year
horizon, the average area was 42.30 percent. Put simply, long jumbo CD
maturities do not account for the poor performance of default premiums.
The jumbo CD market might emit stronger risk signals for large, complex banking organizations. Jumbo CDs at community banks may be more
like core deposits than money market instruments. And because prices and
quantities of core deposits are known to be sticky (Flannery 1982), yields and
runoff of community bank jumbos could respond sluggishly to changes in
risk no matter how short the maturity of the portfolio. Another reason large
bank signals may be more informative is that monitoring costs for their uninsured depositors are lower—these institutions have publicly traded securities
and are closely followed by market analysts. To test for an asset-threshold
effect, we reproduced all out-of-sample tests from Section 6 on a sub-sample of banks holding more than the median level of assets. Out-of-sample results for this sub-sample were qualitatively similar to the results from the full
sample. Across the 13 two-year test windows, for example, the area under
the simple default premium power curves averaged 44.90 percent (compared
with 45.63 percent for the full sample). We also tested risk rankings for banks
holding more than $1 billion in assets and for banks with SEC registrations.
Each time, we compared results from the large bank sub-sample with results
from the remaining sub-sample (i.e., banks holding less than median assets,
banks holding less than $1 billion in assets, and banks with no SEC registration), looking for performance differences across size cohorts. Size-split
evidence was consistent: for large as well as community banks, the CAMELSdowngrade model proved to be the far superior surveillance tool, and rankings
based on default premiums and runoff barely improved over random rankings.
Jumbo CD default premiums and runoff might improve off-site monitoring
of banks with no foreign deposits. The National Depositor Preference Act of
1993 elevated claims of domestic depositors over claims of foreign depositors,
reducing expected losses for jumbo CD holders (Marino and Bennett 1999).
Domestic holders of jumbo CDs issued by banks with foreign offices may have
perceived no default-risk exposure because of the financial cushion provided
by foreign deposits. To test for a depositor-preference effect, we screened
out banks with foreign deposits and replicated all out-of-sample tests. Again,
the results mirrored the full-sample results; for example, for the two-year test
windows, the average power curve area under the simple default premium
rankings was 45.70 percent, virtually unchanged from the full sample (45.63
percent). Even for banks with no foreign-deposit cushion, jumbo CD rankings
contained no useful supervisory information.
Finally, the jumbo CD market might yield clues about emerging problems
in banks with high levels of uninsured deposits or low levels of capital. In
theory, jumbo CD holders with more exposure—either because their uninsured
balances are high or bank capital levels are low—have greater incentive to
monitor and discipline risk. So we produced rankings for the quartile of
sample banks with the largest volume of “deposits at risk” and the quartile
with the lowest ratios of equity-to-assets (adjusted for bank size). Again,
default premium and runoff rankings did not improve over random selection,
much less conventional surveillance. As a final check, we looked at various
intersections of the sub-samples—banks with high deposits at risk and low
capital, banks with no foreign deposits and short jumbo CD maturities, etc. We
generated rankings based on default premiums, deposit runoff, and both default
premiums and deposit runoff. The results across all tests were consistent—
jumbo CD rankings did not improve materially over random rankings at any
forecast horizon.


Default Premiums and Runoff as Regressors in the Downgrade Model
Although default premiums and runoff perform poorly as independent risk
signals, they could add value as regressors in the CAMELS-downgrade model.
Indeed, previous research has identified surveillance ratios with this property
(Gilbert, Meyer, and Vaughan 1999). To pursue this angle, we estimated an
“enhanced” CAMELS-downgrade model, adding both simple and complex
measures of premiums and runoff to the 13 baseline explanatory variables.
As before, out-of-sample performance was gauged by impact on power curve
areas—first when default premiums and runoff were added to the baseline
model and then when these variables were dropped from the enhanced model.
As a further check, we assessed performance with the quadratic probability
score (QPS)—a probit analogue for root mean square error (Estrella 1998;
Estrella and Mishkin 1998).25 If default premiums and runoff enhance the
CAMELS-downgrade model, removing them from the enhanced model will
boost QPS. Columns 9 and 10 in Table 7 contain power curve areas for the
simple and complex enhancements of the downgrade model. Column 2 of
panel A in Table 8 notes the impact of the two simple jumbo CD series on
power curve areas; column 2 of panel B in Table 8 shows the impact of the series
on QPS. (Results for complex default premiums and runoff are not reported
because they mirror results for the simple series.) To facilitate interpretation,
we note the impact on QPS and power curve areas of other variable blocks—
such as the leverage-risk variables (equity-to-asset ratio and return on assets)
and control variables (log of total assets, dummy for composite rating of 2,
and dummy for management component rating weaker than the composite
rating)—in columns 3 through 6 of panels A and B in Table 8, where changes in QPS and power curve areas are expressed in percentage-change terms to permit direct comparison.
In performance tests for 1992–2005, default premiums and runoff did
not enhance the CAMELS-downgrade model. Adding simple versions of the
series increased (worsened) average power curve area by 4.17 percent (0.82
percentage points, from 19.66 percent for the baseline model to 20.48 percent).
Removing these series from the enhanced downgrade model improved performance slightly by the power curve metric (reduced average power curve area by
0.26 percent) and worsened performance even more slightly by the QPS metric
(increased average QPS by 0.06 percent). The leverage-risk variables provide some perspective on the economic significance of these changes—dropping both of them increased the power curve area (worsened performance) by an average of 5.20 percent and increased the average QPS (worsened performance) by an average of 2.03 percent. These results held up in tests on the various sample cuts and forecasting horizons described in the previous subsections.
25 To obtain QPS, we first computed downgrade probability for each sample bank with the
CAMELS-downgrade model. Then, we subtracted Rt—a binary variable equal to one if the bank
was downgraded in the out-of-sample window and zero if not—from the downgrade probability
estimate. Finally, we squared the difference, multiplied the result by two, and averaged across
all sample banks. An ideal model generates probabilities close to unity for banks with subsequent
downgrades and probabilities close to zero for non-downgrades, so higher QPS figures imply weaker
out-of-sample performance, just as higher power curve areas imply weaker performance.
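
Footnote 25 maps directly into a short computation. The sketch below is a minimal illustration in Python; the function and variable names are hypothetical, and it simply follows the verbal recipe in the footnote rather than reproducing the authors' code.

```python
import numpy as np

def quadratic_probability_score(downgrade_prob, downgraded):
    """Quadratic probability score (QPS) as described in footnote 25.

    downgrade_prob -- model-estimated downgrade probabilities
    downgraded     -- 1 if the bank was downgraded in the out-of-sample
                      window, 0 otherwise
    Lower values indicate better out-of-sample performance.
    """
    p = np.asarray(downgrade_prob, dtype=float)
    r = np.asarray(downgraded, dtype=float)
    # Square the difference, multiply by two, and average across banks.
    return np.mean(2.0 * (p - r) ** 2)

# Example with three banks, one of which was actually downgraded:
print(quadratic_probability_score([0.10, 0.70, 0.05], [0, 1, 0]))  # about 0.068
```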

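The power curve areas used throughout these performance tests are defined earlier in the article. The sketch below is only one construction consistent with the properties cited here (an uninformative ranking yields an area near 50 percent, and smaller areas imply lower Type 1 and Type 2 error rates); it is an illustration with hypothetical names, not the authors' exact metric.

```python
import numpy as np

def power_curve_area(risk_score, downgraded):
    """Area under a curve that plots the share of downgrades missed (a
    Type 1 error rate) against the share of banks flagged as risky.
    A random ranking gives an area near 0.5; smaller is better."""
    score = np.asarray(risk_score, dtype=float)
    outcome = np.asarray(downgraded, dtype=int)
    order = np.argsort(-score)                     # riskiest banks first
    hits = np.cumsum(outcome[order])               # downgrades caught so far
    missed = 1.0 - np.concatenate(([0.0], hits)) / outcome.sum()
    flagged = np.arange(outcome.size + 1) / outcome.size
    # Trapezoid rule over the (flagged, missed) points.
    return float(np.sum(0.5 * (missed[1:] + missed[:-1]) * np.diff(flagged)))

# A ranking that places both eventual downgrades at the top scores far
# below 0.5; a random ranking hovers around 0.5.
outcomes = np.array([1, 1, 0, 0, 0, 0, 0, 0, 0, 0])
print(power_curve_area(np.arange(10, 0, -1), outcomes))                # about 0.10
print(power_curve_area(np.random.default_rng(0).random(10), outcomes))
```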

7. DISCUSSION

It is possible that some combination of measurement error and idiosyncrasies
in the jumbo CD market accounts for our results. These factors may not be
important enough to remove all evidence of risk pricing from jumbo CD data,
but they may be important enough to prevent risk rankings based on the data
from imparting valuable surveillance information.
But data problems and market frictions are unlikely to explain away the
findings. As noted, recent studies using actual debt and equity market data
rather than accounting proxies have found only modest surveillance value
in market signals. Rather, the economic environment since the early 1990s
probably plays an important role. Over this period, bank profitability and
capital ratios soared to record highs. Some economists attribute these trends
to an unprecedented economic boom that allowed banks to reap the upside
of expansions into risky new markets and product lines (Berger et al. 2000).
Others argue that stakeholders of large complex banking organizations insisted on greater capital cushions because of increasingly sophisticated risk
exposures (Flannery and Rangan 2002). In such a high-profit, high-capital
environment, jumbo CD signals—no matter how accurately measured or precisely determined—would convey little information because the benefits of
monitoring are so low. Such an explanation would account for the successful
use of average yields in bank-risk studies on data from the 1980s—a time
when financial distress was fairly common and failures were sharply rising.
Such an explanation would also account for the evidence in Martinez-Peria
and Schmukler (2001). With a data set and research strategy similar to ours,
they studied the impact of banking crises on market discipline in Argentina,
Chile, and Mexico, finding little discipline before, but significant discipline
after, the crises.

8. CONCLUSION

The evidence suggests that feedback from the jumbo CD market would have
added no value in bank surveillance between 1992 and 2005. Throughout
this period, risk rankings produced by the CAMELS-downgrade model—a model chosen to benchmark current surveillance practices—would have significantly
outperformed risk rankings based on default premiums and runoff. Moreover, jumbo CD rankings would have improved little over random orderings.
Finally, adding jumbo CD signals to the downgrade model would not have
improved its out-of-sample performance. These results hold up for a variety
of sample cuts and forecast horizons. Taken together, these results imply that
the marginal surveillance value of jumbo CD signals is less than the marginal
production cost—even if that cost is very low.
Our results carry mixed implications for proposals to incorporate market
data more formally into bank supervision. On the one hand, the evidence suggests available jumbo CD data would do little to enhance surveillance, thereby
clearing the way for experimentation with other, “purer” market signals. On
the other hand, if the “unique sample period” explanation for our results is
true, then it is likely the surveillance value of signals from the market for bank
debt and equity will vary over time. Other things equal, such time variation
would lower the net benefit of integrating market data into current surveillance routines. Interpreted in this light, our findings imply that future policy
and research work on market data should focus on identifying the specific
bank claims that yield the most surveillance value in each state of the business cycle. Put another way, our findings—when viewed with other recent
research—suggest the supervisory return from reliance on a single market
signal through all states of the world may have been overestimated.

Table 1 Do Prior Studies Point to Risk Pricing in the Jumbo CD Market?

Authors                               Issuer of   Country                    Sample            Yield or   Risk
                                      Jumbo CD                               Dates             Runoff?    Pricing?
Crane (1976)                          Bank        United States              1974              Yield      Somewhat
Goldberg & Lloyd-Davies (1985)        Bank        United States              1976–1982         Yield      Yes
Baer & Brewer (1986)                  Bank        United States              1979–1982         Yield      Yes
Hannan & Hanweck (1988)               Bank        United States              1985              Yield      Yes
James (1988)                          Bank        United States              1984–1986         Yield      Yes
Cargill (1989)                        Bank        United States              1984–1986         Yield      Yes
James (1990)                          Bank        United States              1986–1987         Yield      Yes
Keeley (1990)                         Bank        United States              1984–1986         Yield      Yes
Ellis & Flannery (1992)               Bank        United States              1982–1988         Yield      Yes
Cook & Spellman (1994)                Thrift      United States              1987–1988         Yield      Yes
Crabbe & Post (1994)                  Bank        United States              1986–1991         Runoff     No
Brewer & Mondschean (1994)            Thrift      United States              1987–1989         Yield      Yes
Park (1995)                           Bank        United States              1985–1992         Both       Yes
Park & Peristiani (1998)              Thrift      United States              1987–1991         Both       Yes
Jordan (2000)                         Bank        United States              1989–1995         Both       Yes
Martinez-Peria & Schmukler (2001)     Bank        Argentina, Chile, Mexico   1981–1997         Both       Yes
Goldberg & Hudgins (2002)             Thrift      United States              1984–1994         Runoff     Yes
Birchler & Maechler (2002)            Bank        Switzerland                1987–2001         Runoff     Yes
Maechler & McDill (2003)              Bank        United States              1987–2000         Runoff     Yes
Hall, King, Meyer, & Vaughan (2005)   Bank        United States              1988–90, 1993–95  Both       Yes

Notes: This table summarizes the literature on risk pricing by jumbo CD holders. (“Bank” refers to commercial banks, bank holding companies, and thrift institutions. “Risk pricing” refers to price or quantity responses to a change in bank condition.) These studies used both cross-section and time-series techniques along with a variety of risk proxies and control variables. The weight of the evidence indicates bank condition is priced, suggesting that jumbo CD data might add value in off-site surveillance.


Table 2 Do Jumbo CD Data Contain Evidence of Risk Pricing? Evidence from Regressions of Yields and Runoff on Failure Probabilities

Sensitivity of Jumbo CD Yields and Runoff to Failure Probability, 1988:Q1–2004:Q4

                                            Dependent Variable:        Dependent Variable:
                                            Yields                     Runoff
Independent Variable                        Coefficient (Std. Error)   Coefficient (Std. Error)
Failure Probability (Lagged One Quarter)     0.0108*** (0.0038)        -0.3309*** (0.0505)
Maturity-Weighted Treasury Yield             0.6878*** (0.0391)         0.5324 (0.5211)
Average Portfolio Maturity                   0.0820*** (0.0209)        -1.8743*** (0.2785)
Maturity-Treasury Interactive               -0.0140*** (0.0147)        -0.3408* (0.1959)
Holding Company Dummy                       -0.0595 (0.0136)           -0.9042*** (0.1808)
Brokered Deposit Dummy                       0.1403*** (0.0270)        -0.0868 (0.3592)
MSA Dummy                                    0.1384*** (0.0913)         1.7813 (1.2164)
R2                                           0.2288                     0.0125
F-statistic Control Variables              394.16***                   33.25***
F-statistic Time Dummies                   155.69***                   21.88***
F-statistic State Dummies                   21.53***                   15.21***
Observations                               229,486                    229,486

Notes: This table reports results for regressions of jumbo CD yields and runoff on failure probabilities and various controls. Equations were estimated on a sub-sample of banks with satisfactory CAMELS ratings. Asterisks denote statistical significance at the 10 (*), 5 (**), and 1 (***) percent levels. A positive and significant failure-probability coefficient in the yield equation and/or a negative and significant coefficient in the runoff equation constitute evidence that bank condition is priced. The results indicate that greater risk of failure translates, on average, into higher yields and larger runoff, suggesting that jumbo CD data may have surveillance value in our sample.
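
To make the specification behind Table 2 concrete, the sketch below estimates the same kind of equation on synthetic data. Every data-frame and column name is hypothetical, and the simulated coefficients are illustrative only; the point is the functional form (yields or runoff regressed on the lagged failure probability, the listed controls, and time and state dummies), not the authors' data or code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic bank-quarter panel standing in for the call-report sample.
rng = np.random.default_rng(42)
n = 400
df = pd.DataFrame({
    "failure_prob_lag1": rng.uniform(0, 5, n),   # failure probability, lagged one quarter
    "treasury_yield": rng.uniform(3, 7, n),      # maturity-weighted Treasury yield
    "avg_maturity": rng.uniform(0.5, 3.0, n),    # average portfolio maturity
    "holding_co": rng.integers(0, 2, n),         # holding company dummy
    "brokered": rng.integers(0, 2, n),           # brokered deposit dummy
    "msa": rng.integers(0, 2, n),                # MSA dummy
    "quarter": rng.integers(1, 9, n),            # time dummies
    "state": rng.integers(1, 6, n),              # state dummies
})
df["maturity_treasury"] = df["avg_maturity"] * df["treasury_yield"]
# Simulated outcomes: yields rise, and runoff falls, with failure risk.
df["jumbo_cd_yield"] = 0.7 * df["treasury_yield"] + 0.01 * df["failure_prob_lag1"] + rng.normal(0, 0.2, n)
df["jumbo_cd_runoff"] = -0.3 * df["failure_prob_lag1"] + rng.normal(0, 5.0, n)

rhs = ("failure_prob_lag1 + treasury_yield + avg_maturity + maturity_treasury"
       " + holding_co + brokered + msa + C(quarter) + C(state)")
yield_fit = smf.ols("jumbo_cd_yield ~ " + rhs, data=df).fit()
runoff_fit = smf.ols("jumbo_cd_runoff ~ " + rhs, data=df).fit()
print(yield_fit.params["failure_prob_lag1"])    # positive, as in the yield column
print(runoff_fit.params["failure_prob_lag1"])   # negative, as in the runoff column
```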

Table 3 Which Variables Predict Migration to an Unsatisfactory CAMELS Rating?

                 Independent Variables                                       Symbol         Impact on Downgrade Risk
Credit Risk      Loans past due 30-89 days (% of total assets)               PAST DUE 30    +
                 Loans past due over 89 days (% of total assets)             PAST DUE 90    +
                 Loans in nonaccrual status (% of total assets)              NONACCRUING    +
                 “Other real estate owned” (% of total assets)               OREO           +
                 Commercial & industrial loans (% of total assets)           COMMERCIAL     +
                 Residential real estate loans (% of total assets)           RESIDENTIAL    −
Leverage Risk    Equity capital minus goodwill (% of total assets)           NET WORTH      −
                 Net income (% of average assets)                            ROA            −
Liquidity Risk   Book value of investments (% of total assets)               SECURITIES     −
                 Time deposits over $100,000 (% of total assets)             JUMBO CDs      +
Controls         Natural logarithm of total assets (thousands of dollars)    SIZE           Ambiguous
                 Dummy for banks with composite CAMELS rating = 2            CAMELS-2       +
                 Dummy for banks with management rating > composite rating   MANAGEMENT     +

Notes: This table lists the independent variables in the CAMELS-downgrade model. Signs note the hypothesized relationship between each variable and the likelihood of downgrade from a satisfactory (CAMELS 1 or 2 composite) to an unsatisfactory rating (CAMELS 3, 4, or 5 composite). For example, the negative sign on the NET WORTH variable indicates that, other things equal, higher capital levels reduce the likelihood of migration to an unsatisfactory rating over the next two years.
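
The CAMELS-downgrade model regresses a two-year downgrade indicator on the thirteen variables above. The sketch below is a minimal illustration, assuming a probit link (consistent with the text's description of QPS as a probit analogue of root mean square error) and using synthetic data with hypothetical column names; the authors' estimation details may differ.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic year-end observations on 1- and 2-rated banks.
rng = np.random.default_rng(7)
n = 2000
df = pd.DataFrame({
    "past_due_30": rng.gamma(2.0, 0.4, n),
    "past_due_90": rng.gamma(1.0, 0.2, n),
    "nonaccruing": rng.gamma(1.0, 0.3, n),
    "oreo": rng.gamma(1.0, 0.2, n),
    "commercial": rng.uniform(0, 30, n),
    "residential": rng.uniform(0, 40, n),
    "net_worth": rng.uniform(6, 15, n),
    "roa": rng.normal(1.2, 1.0, n),
    "securities": rng.uniform(5, 60, n),
    "jumbo_cds": rng.uniform(0, 25, n),
    "size": rng.normal(11, 1.3, n),
    "camels_2": rng.integers(0, 2, n),
    "management": rng.integers(0, 2, n),
})
# Simulated downgrade indicator with the hypothesized signs from Table 3.
index = (-2.0 + 0.2 * df["past_due_30"] + 0.3 * df["nonaccruing"]
         - 0.05 * df["net_worth"] - 0.2 * df["roa"] + 0.6 * df["camels_2"])
df["downgrade"] = (index + rng.normal(0, 1, n) > 0).astype(int)

fit = smf.probit(
    "downgrade ~ past_due_30 + past_due_90 + nonaccruing + oreo + commercial"
    " + residential + net_worth + roa + securities + jumbo_cds + size"
    " + camels_2 + management",
    data=df,
).fit()
# Fitted probabilities like these are what the article ranks banks by.
df["downgrade_prob"] = fit.predict(df)
print(fit.params.round(3))
```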

Table 4 How Common Is Migration to Unsatisfactory CAMELS Ratings? Evidence from 1992–2005

Year of Migration      Rating at           Banks with     Number Migrating to   Percentage Migrating   Total Downgrades
to 3, 4, or 5 Rating   Beginning of Year   1 & 2 Rating   3, 4, or 5 Rating     to 3, 4, or 5 Rating   to 3, 4, or 5 Rating
1992                   1                   1,959           22                   1.12                   425
                       2                   5,275          403                   7.64
1993                   1                   2,289            7                   0.31                   182
                       2                   5,978          175                   2.93
1994                   1                   2,919            9                   0.31                   162
                       2                   5,742          153                   2.66
1995                   1                   3,106            8                   0.26                   102
                       2                   4,905           94                   1.92
1996                   1                   3,295           10                   0.30                   127
                       2                   4,518          117                   2.59
1997                   1                   3,250            7                   0.22                   125
                       2                   3,744          118                   3.15
1998                   1                   3,027           19                   0.63                   154
                       2                   3,101          135                   4.35
1999                   1                   3,064           19                   0.62                   198
                       2                   3,041          179                   5.89
2000                   1                   2,843           12                   0.42                   195
                       2                   3,084          183                   5.93
2001                   1                   2,661           12                   0.45                   231
                       2                   3,153          219                   6.95
2002                   1                   2,449           11                   0.45                   230
                       2                   3,216          219                   6.81
2003                   1                   2,283           16                   0.70                   193
                       2                   3,101          177                   5.71
2004                   1                   2,111           10                   0.47                   124
                       2                   2,950          114                   3.86
2005                   1                   2,573           10                   0.39                    95
                       2                   3,650           85                   2.33

Notes: This table demonstrates that banks with satisfactory composite ratings (CAMELS 1 or 2) frequently migrate to unsatisfactory ratings (3, 4, or 5), thereby permitting yearly re-estimation of the CAMELS-downgrade model. The data also show that 2-rated banks are much more likely to migrate to unsatisfactory ratings than 1-rated banks.


Table 5 Selected Summary Statistics—Jumbo CD Data and Regressors for the CAMELS-Downgrade Model

                                                                       Standard
                 Variable                       Median     Mean        Deviation
Credit Risk      PAST DUE 30                     0.68       0.90        0.84
                 PAST DUE 90                     0.07       0.21        0.39
                 NONACCRUING                     0.19       0.38        0.56
                 OREO                            0.03       0.20        0.46
                 COMMERCIAL                      7.65       9.28        6.97
                 RESIDENTIAL                    14.26      15.91       10.68
Leverage Risk    NET WORTH                       8.84       9.79        4.74
                 ROA                             1.17       1.23        2.05
Liquidity Risk   SECURITIES                     28.65      30.41       14.87
                 JUMBO CDs                       8.00       9.33        6.56
Controls         SIZE                           11.07      11.21        1.29
                 CAMELS-2                        1.00       0.61        0.49
                 MANAGEMENT                      0.00       0.18        0.39
                 “Simple” Default Premium        0.47       0.42        2.83
                 “Complex” Default Premium       NA         NA          2.18
                 “Simple” Deposit Runoff         9.39      19.25       52.02
                 “Complex” Deposit Runoff        NA         NA         33.03

Notes: This table contains summary statistics for the independent variables used in the CAMELS-downgrade prediction model, computed over all year-end regression observations from 1989 to 2001. Summary statistics for the default premiums and deposit runoff series used in jumbo CD risk rankings are also provided for comparison. The “complex” measures of premium and runoff are regression residuals, so means and medians are not meaningful, but standard deviations are roughly in line with their “simple” counterparts. The correlation coefficients between the “simple” and “complex” measures are 88 percent for default premiums and 35 percent for the runoff.

Table 6 How Well Does the CAMELS-Downgrade Model Fit the Data?
Downgrade Years: 1990–2005

                                           Period of Downgrade in CAMELS Rating
Independent Variable        1990–1991           1991–1992           1992–1993           1993–1994
Intercept                   -2.087*** (0.246)   -0.957*** (0.264)   -0.081 (0.318)       0.048 (0.375)
Credit Risk
  PAST DUE 30                0.112** (0.021)     0.150*** (0.022)    0.136*** (0.026)    0.174*** (0.033)
  PAST DUE 90                0.376*** (0.039)    0.328*** (0.040)    0.239*** (0.047)    0.304*** (0.060)
  NONACCRUING                0.235*** (0.029)    0.199*** (0.030)    0.291*** (0.036)    0.178*** (0.045)
  OREO                       0.220*** (0.030)    0.216*** (0.032)    0.145*** (0.031)    0.167*** (0.043)
  COMMERCIAL                 0.009*** (0.003)    0.013*** (0.003)    0.009** (0.004)     0.002 (0.005)
  RESIDENTIAL               -0.005*** (0.002)   -0.004 (0.002)      -0.004 (0.003)      -0.005 (0.003)
Leverage Risk
  NET WORTH                 -0.054*** (0.010)   -0.048*** (0.011)   -0.073*** (0.013)   -0.074*** (0.013)
  ROA                       -0.241*** (0.035)   -0.318*** (0.039)   -0.200*** (0.043)   -0.263*** (0.051)
Liquidity Risk
  SECURITIES                -0.016*** (0.002)   -0.017*** (0.002)   -0.013*** (0.002)   -0.009*** (0.003)
  JUMBO CDs                  0.017*** (0.003)    0.019*** (0.003)    0.015*** (0.004)    0.017*** (0.005)
Controls
  SIZE                       0.079 (0.017)      -0.029 (0.019)      -0.125*** (0.024)   -0.147*** (0.030)
  CAMELS-2                   0.633*** (0.062)    0.517*** (0.068)    0.509*** (0.087)    0.432*** (0.102)
  MANAGEMENT                 0.488*** (0.051)    0.401*** (0.054)    0.478*** (0.061)    0.466*** (0.069)
Number of Observations       8,494               8,065               7,837               8,060
Pseudo-R2                    0.219               0.226               0.209               0.161

Table 6 (Continued) How Well Does the CAMELS-Downgrade Model Fit the Data?

                                           Period of Downgrade in CAMELS Rating
Independent Variable        1994–1995           1995–1996           1996–1997           1997–1998           1998–1999
Intercept                   -0.780* (0.402)     -0.011 (0.436)      -0.162 (0.415)      -1.371*** (0.388)   -1.603*** (0.352)
Credit Risk
  PAST DUE 30                0.119*** (0.035)    0.164*** (0.035)    0.093*** (0.029)    0.189*** (0.033)    0.186*** (0.030)
  PAST DUE 90                0.296*** (0.064)    0.322*** (0.074)    0.347*** (0.057)    0.399*** (0.064)    0.182*** (0.058)
  NONACCRUING                0.192*** (0.046)    0.145*** (0.051)    0.187*** (0.044)    0.157*** (0.046)    0.163*** (0.044)
  OREO                       0.192*** (0.044)    0.153*** (0.052)    0.156** (0.067)     0.091 (0.059)       0.118 (0.087)
  COMMERCIAL                 0.007 (0.005)       0.013*** (0.005)    0.005 (0.005)       0.010** (0.005)     0.015 (0.004)
  RESIDENTIAL               -0.002 (0.004)      -0.013*** (0.004)    0.000 (0.003)      -0.009*** (0.003)   -0.004 (0.003)
Leverage Risk
  NET WORTH                 -0.032** (0.014)    -0.034*** (0.013)   -0.020* (0.012)     -0.036*** (0.014)   -0.011 (0.010)
  ROA                       -0.229*** (0.052)   -0.164*** (0.038)   -0.110** (0.044)    -0.393*** (0.063)   -0.133 (0.040)
Liquidity Risk
  SECURITIES                -0.002 (0.003)      -0.010*** (0.003)   -0.011*** (0.003)   -0.015*** (0.003)   -0.007** (0.003)
  JUMBO CDs                  0.024*** (0.005)    0.020*** (0.005)    0.019*** (0.004)    0.023*** (0.005)    0.008* (0.004)
Controls
  SIZE                      -0.150*** (0.033)   -0.202*** (0.035)   -0.101*** (0.030)   -0.150*** (0.032)   -0.071*** (0.027)
  CAMELS-2                   0.594*** (0.104)    0.589*** (0.103)    0.760*** (0.093)    0.501*** (0.099)    0.716*** (0.078)
  MANAGEMENT                 0.389*** (0.075)    0.510*** (0.078)    0.535*** (0.081)    0.406*** (0.083)    0.518*** (0.077)
Number of Observations       8,665               8,682               8,585               8,314               7,818
Pseudo-R2                    0.150               0.188               0.223               0.184               0.166

Table 6 (Continued) How Well Does the CAMELS-Downgrade Model Fit the Data?

                                           Period of Downgrade in CAMELS Rating
Independent Variable        1999–2000           2000–2001           2001–2002           2002–2003
Intercept                   -1.118*** (0.360)   -1.061*** (0.358)   -1.788*** (0.342)   -1.528*** (0.342)
Credit Risk
  PAST DUE 30                0.169*** (0.029)    0.184*** (0.032)    0.170*** (0.026)    0.171*** (0.027)
  PAST DUE 90                0.217*** (0.055)    0.417*** (0.059)    0.321*** (0.059)    0.147** (0.061)
  NONACCRUING                0.227*** (0.044)    0.165*** (0.045)    0.250*** (0.042)    0.285*** (0.041)
  OREO                       0.117 (0.076)       0.157* (0.087)      0.175* (0.090)      0.082 (0.073)
  COMMERCIAL                 0.013*** (0.004)    0.013*** (0.004)    0.010*** (0.004)    0.004 (0.004)
  RESIDENTIAL               -0.002 (0.003)       0.002 (0.003)       0.001 (0.003)      -0.005 (0.003)
Leverage Risk
  NET WORTH                 -0.044*** (0.011)   -0.036*** (0.010)   -0.008 (0.009)      -0.025** (0.010)
  ROA                       -0.199*** (0.046)   -0.254*** (0.043)   -0.135*** (0.039)   -0.134*** (0.037)
Liquidity Risk
  SECURITIES                -0.002 (0.003)      -0.005** (0.003)    -0.010*** (0.003)   -0.005* (0.002)
  JUMBO CDs                  0.015*** (0.004)    0.017*** (0.004)    0.023*** (0.004)    0.015*** (0.004)
Controls
  SIZE                      -0.099*** (0.029)   -0.106*** (0.028)   -0.069*** (0.026)   -0.065** (0.026)
  CAMELS-2                   0.780*** (0.079)    0.799*** (0.081)    0.878*** (0.083)    0.797*** (0.083)
  MANAGEMENT                 0.564*** (0.080)    0.538*** (0.084)    0.338*** (0.092)    0.491*** (0.088)
Number of Observations       7,341               6,968               6,582               6,367
Pseudo-R2                    0.190               0.206               0.210               0.184

Note: See footnote 24.

Table 7 Do Jumbo CD Default Premiums or Runoff Add Value in Bank Surveillance?
Full-Sample, Two-Year Horizon

Columns: (1) out-of-sample test window; (2) CAMELS-downgrade model; (3) simple default premiums; (4) complex default premiums; (5) simple runoff; (6) complex runoff; (7) simple premium + runoff model; (8) complex premium + runoff model; (9) downgrade model + simple premiums/runoff; (10) downgrade model + complex premiums/runoff.

(1)            (2)      (3)      (4)      (5)      (6)      (7)      (8)      (9)      (10)
1992–1993      20.20    47.56    52.20    49.54    53.08    50.51    52.88    21.12    21.67
1993–1994      21.81    47.10    47.48    47.22    48.62    48.31    49.30    23.22    21.07
1994–1995      22.39    50.57    49.57    45.06    50.90    46.18    49.55    22.67    19.94
1995–1996      17.51    43.50    46.81    47.42    48.49    45.52    46.48    18.64    18.87
1996–1997      15.24    46.93    49.01    46.31    51.03    46.39    48.79    18.38    16.16
1997–1998      19.24    46.34    48.12    45.01    49.33    45.22    48.92    19.72    20.59
1998–1999      21.39    47.27    45.69    47.20    45.75    47.24    45.70    21.87    21.45
1999–2000      19.55    45.62    48.83    43.56    48.94    43.41    48.64    19.33    19.19
2000–2001      18.77    44.31    51.78    46.10    52.03    45.38    52.13    20.11    18.47
2001–2002      18.92    43.82    51.78    44.93    51.85    44.40    51.90    19.64    20.23
2002–2003      19.46    45.35    51.61    46.27    51.56    44.34    51.45    19.56    19.36
2003–2004      20.36    41.56    51.30    46.12    52.54    43.02    52.30    21.18    20.51
2004–2005      20.76    43.29    51.89    44.85    52.00    42.18    52.59    20.77    21.14
All Years      19.66    45.63    49.70    46.12    50.47    45.55    50.05    20.48    19.95

Notes: This table summarizes evidence about the surveillance value of jumbo CD data. Each cell in columns 2 through 10 contains the area under the power curve for a specific risk-ranking produced by a specific surveillance tool over a specific test window. Smaller areas imply lower Type 1 and Type 2 error rates and, thus, better performance. Column 2 contains areas for downgrade probability rankings to benchmark current practices. Columns 3 through 10 contain rankings based on various uses of jumbo CD default premiums and runoff. The evidence suggests the data would have added no value in surveillance between 1992 and 2005. Risk rankings produced by the CAMELS-downgrade model (column 2) performed considerably better than random rankings (average power curve area of 50 percent). But, rankings based on default premiums or runoff (columns 3 through 6) as well as rankings based on both series (columns 7 and 8) barely outperformed random rankings. Finally, default premiums and runoff (columns 9 and 10) did not improve out-of-sample performance of the CAMELS-downgrade model.

Table 8 Do Jumbo CD Default Premiums or Runoff Enrich the CAMELS-Downgrade Model?

Panel A: Percentage Change in Power Curve Area

Out-of-Sample    Default Premiums   Leverage Risk    Credit Risk      Liquidity Risk   Control
Window (1)       and Runoff (2)     Variables (3)    Variables (4)    Variables (5)    Variables (6)
1992–1993        -4.36               8.81            13.92             8.00             4.36
1993–1994         0.64              -1.21            13.24             6.16            12.91
1994–1995        -0.44               3.93            10.48             0.75            19.36
1995–1996        -1.18               4.50            14.36             6.98            27.70
1996–1997         0.13              -0.39            24.65             5.87            16.90
1997–1998        -1.78               1.58             9.62             4.89            22.24
1998–1999         0.19              -0.28             9.93             0.05            16.82
1999–2000         1.98               3.43            16.44            -1.30            20.40
2000–2001         0.43               4.89            18.28             1.12            19.08
2001–2002        -0.94               1.15            18.47             2.73            13.59
2002–2003         0.15               6.52            16.22             1.28            14.12
2003–2004         1.15               5.64            22.18             1.90            12.64
2004–2005         0.63               5.79            17.33             1.36            19.62
Mean             -0.26               5.20            15.78             3.06            16.90

Panel B: Percentage Change in QPS

Out-of-Sample    Default Premiums   Leverage Risk    Credit Risk      Liquidity Risk   Control
Window (1)       and Runoff (2)     Variables (3)    Variables (4)    Variables (5)    Variables (6)
1992–1993        -1.06               3.65             6.25             4.55            -1.70
1993–1994         0.43               6.45             9.31             1.43            -0.72
1994–1995         0.20               3.59             5.39             0.60             1.20
1995–1996         0.00               0.23             3.85             0.68             2.49
1996–1997        -0.20               1.43             3.46             0.61             1.02
1997–1998        -0.17               0.86             2.06             1.03             3.26
1998–1999         0.00               0.40             2.53             0.40             2.39
1999–2000         0.00               0.91             2.62             0.57             2.50
2000–2001        -0.21               0.84             3.45             0.31             2.72
2001–2002         0.66               1.98             5.84             0.85             2.64
2002–2003        -0.38               2.67             2.96             0.29             2.29
2003–2004         0.72               0.96             5.97            -1.55             0.72
2004–2005         0.82               2.45             2.45            -0.49             1.96
Mean              0.06               2.03             4.32             0.71             1.60

Notes: This table provides alternative measures of the contribution of simple default premiums and runoff to the CAMELS-downgrade model. Column 2 of Panel A shows the impact on power curve areas of removing the two jumbo CD series from the enhanced downgrade model (baseline model plus premiums and runoff). Column 2 of Panel B notes the impact of removing these series on the quadratic probability score (QPS). Changes in the QPS and power curve areas are expressed in percentage-change terms to permit direct comparisons. Positive percentage changes for QPS or power curve areas imply that removing the variable block weakens model performance. To facilitate interpretation of changes, columns 3 through 6 show the impact of removing other variable blocks from the CAMELS-downgrade model, such as the control variables (asset size, dummy for a composite CAMELS rating of 2, and dummy for a management rating weaker than the composite rating). The evidence suggests default premiums and runoff add nothing to the CAMELS-downgrade model.


REFERENCES
Altman, Edward I., and Anthony Saunders. 1997. “Credit Risk
Measurement: Developments Over the Last Twenty Years.” Journal of
Banking and Finance 21: 1721–42.
Baer, Herbert, and Elijah Brewer III. 1986. “Uninsured Deposits as a Source
of Market Discipline: Some New Evidence.” Federal Reserve Bank of
Chicago Economic Perspectives (September/October): 23–31.
Barth, Mary E., William H. Beaver, and Wayne R. Landsman. 1996.
“Value-Relevance of Banks’ Fair Value Disclosures Under SFAS No.
107.” The Accounting Review 71: 513–37.
Benston, George J., and George G. Kaufman. 1998. “Deposit Insurance
Reform in the FDIC Improvement Act: The Experience to Date.” The
Federal Reserve Bank of Chicago Economic Perspectives (2): 2–20.
Berger, Allen N. 1995. “The Relationship Between Capital and Earnings in
Banking.” Journal of Money, Credit, and Banking 27: 432–56.
, and Sally M. Davies. 1998. “The Information Content of
Bank Examinations.” Journal of Financial Services Research 14:
117–44.
Berger, Allen N., Seth D. Bonime, Daniel M. Covitz, and Diana Hancock.
2000. “Why are Bank Profits So Persistent? The Roles of Product Market
Competition, Informational Opacity, and Regional/Macroeconomic
Shocks.” Journal of Banking and Finance 24: 1203–35.
Billett, Matthew T., Jon A. Garfinkel, and Edward S. O’Neal. 1998. “The
Cost of Market Versus Regulatory Discipline in Banking.” Journal of
Financial Economics 48: 333–58.
Birchler, Urs W., and Andrea M. Maechler. 2002. “Do Depositors Discipline
Swiss Banks?” In Research in Financial Services: Private and Public
Policy 14, ed. George G. Kaufman. Oxford, U.K.: Elsevier Science.
Bliss, Robert R., and Mark J. Flannery. 2001. “Market Discipline in the
Governance of U.S. Bank Holding Companies: Monitoring Versus
Influence.” In Prudential Supervision: What Works and What Doesn’t,
ed. Frederic Mishkin. Chicago: University of Chicago Press.
Board of Governors of the Federal Reserve System, and the United States
Department of the Treasury. 2000. The Feasibility and Desirability of
Mandatory Subordinated Debt (December).
Board of Governors of the Federal Reserve System. 1999. “Using
Subordinated Debt as an Instrument of Market Discipline.” Staff Study
172 (December).
Brewer, Elijah III, and Thomas H. Mondschean. 1994. “An Empirical Test of
the Incentive Effects of Deposit Insurance: The Case of Junk Bonds at
Savings and Loan Associations.” Journal of Money, Credit, and Banking
26: 146–64.
Calomiris, Charles W. 1999. “Building an Incentive-Compatible Safety Net.”
Journal of Banking and Finance 23: 1499–1519.
, and Joseph R. Mason. 1997. “Contagion and Bank Failures
during the Great Depression: The 1932 Chicago Bank Panic.” American
Economic Review 87: 863–83.
Cargill, Thomas F. 1989. “CAMEL Ratings and the CD Market.” Journal of
Financial Services Research 3: 347–58.
Cole, Rebel A., and Jeffrey W. Gunther. 1998. “Predicting Bank Failures: A
Comparison of On- and Off-Site Monitoring Systems.” Journal of
Financial Services Research 13: 103–17.
Cole, Rebel A., Barbara G. Cornyn, and Jeffrey W. Gunther. 1995. “FIMS: A
New Monitoring System for Banking Institutions.” Federal Reserve
Bulletin 81: 1–15.
Cook, Douglas O., and Lewis J. Spellman. 1994. “Repudiation Risk and
Restitution Costs: Toward an Understanding of Premiums on Insured
Deposits.” Journal of Money, Credit, and Banking 26 (August): 439–59.
Covitz, Daniel M., Diana Hancock, and Myron L. Kwast. 2002. “Market
Discipline in Banking Reconsidered: The Roles of Deposit Insurance
Reform, Funding Manager Decisions and Bond Market Liquidity.”
Board of Governors Working Paper 2002-46 (October).
Crabbe, Leland, and Mitchell A. Post. 1994. “The Effect of a Rating
Downgrade on Outstanding Commercial Paper.” Journal of Finance 49:
39–56.
Crane, Dwight B. 1976. “A Study of Interest Rate Spreads in the 1974 CD
Market.” Journal of Bank Research 7: 213–24.
Curry, Timothy J., Peter J. Elmer, and Gary S. Fissel. 2003. “Using Market
Information to Help Identify Distressed Institutions: A Regulatory
Perspective.” Federal Deposit Insurance Corporation Banking Review 15
(3): 1–16.
Davison, Lee. 1997. “Continental Illinois and ‘Too Big to Fail.’ ” In History
of the Eighties: Lessons for the Future, vol. 1. Federal Deposit Insurance
Corporation: Washington DC.
Demsetz, Rebecca S., and Philip E. Strahan. 1997. “Diversification, Size, and
Risk at Bank Holding Companies.” Journal of Money, Credit, and
Banking 29: 300–13.
DeYoung, Robert. 1999. “Birth, Growth, and Life or Death of Newly
Chartered Banks.” Federal Reserve Bank of Chicago Economic
Perspectives (3): 18–35.
Ellis, David M., and Mark J. Flannery. 1992. “Does the Debt Market Assess
Large Banks’ Risk? Time Series Evidence from Money Center CDs.”
Journal of Monetary Economics 30: 481–502.
Estrella, Arturo. 1998. “A New Measure of Fit for Equations with
Dichotomous Dependent Variables.” Journal of Business Economics and
Statistics 16: 198–205.
, and Frederic S. Mishkin. 1998. “Predicting U.S.
Recessions: Financial Variables as Leading Indicators.” Review of
Economics and Statistics 80: 45–61.
Evanoff, Douglas D., and Larry D. Wall. 2001. “Sub-debt Yield Spreads as
Bank Risk Measures.” Journal of Financial Services Research 20:
121–45.
Feldman, Ron, and Jason Schmidt. 2001. “Increased Use of Uninsured
Deposits.” Federal Reserve Bank of Minneapolis Fedgazette (March):
18–19.
Flannery, Mark J. 1982. “Retail Bank Deposits as Quasi-Fixed Factors of
Production.” American Economic Review 72: 527–36.
. 2001. “The Faces of ‘Market Discipline.’ ” Journal of
Financial Services Research 20: 107–19.
, and Joel F. Houston. 1999. “The Value of a Government
Monitor for U.S. Banking Firms.” Journal of Money, Credit, and
Banking 31: 14–34.
Flannery, Mark J., and Kasturi Rangan. 2002. “Market Forces at Work in the
Banking Industry: Evidence from the Capital Buildup of the 1990s.”
University of Florida Working Paper (September).
Friedman, Milton, and Anna Jacobson Schwartz. 1963. A Monetary History
of the United States, 1867–1960. Princeton: Princeton University Press.
Gilbert, R. Alton, and Mark D. Vaughan. 2001. “Do Depositors Care About
Enforcement Actions?” Journal of Economics and Business 53:
283–311.
Gilbert, R. Alton, Andrew P. Meyer, and Mark D. Vaughan. 1999. “The Role
of Supervisory Screens and Econometric Models in Off-Site
Surveillance.” Federal Reserve Bank of St. Louis Review
(November/December): 31–56.
. 2002. “Could a CAMELS-Downgrade Model Improve
Off-Site Surveillance?” Federal Reserve Bank of St. Louis Review
(January/February): 47–63.
Goldberg, Lawrence G., and Sylvia C. Hudgins. 2002. “Depositor Discipline
and Changing Strategies for Regulating Thrift Institutions.” Journal of
Financial Economics 63: 263–74.
Goldberg, Michael A., and Peter R. Lloyd-Davies. 1985. “Standby Letters of
Credit: Are Banks Overextending Themselves?” Journal of Bank
Research 16 (Spring): 29–39.
Gorton, Gary. 1996. “Reputation Formation in Early Bank Note Markets.”
Journal of Political Economy 104: 346–97.
Gunther, Jeffery W., and Robert R. Moore. 2000. “Financial Statements and
Reality: Do Troubled Banks Tell All?” Federal Reserve Bank of Dallas
Economic and Financial Review (3): 30–5.
Gunther, Jeffery W., Mark E. Levonian, and Robert R. Moore. 2001. “Can
the Stock Market Tell Bank Supervisors Anything They Don’t Already
Know?” Federal Reserve Bank of Dallas Economic and Financial
Review (2): 2–9.
Hall, John R., Thomas B. King, Andrew P. Meyer, and Mark D. Vaughan.
2005. “Did FDICIA Improve Market Discipline? A Look at Evidence
from the Jumbo CD Market.” Federal Reserve Bank of St. Louis
Supervisory Policy Analysis Working Paper (April).
Hannan, Timothy, and Gerald A. Hanweck. 1988. “Bank Insolvency Risk
and the Market for Large Certificates of Deposit.” Journal of Money,
Credit, and Banking 20: 203–211.
James, Christopher. 1988. “The Use of Loan Sales and Standby Letters of
Credit by Commercial Banks.” Journal of Monetary Economics 22:
395–422.
. 1990. “Heterogeneous Creditors and LDC Lending.”
Journal of Monetary Economics 25: 325–46.
Jones, David S., and Kathleen K. King. 1995. “The Implementation of
Prompt Corrective Action.” Journal of Banking and Finance 19:
491–510.
Jordan, John S. 2000. “Depositor Discipline at Failing Banks.” Federal
Reserve Bank of Boston New England Economic Review (March/April):
15–28.
Kahn, Charles, George Pennacchi, and Ben Sopranzetti. 1999. “Bank
Deposit Rate Clustering: Theory and Empirical Evidence.” Journal of
Finance 56: 2185–214.
Keeley, Michael C. 1990. “Deposit Insurance, Risk, and Market Power in
Banking.” American Economic Review 80: 1183–98.
Krainer, John, and Jose A. Lopez. Forthcoming. “Incorporating Equity
Market Information into Supervisory Monitoring Models.” Journal of
Money, Credit, and Banking.
Kroszner, Randall S., and Philip E. Strahan. 2001. “Obstacles to Optimal
Policy: The Interplay of Politics and Economics in Shaping Bank
Supervision and Regulation Reforms.” In Prudential Supervision: What
Works and What Doesn’t, ed. Frederic Mishkin. Chicago: University of
Chicago Press.
Lang, William W., and Douglas Robertson. 2002. “Analysis of Proposals for
a Minimum Subordinated Debt Requirement.” Journal of Economics
and Business 54: 115–36.
Lynch, Peter, and John Rothchild. 2000. One Up on Wall Street: How to Use
What You Already Know to Make Money in the Market. New York:
Simon & Schuster.
Maechler, Andrea M., and Kathleen M. McDill. 2003. “Dynamic Depositor
Discipline in U.S. Banks.” Federal Deposit Insurance Corporation
Working Paper, 2003.
Malkiel, Burton G. 2003. “The Efficient Market Hypothesis and Its Critics.”
Journal of Economic Perspectives 17 (1): 59–82.
Marino, James A., and Rosalind Bennett. 1999. “The Consequences of
National Depositor Preference.” Federal Deposit Insurance Corporation
Banking Review 12: 19–38.
Martinez-Peria, Maria Soledad, and Sergio L. Schmukler. 2001. “Do
Depositors Punish Banks for Bad Behavior? Market Discipline, Deposit
Insurance, and Banking Crises.” Journal of Finance 56: 1029–51.
Meyer, Laurence H. 2001. “Supervising Large Complex Banking
Organizations: Adapting to Change.” In Prudential Supervision: What
Works and What Doesn’t, ed. Frederic Mishkin. Chicago: University of
Chicago Press.
Morgan, Donald P., and Kevin J. Stiroh. 2001. “Market Discipline of Banks:
The Asset Test.” Journal of Financial Services Research 20: 195–208.
Park, Sangkyun. 1995. “Market Discipline by Depositors: Evidence from
Reduced-Form Equations.” Quarterly Review of Economics and Finance
35: 497–514.
, and Stavros Peristiani. 1998. “Market Discipline by Thrift
Depositors.” Journal of Money, Credit, and Banking 30: 347–64.
Peek, Joe, and Eric S. Rosengren. 1997. “Will Legislated Early Intervention
Prevent the Next Banking Crisis?” Southern Economic Journal 64:
268–280.
Putnam, Barron H. 1983. “Early Warning Systems and Financial Analysis in
Bank Monitoring: Concepts of Financial Monitoring.” Federal Reserve
Bank of Atlanta Economic Review (November): 6–13.
Rajan, Raghuram G. 2001. “Comment on Bliss and Flannery.” In Prudential
Supervision: What Works and What Doesn’t, ed. Frederic Mishkin.
Chicago: University of Chicago Press.
Reidhill, Jack, and John O’Keefe. 1997. “Off-Site Surveillance Systems.” In
History of the Eighties: Lessons for the Future, vol. 1. Federal Deposit
Insurance Corporation: Washington D.C.
Roll, Richard. 1994. “What Every CFO Should Know about Scientific
Progress in Financial Economics: What is Known, and What Remains to
be Resolved.” Financial Management 23 (2): 69–75.
Sierra, Gregory E., and Timothy J. Yeager. 2004. “What Does the Federal
Reserve’s Economic Value Model Tell Us about Interest Rate Risk at
U.S. Community Banks?” Federal Reserve Bank of St. Louis Review
(November/December): 45–60.

